CN114998912A - Data extraction method and device, electronic equipment and storage medium - Google Patents

Data extraction method and device, electronic equipment and storage medium

Info

Publication number
CN114998912A
CN114998912A
Authority
CN
China
Prior art keywords
image, identification, data, extracted, value corresponding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210589180.6A
Other languages
Chinese (zh)
Inventor
徐帅
刘勇成
胡志鹏
袁思思
程龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210589180.6A priority Critical patent/CN114998912A/en
Publication of CN114998912A publication Critical patent/CN114998912A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a data extraction method, a data extraction device, an electronic device, and a storage medium. An image to be extracted is input into a first fitting model to obtain the number of key data in the image; the image and that number are then input into a second fitting model, which outputs a proportion vector for the image. A real value corresponding to each key data item in the image is then calculated from the proportion vector and the real value of at least one given key data item, completing the data extraction. Through the first fitting model and the second fitting model, the image to be extracted can be analyzed accurately, improving the accuracy of data processing while effectively reducing labor and time costs; the method is applicable to most image data and has a wide application range.

Description

Data extraction method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a data extraction method and apparatus, an electronic device, and a storage medium.
Background
At present, some chart data in networks or printed publications are not labeled with specific numerical information, or part of that numerical information is missing, so technicians must manually measure the chart or estimate its data from the general trend it presents. This consumes a large amount of labor and time, and accurate data are difficult to obtain. A fast, low-error method for extracting chart data is therefore needed.
Disclosure of Invention
In view of the above, the present application provides a data extraction method, an apparatus, an electronic device, and a storage medium.
In a first aspect of the present application, a data extraction method is provided, including:
inputting an image to be extracted into a first fitting model, and outputting the number of data identifications of the image to be extracted through the first fitting model; the image to be extracted comprises a first identification value corresponding to the first data identification and a second data identification corresponding to the second identification value to be identified; the data identification represents an identification numerical value in the image to be extracted;
inputting the image to be extracted and the number of the data identifications to a second fitting model, and outputting the relative numerical relationship of the image to be extracted through the second fitting model; the relative numerical relationship represents a comparison relationship between the second identification numerical value corresponding to the second data identification and the first identification numerical value corresponding to the first data identification;
determining a second identification value corresponding to the second data identification in the image to be extracted according to a first identification value corresponding to the first data identification and the relative value relationship,
wherein the first fitted model and the second fitted model are both pre-trained.
Optionally, the image to be extracted includes a to-be-extracted columnar image, the data identifier includes a columnar pattern, and the to-be-extracted columnar image includes a first identifier value corresponding to the first columnar pattern and a second columnar pattern to be recognized and corresponding to the second identifier value;
the relative numerical relationship represents a comparison relationship between the second identification numerical value corresponding to the second cylindrical pattern and the first identification numerical value corresponding to the first cylindrical pattern.
Optionally, the image to be extracted includes a broken line image to be extracted, the data identifier includes a broken line dot pattern, and the broken line image to be extracted includes a first identifier value corresponding to the first broken line dot pattern and a second broken line dot pattern to be identified and corresponding to the second identifier value;
the relative numerical relationship represents the comparison relationship between the second identification numerical value corresponding to the second broken line point pattern and the first identification numerical value corresponding to the first broken line point pattern.
Optionally, the pre-training of the first fitting model includes:
acquiring a first image training set; each image in the first image training set comprises a first identification numerical value corresponding to the first data identification and a second data identification corresponding to the second identification numerical value to be identified; wherein the data identification characterizes an identification value in the image;
labeling the first image training set to obtain the number of the data identifications corresponding to each image in the first image training set;
and pre-training the first fitting model through the labeled first image training set until an iterative training termination condition is met, and obtaining the pre-trained first fitting model.
Optionally, the pre-training of the second fitting model includes:
acquiring a second image training set; each image in the second image training set comprises a first identification numerical value corresponding to the first data identification and a second data identification corresponding to the second identification numerical value to be identified; wherein the data identification characterizes an identification value in the image;
inputting the second image training set into the first fitting model to obtain the number of data identifications corresponding to each image in the second image training set;
segmenting each image in the second image training set according to the number of the data identifications to obtain a plurality of subgraphs corresponding to each image, wherein each subgraph comprises one data identification;
determining a relative numerical relationship between identification numerical values corresponding to data identifications of different subgraphs;
and pre-training the second fitting model based on the second image training set, the number of the data identifications and the relative numerical relationship until an iterative training termination condition is met, and obtaining the pre-trained second fitting model.
Optionally, the determining a relative numerical relationship between identification numerical values corresponding to data identifications of different sub-graphs includes:
calculating to obtain an identification numerical value corresponding to the data identification in each sub-image based on image pixels;
and normalizing the identification numerical values corresponding to all the sub-images in each image to obtain the relative numerical value relationship.
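The normalization step above can be sketched as follows; the text does not fix a particular normalization, so dividing every value by the maximum (making the largest identification map to 1.0) is an illustrative assumption:

```python
def relative_relationship(pixel_values):
    """Normalize the pixel-derived identification values of all
    sub-images of one image into a relative numerical relationship.

    Dividing by the maximum is one plausible choice: the result is
    a ratio vector comparing every data identification against the
    largest one.
    """
    peak = max(pixel_values)
    return [v / peak for v in pixel_values]
```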
Optionally, the image in the second image training set includes a polygonal-line image, the data identification includes a polyline point in the polygonal-line image, and the calculating, based on image pixels, of the identification value corresponding to the data identification in each sub-image includes:
establishing a pixel coordinate system of the broken line image;
and determining the coordinate value of the broken line point in the pixel coordinate system, in the direction representing the change of the identification value, as the identification value corresponding to the broken line point.
Optionally, the image in the second image training set includes a cylindrical image, the data identification includes a cylindrical pattern in the cylindrical image, and the calculating, based on image pixels, of the identification value corresponding to the data identification in each sub-image includes:
establishing a pixel coordinate system of the cylindrical image;
determining a pixel area occupied by the cylindrical pattern;
and taking the maximum coordinate value of the pixel region in the pixel coordinate system, in the direction representing the change of the identification numerical value, as the identification numerical value corresponding to the cylindrical pattern.
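A minimal sketch of the pixel-based calculation for a cylindrical pattern, assuming a boolean mask marking the bar's pixels and an axis row index (both the mask representation and the `origin_row` parameter are illustrative, not from the text). Because image row indices grow downward, the maximum coordinate in the value-change direction corresponds to the bar's smallest row index:

```python
def bar_pixel_value(mask, origin_row):
    """Identification value of a bar pattern, in pixel units.

    mask[r][c] is True where the pixel belongs to the bar; the value
    is the extent from the axis origin row up to the bar's topmost
    (smallest-index) occupied row.
    """
    rows_with_bar = [r for r, row in enumerate(mask) if any(row)]
    top = min(rows_with_bar)   # visually highest pixel of the bar
    return origin_row - top
```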
Optionally, the relative numerical relationship includes a proportional relationship, and the proportional relationship represents a ratio relationship between the second identification numerical value corresponding to the second data identification and the first identification numerical value corresponding to the first data identification;
and in the direction representing the change of the identification numerical value, equal image intervals represent equal numerical changes.
Optionally, the first fitting model and/or the second fitting model is a machine learning model.
In a second aspect of the present application, there is provided a data extraction apparatus, including:
the first fitting module is configured to input the image to be extracted into a first fitting model and output the number of data identifications of the image to be extracted through the first fitting model; the image to be extracted comprises a first identification value corresponding to the first data identification and a second data identification corresponding to the second identification value to be identified; the data identification represents an identification numerical value in the image to be extracted;
the second fitting module is configured to input the image to be extracted and the number of the data identifications into a second fitting model and output the relative numerical relationship of the image to be extracted through the second fitting model; the relative numerical relationship represents a comparison relationship between the second identification numerical value corresponding to the second data identification and the first identification numerical value corresponding to the first data identification;
an extraction module configured to determine the second identification value corresponding to the second data identification in the image to be extracted according to a first identification value corresponding to the first data identification and the relative numerical relationship,
wherein the first fitted model and the second fitted model are both pre-trained.
In a third aspect of the application, an electronic device is provided, comprising a memory, a processor and a computer program stored on the memory and executable by the processor, the processor implementing the method as described above when executing the computer program.
In a fourth aspect of the present application, there is provided a computer-readable storage medium storing computer instructions for causing a computer to perform the method as described above.
As can be seen from the above, according to the data extraction method, device, electronic device, and storage medium provided by the present application, the number of data identifications in the image to be extracted is obtained by inputting the image into the first fitting model; the image and that number are then input into the second fitting model, which outputs the relative numerical relationship of the image. A second identification value corresponding to each second data identification to be identified is then calculated from the first identification value corresponding to the first data identification contained in the image and the relative numerical relationship, completing the data extraction of the image to be extracted. Through the first fitting model and the second fitting model, the image to be extracted can be analyzed accurately, improving the accuracy of data extraction, avoiding manual measurement, and effectively reducing labor and time costs.
Drawings
In order to more clearly illustrate the technical solutions in the present application or in the related art, the drawings needed in the description of the embodiments or the related art are briefly introduced below. It is apparent that the drawings described below show only embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of a data extraction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a cylindrical image of an embodiment of the present application;
FIG. 3 is a schematic view of a broken line image of one embodiment of the present application;
FIG. 4 is a schematic flow diagram of pre-training of a first fitting model according to an embodiment of the present application;
FIG. 5 is a schematic flow diagram of pre-training of a second fitting model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a data extraction device according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings in combination with specific embodiments.
It should be noted that technical terms or scientific terms used in the embodiments of the present application should have a general meaning as understood by those having ordinary skill in the art to which the present application belongs, unless otherwise defined. The use of "first," "second," and similar terms in the embodiments of the present application is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
The application provides a data extraction method, and fig. 1 shows a flow diagram of the data extraction method according to an embodiment of the application. As shown in fig. 1, the method comprises the following steps:
step 102: inputting an image to be extracted into a first fitting model, and outputting the number of data identifications of the image to be extracted through the first fitting model; the image to be extracted comprises a first identification value corresponding to the first data identification and a second data identification corresponding to the second identification value to be identified; and the data identification represents an identification numerical value in the image to be extracted.
Specifically, the image to be extracted may be obtained from the Internet, or from paper materials such as periodicals, newspapers, and magazines. When obtained from the Internet, the image to be extracted can be downloaded and stored directly; when obtained from paper materials, the paper image can be converted into a digital-format image to be extracted by scanning, photographing, or similar means. This embodiment does not specifically limit the acquisition mode of the image to be extracted or the corresponding digital conversion mode. The image to be extracted comprises a first data identification and a second data identification, wherein the first identification value corresponding to the first data identification is known, and the second identification value corresponding to the second data identification is unknown and needs to be identified through the data extraction method. The data identification is used for representing an identification value, the identification value being the real numerical value of the data identification. It is understood that there may be one or more first data identifications with corresponding first identification values, and/or one or more second data identifications with corresponding second identification values.
And inputting the image to be extracted into a pre-trained first fitting model, and identifying the image to be extracted through the first fitting model, so that the number of data identifications in the image to be extracted can be accurately calculated. That is, the pre-trained first fitting model is used to extract data identifiers in a picture, and determine the number of extracted data identifiers.
Step 104: inputting the image to be extracted and the number of the data identifications into a second fitting model, and outputting the relative numerical relationship of the image to be extracted through the second fitting model; the relative numerical relationship represents a comparison relationship between the second identification numerical value corresponding to the second data identification and the first identification numerical value corresponding to the first data identification.
Specifically, after the number of the data identifiers of the image to be extracted is obtained in step 102, the number of the data identifiers and the image to be extracted are input into a pre-trained second fitting model, and the data in the image to be extracted is analyzed and calculated through the second fitting model and the relative numerical relationship of the image to be extracted is output. The relative numerical relationship can reflect a comparison relationship between the first identification numerical value and the second identification numerical value, so that the second identification numerical value corresponding to each unknown second data identification can be identified subsequently.
Step 106: and determining a second identification value corresponding to the second data identification in the image to be extracted according to a first identification value corresponding to the first data identification and the relative value relationship.
Specifically, based on the relative numerical relationship obtained in step 104 and the first identification value corresponding to the first data identification, the second identification value corresponding to each second data identification is determined through conversion of the relative numerical relationship, completing the data extraction of the image to be extracted.
In summary, based on the above steps 102 to 106, the extraction of the second identification value corresponding to each second data identification in the image to be extracted is completed. Through the first fitting model and the second fitting model, the image to be extracted can be analyzed accurately, improving the accuracy of data extraction while saving a large amount of labor and time. Provided that the first identification value corresponding to the first data identification is known, the data extraction method of this embodiment can obtain, through the relative numerical relationship, the unknown second identification values corresponding to all second data identifications in the image, and is suited to processing data images in which some second identification values are missing or illegible. The method is therefore applicable to most image data, quickly provides technicians with reasonably accurate values contained in the image to be extracted, and reduces the high labor and time costs of manual measurement.
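Under the hypothetical names `count_model` and `ratio_model` for the two pre-trained fitting models (the names and callable signatures are illustrative assumptions, not from the text), steps 102 to 106 can be sketched end to end as:

```python
def extract_values(image, known_index, known_value, count_model, ratio_model):
    """Sketch of the two-model extraction pipeline.

    count_model : image -> number of data identifications (step 102)
    ratio_model : (image, n) -> relative numerical relationship (step 104)
    """
    n = count_model(image)
    ratios = ratio_model(image, n)
    # Step 106: one known identification value fixes the scale, and
    # the ratio vector then yields every unknown identification value.
    scale = known_value / ratios[known_index]
    return [r * scale for r in ratios]
```

For a bar chart with four bars and one known value, stub models standing in for the fitting models are enough to exercise the conversion.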
In some embodiments, the image to be extracted includes a cylindrical image to be extracted, the data identifier includes a cylindrical pattern, and the cylindrical image to be extracted includes a first identifier value corresponding to the first cylindrical pattern and a second cylindrical pattern to be recognized and corresponding to a second identifier value;
the relative numerical relationship represents a comparison relationship between the second identification numerical value corresponding to the second cylindrical pattern and the first identification numerical value corresponding to the first cylindrical pattern.
Specifically, fig. 2 is a schematic diagram of a cylindrical image to be extracted according to an embodiment. As shown in fig. 2, the data identifications are the cylindrical patterns in the cylindrical image, which contains 4 cylindrical patterns in total. The first cylindrical pattern is the cylindrical pattern whose X-axis coordinate value is C, and its corresponding first identification value is 960. The second cylindrical patterns are the cylindrical patterns whose X-axis coordinate values are A, B, and D, and the second identification values corresponding to these three cylindrical patterns are unknown. As can be seen from fig. 2, the four cylindrical patterns A, B, C, and D have different heights, and the relative numerical relationship can represent the comparison relationship between the four heights.
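With illustrative ratios for the four bars (only C's true value, 960, appears in fig. 2; the other numbers below are assumptions made for the sake of the arithmetic), the conversion from relative numerical relationship to identification values reduces to one multiplication per bar:

```python
# Hypothetical relative numerical relationship for bars A, B, C, D;
# C is the first data identification, with known value 960.
ratios = {"A": 0.25, "B": 0.50, "C": 1.00, "D": 0.75}
scale = 960 / ratios["C"]                      # value per unit of ratio
values = {name: r * scale for name, r in ratios.items()}
# values["A"] is 240.0, values["D"] is 720.0, and so on.
```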
In some embodiments, the image to be extracted comprises a broken line image to be extracted, the data identifier comprises a broken line point pattern, and the broken line image to be extracted comprises a first identifier numerical value corresponding to the first broken line point pattern and a second broken line point pattern corresponding to the second identifier numerical value to be identified;
the relative numerical relationship represents the comparison relationship between the second identification numerical value corresponding to the second broken line point pattern and the first identification numerical value corresponding to the first broken line point pattern.
Specifically, fig. 3 is a schematic diagram illustrating a polygonal-line image to be extracted according to an embodiment. As shown in fig. 3, the data identifications are the polyline point patterns in the polyline image to be extracted, and fig. 3 contains 6 polyline points. The first polyline point pattern is the polyline point whose X-axis coordinate value is January, and its corresponding first identification value is 300. The second polyline point patterns are the polyline points whose X-axis coordinate values are February, March, April, May, and June, and the second identification values corresponding to these 5 polyline point patterns are unknown. As can be seen from fig. 3, the heights of the 6 polyline points on the Y axis differ, and the relative numerical relationship can represent the comparison relationship between these 6 heights.
In some embodiments, as shown in FIG. 4, a flow diagram of pre-training of a first fitting model of an embodiment is shown. Said pre-training of said first fitted model comprises the steps of:
step 402, acquiring a first image training set; each image in the first image training set comprises a first identification numerical value corresponding to the first data identification and a second data identification corresponding to the second identification numerical value to be identified; wherein the data representation characterizes a representation value in the image.
Specifically, the first image training set may include public data images collected from a network or a large number of data images acquired from other channels, where the source of the data images is not limited, and only the data volume requirement of the first image training set needs to be met. Meanwhile, the images in the first image training set need to include a first identifier value corresponding to the first data identifier and a second data identifier to be recognized and corresponding to the second identifier value, that is, the images include the first data identifier corresponding to the known first identifier value and the second data identifier corresponding to the unknown second identifier value.
Step 404, labeling the first image training set to obtain the number of the data identifications corresponding to each image in the first image training set.
The first image training set is labeled by manual annotation; only the number of data identifications included in each image needs to be labeled. In a specific example, if the images in the first image training set are polyline images as shown in fig. 3, the labeled number of data identifications is the number of polyline points included in the polyline image; since fig. 3 contains 6 polyline points, the number of data identifications is also 6.
Step 406, pre-training the first fitting model through the labeled first image training set until an iterative training termination condition is met, obtaining the pre-trained first fitting model.
Specifically, the labeled first image training set is divided into a training set and a test set; the first fitting model is iteratively trained on the training set until it reaches the training termination condition, and is then tested on the test set. The pre-trained first fitting model can accurately identify and output the number of data identifications in the image to be extracted. It is to be understood that the iterative training termination condition may include the accuracy of the first fitting model being not less than a preset accuracy threshold, for example not less than 98% or 99%; it may further include the number of training iterations being not less than a preset threshold, for example not less than 100,000 or 500,000. The termination condition may also include other conditions besides these, which this embodiment does not specifically limit.
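The two termination conditions named above can be combined into a single stopping test; the 98% accuracy threshold and 100,000-iteration cap below are the example figures from the text, not mandated values:

```python
def should_stop(accuracy, iterations, acc_threshold=0.98, max_iterations=100_000):
    """Iterative-training termination test: stop once the model
    accuracy reaches the preset threshold OR the iteration count
    reaches the preset cap, whichever comes first."""
    return accuracy >= acc_threshold or iterations >= max_iterations
```

A training loop would call this once per iteration and break as soon as it returns True.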
In some embodiments, as shown in FIG. 5, a flow diagram of the pre-training of the second fitting model of an embodiment is shown. The pre-training of the second fitting model, comprising the steps of:
step 502, acquiring a second image training set; each image in the second image training set comprises a first identification numerical value corresponding to the first data identification and a second data identification corresponding to the second identification numerical value to be identified; wherein the data representation characterizes a representation value in the image.
Specifically, the second image training set may include public data images collected from a network or a large number of data images acquired from other channels, where the source of the data images is not limited, and only the data volume requirement of the second image training set needs to be met. Meanwhile, the images in the second image training set need to include a first identification value corresponding to the first data identification and a second data identification corresponding to the second identification value to be recognized, that is, the images include the first data identification corresponding to the known first identification value and the second data identification corresponding to the unknown second identification value.
Step 504, inputting the second image training set into the first fitting model, and obtaining the number of data identifications corresponding to each image in the second image training set.
The second image training set is input into the pre-trained first fitting model, which outputs the number of data identifications for each image in the second image training set. For example, when the cylindrical image to be extracted shown in FIG. 2 is input into the first fitting model, the number of data identifications output is 4.
Step 506, segmenting each image in the second image training set according to the number of the data identifications to obtain a plurality of sub-images corresponding to each image, wherein each sub-image comprises one data identification.
The cylindrical image to be extracted in FIG. 2 is segmented according to the number of data identifications into 4 sub-images, and each solid-line box region in FIG. 2 represents one sub-image obtained by segmentation. Each sub-image contains one cylindrical pattern, and the segmentation process can be completed by pre-written software or manually.
When the image is segmented, the region where each sub-image is located needs to be identified and determined. If the image to be extracted is a cylindrical image, the cylindrical patterns in the image can be identified by recognizing pixel color values. Generally, each cylindrical pattern contains one color, different cylindrical patterns have different colors, and each cylindrical pattern differs in color from the image background. By identifying pixel color values to determine the boundaries of the cylindrical patterns, the image can be accurately segmented into sub-images, each of which has the same height, equal to the height of the image to be extracted. Through sub-image segmentation, the data identifications in the image to be extracted can be accurately extracted, and interference factors in the image to be extracted are eliminated before the identification values are subsequently calculated, thereby reducing calculation errors and improving the calculation accuracy of the identification values.
It should be noted that, to further eliminate interference factors in the image to be extracted, a frame removal operation may be performed first during segmentation. Generally, the color of the frame of the image to be extracted differs from the colors inside the image, so software can determine the frame region by identifying pixel color values: scanning from the outside inward, the region of contiguous pixels before the first change in color value is identified as the frame and removed. By identifying pixel color values, interference data other than the data identifications can also be removed from some images. Performing sub-image segmentation based on image pixel color values keeps the whole calculation process convenient, fast, and effective, and the image can be analyzed and processed without introducing other complex calculation methods. Since most existing broken line images and cylindrical images use different colors to distinguish their elements, the data extraction method in this embodiment can effectively extract the identification values in broken line images and cylindrical images, and has a wide application range.
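A minimal sketch of the color-value-based sub-image segmentation described above might look as follows; it assumes a single uniform background color, vertical bar regions, and a row-major list-of-lists image, and `split_into_subimages` is a hypothetical helper name rather than part of the patented implementation:

```python
def split_into_subimages(image, background, n_bars):
    """Split a bar-chart image into one sub-image per bar.

    `image` is a list of rows (row-major), each entry a color value;
    `background` is the background color; `n_bars` is the number of
    data identifications output by the first fitting model.
    """
    height = len(image)
    width = len(image[0])
    # A column belongs to a bar if any pixel in it differs from the background.
    bar_columns = [any(image[y][x] != background for y in range(height))
                   for x in range(width)]
    # Collect contiguous runs of bar columns as (start, end) column spans.
    spans, start = [], None
    for x, is_bar in enumerate(bar_columns + [False]):
        if is_bar and start is None:
            start = x
        elif not is_bar and start is not None:
            spans.append((start, x))
            start = None
    assert len(spans) == n_bars, "segmentation disagrees with the model output"
    # Each sub-image keeps the full image height, as described above.
    return [[row[s:e] for row in image] for (s, e) in spans]
```

Each returned sub-image spans the full height of the original image and the columns of exactly one bar, matching the solid-line boxes of FIG. 2.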
Step 508, determining a relative numerical relationship between identification values corresponding to the data identifications of different sub-images. Further, after sub-image segmentation is completed, the relative numerical relationship between identification values in different sub-images needs to be determined, such as the relative height relationship between different cylindrical patterns in a cylindrical image, or the relative height relationship between different broken line points in a broken line image.
Step 510, pre-training the second fitting model based on the second image training set, the number of data identifications, and the relative numerical relationship until an iterative training termination condition is met, obtaining the pre-trained second fitting model.
The second fitting model is pre-trained based on the obtained second image training set, the number of data identifications, and the relative numerical relationship, where each image in the second image training set corresponds to one number of data identifications and one group of relative numerical relationships. The second image training set is divided into a training set and a testing set; the second fitting model is iteratively trained on the training set until the preset iterative training termination condition is reached, and is then tested on the testing set. The trained second fitting model can accurately extract the relative numerical relationship corresponding to each image. The iterative training termination condition of the second fitting model may refer to that of the first fitting model, and is not elaborated here.
In some embodiments, the determining a relative numerical relationship between identification values corresponding to data identifications of different subgraphs includes:
calculating to obtain an identification numerical value corresponding to the data identification in each sub-image based on image pixels;
and normalizing the identification numerical values corresponding to all the sub-images in each image to obtain the relative numerical value relationship.
Specifically, after each sub-image is obtained, the identification value may be determined from the pixels contained in the sub-image. By establishing a pixel coordinate system in the sub-image, the coordinate value of the data identification contained in the sub-image can be accurately determined, and the corresponding identification value is determined from that coordinate value. Since each identification value obtained in this way is only meaningful relative to the sub-image containing it, the multiple identification values need to be normalized and associated with one another, yielding the contrast relationship between them, i.e., the relative value relationship.
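The normalization step described above can be sketched as division by the maximum value; this is one plausible reading of the embodiment (it reproduces the worked example later in this document), not the only possible normalization:

```python
def relative_relationship(identification_values):
    """Normalize the per-subimage identification values by their maximum,
    yielding the contrast (relative value) relationship between them."""
    peak = max(identification_values)
    return [value / peak for value in identification_values]
```

After normalization, the largest data identification maps to 1.0 and every other value becomes a ratio against it.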
In some embodiments, the images in the second training set of images include broken line images, the data identifier includes broken line points in the broken line images, and the calculating, based on image pixels, to obtain an identifier value corresponding to the data identifier in each of the subgraphs includes:
establishing a pixel coordinate system of the broken line image;
and determining the coordinate value of the broken line point in the pixel coordinate system in the change direction of the characterization identification value to be used as the identification value corresponding to the broken line point.
Specifically, the pixel coordinate system of the broken line image may be constructed in various ways, and the positions of the origin, X axis, and Y axis may differ between the pixel coordinate systems of different broken line images. As shown in FIG. 3, the origin of the pixel coordinate system of the broken line image created in this embodiment is located at the lower left corner of the image; the horizontal direction of the image is taken as the X axis, the vertical direction as the Y axis, and the Y axis is the direction in which the identification value changes. The position of a broken line point in the pixel coordinate system is determined by identifying pixel color values; generally, the color of a broken line point differs from the background color of the image, so broken line points can be clearly distinguished by pixel color values. Each solid-line box in FIG. 3 represents a sub-image, and each sub-image contains only one broken line point. The sub-image is scanned line by line along the Y axis of the pixel coordinate system; when a change in pixel color value is detected, that pixel is determined to be a pixel occupied by the broken line point, its coordinate values are recorded, and the Y-axis coordinate among them is the identification value of the broken line point.
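Under the assumptions of this paragraph (bottom-left origin, one broken line point per sub-image, a background color distinct from the point's color), the line-by-line Y-axis scan might be sketched as follows; `broken_line_point_value` is a hypothetical helper name:

```python
def broken_line_point_value(subimage, background):
    """Scan a sub-image row by row and return the Y coordinate
    (bottom-left origin) of the broken line point it contains."""
    height = len(subimage)
    for row_index, row in enumerate(subimage):  # row 0 is the top row
        if any(pixel != background for pixel in row):
            # Convert the top-down row index to a bottom-left-origin Y value.
            return height - 1 - row_index
    return None  # no broken line point found in this sub-image
```

The returned Y coordinate is the identification value of the broken line point, still expressed in pixel units of its own sub-image.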
In some embodiments, the images in the second training set of images include cylindrical images, the data identifier includes cylindrical patterns in the cylindrical images, and the obtaining, based on image pixel calculation, an identifier value corresponding to the data identifier in each of the subgraphs includes:
establishing a pixel coordinate system of the cylindrical image;
determining a pixel area occupied by the cylindrical pattern;
and taking the maximum coordinate value of the pixel region in the pixel coordinate system, in the direction characterizing the change of the identification value, as the identification value corresponding to the cylindrical pattern.
Specifically, the pixel coordinate system of the cylindrical image may be constructed in various ways, and the positions of the origin, the X axis, and the Y axis in the pixel coordinate systems of different cylindrical images may be different. As shown in fig. 2, the origin of the pixel coordinate system of the cylindrical image created in this embodiment is located at the lower left corner of the image, the horizontal direction of the image is taken as the X-axis of the pixel coordinate system, the vertical direction of the image is taken as the Y-axis of the pixel coordinate system, and the Y-axis direction is the identification value change direction. The position of the cylindrical pattern in the pixel coordinate system is determined by identifying the color value of the pixel, and generally, the color of the cylindrical pattern in the cylindrical image is different from the background color of the image, so that the cylindrical pattern can be clearly distinguished by the color value of the pixel. In each sub-image, pixels with color values different from background color values are identified by scanning the pixels row by row and column by column along an X axis and a Y axis, so that a pixel area occupied by the cylindrical pattern is determined, and the maximum Y axis coordinate value of the pixel area in a pixel coordinate system is used as an identification value of the cylindrical pattern, wherein the identification value represents the height of the cylindrical pattern.
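Under the same assumptions (bottom-left origin, a cylindrical pattern growing upward from the bottom of the sub-image, a distinct background color), taking the maximum Y extent of the pixel region as the bar height might be sketched as follows; `cylindrical_pattern_height` is a hypothetical helper name:

```python
def cylindrical_pattern_height(subimage, background):
    """Return the height of the cylindrical pattern, i.e. the maximum
    Y extent (bottom-left origin) of its non-background pixel region."""
    height = len(subimage)
    for row_index, row in enumerate(subimage):  # row 0 is the top row
        if any(pixel != background for pixel in row):
            # Rows row_index..height-1 belong to the bar, so its height
            # in pixels is the count of those rows.
            return height - row_index
    return 0  # no cylindrical pattern found in this sub-image
```

This single top-down pass suffices because a bar anchored at the bottom of the sub-image has its maximum Y coordinate at the first non-background row encountered.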
In a specific example, as shown in FIG. 2, the cylindrical image includes 4 cylindrical patterns; by detecting pixel color values, the height of the first cylindrical pattern is determined to be 0.7, that of the second 0.51, that of the third 0.95, and that of the fourth 0.8. After the height values are normalized, the resulting relative value relationship is [0.7368, 0.5368, 1.0, 0.8421]. Here, the first data identification (the one with the known value) is the third cylindrical pattern, whose first identification value is 960; the identification values corresponding to the cylindrical patterns are then determined to be [707.328, 515.328, 960, 808.416], respectively.
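The final computation in this example, scaling the relative value relationship by the one known identification value, can be sketched as follows; `recover_identification_values` is a hypothetical helper name:

```python
def recover_identification_values(relative, known_index, known_value):
    """Given the relative (proportional) relationship vector and one
    known identification value, recover all identification values."""
    scale = known_value / relative[known_index]
    return [ratio * scale for ratio in relative]
```

With the figures above, the known third pattern (index 2, ratio 1.0, value 960) fixes the scale, and every other value follows from its ratio.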
In some embodiments, the relative numerical relationship comprises a proportional relationship characterizing a ratio relationship between the second identification value corresponding to the second data identification and the first identification value corresponding to the first data identification;
and, in the direction characterizing the change of the identification value, identical image intervals characterize identical changes in value.
Specifically, in this embodiment, the relative numerical relationship is a proportional relationship, that is, the numerical ratio between different identification values. The proportional relationship may be represented as a vector; that is, the output of the second fitting model may be the proportional relationship between different identification values expressed in vector form. For example, the relative value relationship in the preceding paragraph is [0.7368, 0.5368, 1.0, 0.8421].
As shown in FIG. 2, the Y-axis direction is the direction in which the identification value changes, and the cylindrical patterns all lie within the same kind of height interval (one of the height intervals between A1 and A2 in the figure); that is, the range of identification value change within each cylindrical pattern's height interval is the same, namely 200 to 400. As shown in FIG. 3, the Y-axis direction is the direction in which the identification value changes, and the sub-images containing the broken line points all lie within the same kind of height interval (one of the height intervals between B1 and B2 in the figure); that is, the range of identification value change within each sub-image's height interval is the same, namely 600 to 800.
In some embodiments, the first fitted model and/or the second fitted model is a machine learning model.
The first fitting model and the second fitting model in this embodiment may be machine learning models, such as SVMs (Support Vector Machines), or deep learning models, such as CNNs (Convolutional Neural Networks). This embodiment does not limit the specific model types of the first fitting model and the second fitting model, and the training of both models follows conventional machine learning training methods, which are not described again here.
It should be noted that the method of the embodiment of the present application may be executed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In such a distributed scenario, one of the multiple devices may only perform one or more steps of the method of the embodiment, and the multiple devices interact with each other to complete the method.
It should be noted that the above describes some embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Corresponding to any embodiment method, the present application further provides a data extraction device, and fig. 6 shows a schematic structural diagram of the data extraction device according to an embodiment of the present application, and as shown in fig. 6, the data extraction device includes:
an obtaining module 602, configured to obtain an image to be extracted, where the image to be extracted includes key data and a true value corresponding to at least one of the key data;
a first fitting module 604 configured to input the image to be extracted into a first fitting model, and output the number of key data items of the image to be extracted via the first fitting model;
a second fitting module 606 configured to input the image to be extracted and the number of key data items into a second fitting model, and output a scale vector of the image to be extracted via the second fitting model;
an extracting module 608 configured to obtain, according to a real value corresponding to at least one of the key data and the scale vector, a real value corresponding to each of the key data in the image to be extracted through calculation,
wherein the first fitted model and the second fitted model are both pre-trained.
In some embodiments, the image to be extracted includes a cylindrical image to be extracted, the data identifier includes a cylindrical pattern, and the cylindrical image to be extracted includes a first identifier value corresponding to the first cylindrical pattern and a second cylindrical pattern to be recognized and corresponding to a second identifier value;
the relative numerical relationship represents a comparison relationship between the second identification numerical value corresponding to the second cylindrical pattern and the first identification numerical value corresponding to the first cylindrical pattern.
In some embodiments, the image to be extracted includes a broken line image to be extracted, the data identifier includes a broken line dot pattern, and the broken line image to be extracted includes a first identifier value corresponding to the first broken line dot pattern and a second broken line dot pattern to be identified, which corresponds to the second identifier value;
the relative numerical value relationship represents the contrast relationship between the second identification numerical value corresponding to the second broken line point pattern and the first identification numerical value corresponding to the first broken line point pattern.
In some embodiments, the pre-training of the first fitted model comprises:
acquiring a first image training set; each image in the first image training set comprises a first identification numerical value corresponding to the first data identification and a second data identification corresponding to the second identification numerical value to be identified; wherein the data identification characterizes an identification value in the image;
labeling the first image training set to obtain the number of the data identifications corresponding to each image in the first image training set;
and pre-training the first fitting model through the labeled first image training set until an iterative training termination condition is met, and obtaining the pre-trained first fitting model.
In some embodiments, the pre-training of the second fitting model comprises:
acquiring a second image training set; each image in the second image training set comprises a first identification numerical value corresponding to the first data identification and a second data identification corresponding to the second identification numerical value to be identified; wherein the data identification characterizes an identification value in the image;
inputting the second image training set into the first fitting model to obtain the number of data identifications corresponding to each image in the second image training set;
segmenting each image in the second image training set according to the number of the data identifications to obtain a plurality of sub-images corresponding to each image, wherein each sub-image comprises one data identification;
determining a relative numerical relationship between identification numerical values corresponding to the data identifications of different subgraphs;
and pre-training the second fitting model based on the second image training set, the number of the data identifications and the relative numerical relationship until an iterative training termination condition is met, and obtaining the pre-trained second fitting model.
In some embodiments, the determining a relative numerical relationship between identification values corresponding to data identifications of different subgraphs includes:
calculating to obtain an identification numerical value corresponding to the data identification in each sub-image based on image pixels;
and normalizing the identification numerical values corresponding to all the sub-images in each image to obtain the relative numerical value relationship.
In some embodiments, the images in the second training set of images include broken line images, the data identifier includes broken line points in the broken line images, and the calculating, based on image pixels, to obtain an identifier value corresponding to the data identifier in each of the subgraphs includes:
establishing a pixel coordinate system of the broken line image;
and determining the coordinate value of the broken line point in the change direction of the characterization identification value in the pixel coordinate system to serve as the identification value corresponding to the broken line point.
In some embodiments, the images in the second training set of images include cylindrical images, the data identifier includes cylindrical patterns in the cylindrical images, and the obtaining, based on image pixel calculation, an identifier value corresponding to the data identifier in each of the subgraphs includes:
establishing a pixel coordinate system of the cylindrical image;
determining a pixel area occupied by the cylindrical pattern;
and taking the maximum coordinate value of the pixel region in the pixel coordinate system, in the direction characterizing the change of the identification value, as the identification value corresponding to the cylindrical pattern.
In some embodiments, the relative numerical relationship comprises a proportional relationship characterizing a ratio relationship between the second identification value corresponding to the second data identification and the first identification value corresponding to the first data identification;
and, in the direction characterizing the change of the identification value, identical image intervals characterize identical changes in value.
In some embodiments, the first fitted model and/or the second fitted model is a machine learning model.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, the functionality of the various modules may be implemented in the same one or more software and/or hardware implementations as the present application.
The apparatus of the foregoing embodiment is used to implement the corresponding data extraction method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Corresponding to any embodiment of the method, the application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the data extraction method according to any embodiment of the method is implemented.
Fig. 7 is a schematic diagram illustrating a more specific hardware structure of an electronic device according to this embodiment, where the device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein the processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 are communicatively coupled to each other within the device via bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1020 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 1020 and called to be executed by the processor 1010.
The input/output interface 1030 is used for connecting an input/output module to input and output information. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 1040 is used for connecting a communication module (not shown in the drawings) to implement communication interaction between the present apparatus and other apparatuses. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 1050 includes a path that transfers information between various components of the device, such as processor 1010, memory 1020, input/output interface 1030, and communication interface 1040.
It should be noted that although the above-mentioned device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
The electronic device of the foregoing embodiment is used to implement the corresponding data extraction method in any one of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Corresponding to any embodiment method, the present application also provides a computer-readable storage medium storing computer instructions for causing the computer to execute the data extraction method according to any embodiment.
Computer-readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
The computer instructions stored in the storage medium of the foregoing embodiment are used to enable the computer to execute the data extraction method according to any one of the foregoing embodiments, and have the beneficial effects of the corresponding method embodiment, which are not described herein again.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples; within the context of the present application, features from the above embodiments or from different embodiments may also be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present application as described above, which are not provided in detail for the sake of brevity.
In addition, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown in the provided figures for simplicity of illustration and discussion, and so as not to obscure the embodiments of the application. Further, devices may be shown in block diagram form in order to avoid obscuring embodiments of the application, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the embodiments of the application are to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that the embodiments of the application can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
The present embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present application are intended to be included within the scope of the present application.

Claims (13)

1. A method of data extraction, comprising:
inputting an image to be extracted into a first fitting model, and outputting the number of data identifications of the image to be extracted through the first fitting model; the image to be extracted comprises a first identification value corresponding to the first data identification and a second data identification corresponding to the second identification value to be identified; the data identification represents an identification numerical value in the image to be extracted;
inputting the image to be extracted and the number of data identifications into a second fitting model, and outputting the relative numerical relationship of the image to be extracted through the second fitting model; the relative numerical value relationship represents a comparison relationship between the second identification numerical value corresponding to the second data identification and the first identification numerical value corresponding to the first data identification;
determining a second identification value corresponding to the second data identification in the image to be extracted according to a first identification value corresponding to the first data identification and the relative value relationship,
wherein the first fitted model and the second fitted model are both pre-trained.
2. The method according to claim 1, wherein the image to be extracted comprises a cylindrical image to be extracted, the data identifier comprises a cylindrical pattern, and the cylindrical image to be extracted comprises a first identifier value corresponding to the first cylindrical pattern and a second cylindrical pattern to be identified corresponding to a second identifier value;
the relative numerical relationship represents a comparison relationship between the second identification numerical value corresponding to the second cylindrical pattern and the first identification numerical value corresponding to the first cylindrical pattern.
3. The method of claim 1, wherein the image to be extracted comprises a broken line image to be extracted, the data identifications comprise broken line point patterns, and the broken line image to be extracted comprises a first broken line point pattern corresponding to a first identification value and a second broken line point pattern corresponding to a second identification value to be identified; and
the relative numerical relationship represents a comparison relationship between the second identification value corresponding to the second broken line point pattern and the first identification value corresponding to the first broken line point pattern.
4. The method of claim 1, wherein the pre-training of the first fitting model comprises:
acquiring a first image training set; wherein each image in the first image training set comprises a first data identification corresponding to a first identification value and a second data identification corresponding to a second identification value to be identified; and the data identifications represent identification values in the image;
labeling the first image training set to obtain the number of data identifications corresponding to each image in the first image training set; and
pre-training the first fitting model on the labeled first image training set until an iterative training termination condition is met, to obtain the pre-trained first fitting model.
5. The method of claim 1, wherein the pre-training of the second fitting model comprises:
acquiring a second image training set; wherein each image in the second image training set comprises a first data identification corresponding to a first identification value and a second data identification corresponding to a second identification value to be identified; and the data identifications represent identification values in the image;
inputting the second image training set into the first fitting model to obtain the number of data identifications corresponding to each image in the second image training set;
segmenting each image in the second image training set according to the number of data identifications to obtain a plurality of sub-images corresponding to each image, wherein each sub-image comprises one data identification;
determining the relative numerical relationship between the identification values corresponding to the data identifications of different sub-images; and
pre-training the second fitting model based on the second image training set, the number of data identifications, and the relative numerical relationship until an iterative training termination condition is met, to obtain the pre-trained second fitting model.
6. The method of claim 5, wherein determining the relative numerical relationship between the identification values corresponding to the data identifications of different sub-images comprises:
calculating, based on image pixels, the identification value corresponding to the data identification in each sub-image; and
normalizing the identification values corresponding to all the sub-images in each image to obtain the relative numerical relationship.
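The normalizing step of claim 6 can be sketched as follows. This is an illustrative assumption: the claim does not specify the normalization reference, and here the first sub-image's value is taken as the base, consistent with the ratio relationship of claim 9; the function name is invented.

```python
def normalize_to_first(values):
    """Normalize the pixel-derived identification values of all
    sub-images in one image against the first sub-image's value,
    producing the relative numerical relationship used as a training
    target for the second fitting model."""
    base = values[0]
    if base == 0:
        raise ValueError("first identification value must be non-zero")
    return [v / base for v in values]

# Pixel-unit values 120, 60, 180 normalize to ratios relative to 120.
print(normalize_to_first([120, 60, 180]))  # [1.0, 0.5, 1.5]
```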
7. The method of claim 6, wherein the images in the second image training set comprise broken line images, the data identifications comprise broken line points in the broken line images, and calculating, based on image pixels, the identification value corresponding to the data identification in each sub-image comprises:
establishing a pixel coordinate system for the broken line image; and
determining, as the identification value corresponding to the broken line point, the coordinate value of the broken line point in the pixel coordinate system along the direction characterizing the change of the identification values.
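For a broken line image, the coordinate reading of claim 7 might look like the sketch below. It assumes the common image convention (origin at the top-left, y increasing downward) and that identification values increase upward, so the pixel-unit value is the distance from the image bottom; the function name and coordinate convention are assumptions, not taken from the patent.

```python
def broken_line_point_value(point, img_height):
    """point: (x, y) pixel coordinate of one broken line point, with y
    measured downward from the top-left image origin. The identification
    value in pixel units is the coordinate along the value-change axis,
    i.e. the distance from the bottom of the image."""
    x, y = point
    return img_height - y

# A point at pixel row 80 in a 200-pixel-tall image.
print(broken_line_point_value((30, 80), 200))  # 120
```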
8. The method of claim 6, wherein the images in the second image training set comprise cylindrical images, the data identifications comprise cylindrical patterns in the cylindrical images, and calculating, based on image pixels, the identification value corresponding to the data identification in each sub-image comprises:
establishing a pixel coordinate system for the cylindrical image;
determining the pixel region occupied by the cylindrical pattern; and
taking, as the identification value corresponding to the cylindrical pattern, the maximum coordinate value of the pixel region in the pixel coordinate system along the direction characterizing the change of the identification values.
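The maximum-coordinate step of claim 8 can be sketched on a binary sub-image mask. This is a minimal illustration, assuming values increase upward and the axis baseline sits at the bottom row of the sub-image; the mask representation and function name are invented for the example.

```python
def cylindrical_pattern_value(mask):
    """mask: 2D list of 0/1 pixels for one cylindrical pattern's
    sub-image, row 0 at the top. The identification value in pixel
    units is the maximum extent of the pattern region along the value
    axis, measured upward from the bottom row (the axis baseline)."""
    height = len(mask)
    pattern_rows = [r for r, row in enumerate(mask) if any(row)]
    if not pattern_rows:
        return 0
    return height - min(pattern_rows)  # baseline to pattern top

mask = [
    [0, 0, 0],
    [0, 0, 0],
    [0, 1, 0],  # pattern top at row 2
    [0, 1, 0],
    [0, 1, 0],  # baseline at row 4 (image height 5)
]
print(cylindrical_pattern_value(mask))  # 3
```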
9. The method according to any one of claims 1-8, wherein the relative numerical relationship comprises a proportional relationship characterizing the ratio between the second identification value corresponding to the second data identification and the first identification value corresponding to the first data identification; and
wherein, along the direction characterizing the change of the identification values, equal image intervals represent equal numerical changes.
10. The method according to any one of claims 1-8, wherein the first fitting model and/or the second fitting model is a machine learning model.
11. A data extraction apparatus, comprising:
a first fitting module configured to input an image to be extracted into a first fitting model and output, through the first fitting model, the number of data identifications in the image to be extracted; wherein the image to be extracted comprises a first data identification corresponding to a first identification value and a second data identification corresponding to a second identification value to be identified; and the data identifications represent identification values in the image to be extracted;
a second fitting module configured to input the image to be extracted and the number of data identifications into a second fitting model and output, through the second fitting model, a relative numerical relationship for the image to be extracted; wherein the relative numerical relationship represents a comparison relationship between the second identification value corresponding to the second data identification and the first identification value corresponding to the first data identification; and
an extraction module configured to determine the second identification value corresponding to the second data identification in the image to be extracted according to the first identification value corresponding to the first data identification and the relative numerical relationship,
wherein the first fitting model and the second fitting model are both pre-trained.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 10 when executing the program.
13. A computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 10.
CN202210589180.6A 2022-05-26 2022-05-26 Data extraction method and device, electronic equipment and storage medium Pending CN114998912A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210589180.6A CN114998912A (en) 2022-05-26 2022-05-26 Data extraction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210589180.6A CN114998912A (en) 2022-05-26 2022-05-26 Data extraction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114998912A true CN114998912A (en) 2022-09-02

Family

ID=83030008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210589180.6A Pending CN114998912A (en) 2022-05-26 2022-05-26 Data extraction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114998912A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115331013A (en) * 2022-10-17 2022-11-11 杭州恒生聚源信息技术有限公司 Data extraction method and processing equipment for line graph

Similar Documents

Publication Publication Date Title
CN111028213B (en) Image defect detection method, device, electronic equipment and storage medium
CN111340752A (en) Screen detection method and device, electronic equipment and computer readable storage medium
CN109886928B (en) Target cell marking method, device, storage medium and terminal equipment
CN112634227B (en) Detection identification method and device for PCB jointed boards, electronic equipment and storage medium
CN110569774B (en) Automatic line graph image digitalization method based on image processing and pattern recognition
CN112862706A (en) Pavement crack image preprocessing method and device, electronic equipment and storage medium
CN114998912A (en) Data extraction method and device, electronic equipment and storage medium
CN114374760A (en) Image testing method and device, computer equipment and computer readable storage medium
JP6405603B2 (en) Information processing apparatus, information processing system, and program
CN113508395B (en) Method and device for detecting objects in an image composed of pixels
CN114981838A (en) Object detection device, object detection method, and object detection program
CN115546219B (en) Detection plate type generation method, plate card defect detection method, device and product
CN112215827A (en) Electromigration region detection method and device, computer equipment and storage medium
CN114937037B (en) Product defect detection method, device and equipment and readable storage medium
CN116993654B (en) Camera module defect detection method, device, equipment, storage medium and product
CN111079752A (en) Method and device for identifying circuit breaker in infrared image and readable storage medium
CN115660952A (en) Image processing method, dictionary pen and storage medium
CN113689378B (en) Determination method and device for accurate positioning of test strip, storage medium and terminal
JP2016206909A (en) Information processor, and information processing method
CN113191351B (en) Reading identification method and device of digital electric meter and model training method and device
CN113760686B (en) User interface testing method, device, terminal and storage medium
CN115115596A (en) Electronic component detection method and device and automatic quality inspection equipment
CN114359382A (en) Cooperative target ball detection method based on deep learning and related device
CN111340677A (en) Video watermark detection method and device, electronic equipment and computer readable medium
WO2020107196A1 (en) Photographing quality evaluation method and apparatus for photographing apparatus, and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination