CN113841156A - Control method and device based on image recognition

Control method and device based on image recognition

Info

Publication number
CN113841156A
Authority
CN
China
Prior art keywords
image
template
information
feature
dividing
Prior art date
Legal status
Pending
Application number
CN201980096186.6A
Other languages
Chinese (zh)
Inventor
唐立三
胡飞凰
周冠兴
Current Assignee
Siemens AG
Original Assignee
Siemens AG
Priority date
Filing date
Publication date
Application filed by Siemens AG filed Critical Siemens AG
Publication of CN113841156A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition

Abstract

The invention discloses a control method and device based on image recognition. The method comprises the following steps: performing feature extraction on an image to obtain at least one feature image associated with the image; dividing the image based on the at least one feature image to obtain at least one image sub-block and position information of the at least one image sub-block; and performing index extraction on the at least one image sub-block to obtain at least one piece of index information associated with the at least one image sub-block, and obtaining description information associated with the at least one image sub-block based on the at least one piece of index information, so as to obtain a recognition result of the image. By adopting the technical solution of the invention, image recognition efficiency can be improved, and a material receiving and inspection system based on image recognition can be better integrated with other production systems.

Description

Control method and device based on image recognition
Technical Field
The present invention relates to the field of data processing, and in particular, to a control method and apparatus based on image recognition.
Background
A Bill of Materials (BOM) describes the composition of an enterprise's product; it can indicate the structural relationship among the product's assemblies, subassemblies, components, and parts, down to the raw materials, as well as the quantities required.
The manufacturing process of a product is carried out based on the bill of materials. Material information is attached when material is delivered from a material supplier to a product manufacturing plant. The material information is typically presented in the form of delivery orders and packaging labels, which are often printed by the material supplier in a specified format.
Disclosure of Invention
To address the problem that the material plan and quality strategy of traditional methods are difficult to update in time, the invention provides a control method and a control apparatus based on image recognition.
One aspect of the invention provides a control method based on image recognition, comprising the following steps: performing feature extraction on an image to obtain at least one feature image associated with the image; dividing the image based on the at least one feature image to obtain at least one image sub-block and position information of the at least one image sub-block; and performing index extraction on the at least one image sub-block to obtain at least one piece of index information associated with the at least one image sub-block, and obtaining description information associated with the at least one image sub-block based on the at least one piece of index information, so as to obtain a recognition result of the image. This image recognition method enables semantic recognition after the image is divided, and thereby determines the information contained in the image.
In one embodiment, if a template mark is included in the at least one feature image, the image is divided based at least on a division template associated with the template mark; if no template mark is included in the at least one feature image, the image is divided based on the type of the at least one feature image. With this embodiment, the image can be divided flexibly: when a division template exists, the image can be divided and recognized more quickly; when no division template exists, the image can still be divided according to the type of the feature image, thereby extending the application range of the invention.
In one embodiment, if the matching degree between the division template and the image is greater than or equal to a first threshold, the image is divided based on the division template; if the matching degree between the division template and the image is less than the first threshold, a first part of the image is divided based on the division template and a second part of the image is divided based on the feature image types in the second part, wherein the first part is the part of the image matching the division template, and the remaining part of the image includes the second part. This embodiment illustrates in more detail how the image is divided using the division template, and combines the use of the division template with the feature images.
In one embodiment, the recognition result is represented as information text in a preset form based on the description information. With this embodiment, the recognized content can be presented in a specified form while reducing the complexity of database maintenance, facilitating integration with other systems.
In one embodiment, verification information from a database is obtained, whether an abnormality exists in the recognition result is verified based on the verification information, and if an abnormality exists in the recognition result, a first update operation instruction for the database is generated based on the type and content of the abnormality, so as to update the part of the database associated with the recognition result. Updating the database in this way improves the efficiency and accuracy of image recognition.
In one embodiment, if no abnormality exists in the recognition result of the image, quality inspection information of the object to be inspected associated with the image is obtained; and if an abnormality exists in the quality inspection information, a second update operation instruction for the database is generated, so as to update the part of the database associated with the quality inspection information. This embodiment determines whether to maintain the database based on the quality of the inspected object (e.g., the received material).
In one embodiment, before feature extraction is performed on the image, the image is preprocessed so that the difference between pixels of a designated area and other pixel areas in the image is greater than or equal to a second threshold. With this embodiment, the performance of text recognition can be improved.
Another aspect of the invention provides an apparatus based on image recognition, comprising: an extraction module configured to perform feature extraction on an image to obtain at least one feature image associated with the image; a dividing module configured to divide the image based on the at least one feature image to obtain at least one image sub-block and position information of the at least one image sub-block; and a recognition module configured to perform index extraction on the at least one image sub-block to obtain at least one piece of index information associated with the at least one image sub-block, and to obtain description information associated with the at least one image sub-block based on the at least one piece of index information, so as to obtain a recognition result of the image.
In another aspect, the present invention also provides a computer storage medium having stored thereon computer-executable instructions that, when executed, perform the aforementioned method.
In another aspect, the present invention further provides a computer device, which includes a memory and a processor, wherein the memory stores computer-executable instructions, and when the executable instructions are executed, the processor executes the method described above.
Yet another aspect of the invention proposes a computer program product, tangibly stored on a computer-readable medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform the aforementioned method.
By adopting the technical solution of the invention, the material receiving and inspection system can be better integrated with other production systems, whole-factory automation can be achieved, a large amount of manual operation and manual error is reduced, and cost is lowered.
Drawings
Embodiments are shown and described with reference to the drawings. These drawings are provided to illustrate the basic principles and thus only show the aspects necessary for understanding the basic principles. The figures are not to scale. In the drawings, like reference numerals designate similar features.
FIG. 1 is a flow chart of an image recognition method according to an embodiment of the present invention;
FIG. 2 is a diagram of an image recognition apparatus according to an embodiment of the present invention;
FIG. 3 is a flow chart of material inspection according to an embodiment of the present invention;
FIG. 4 is a network architecture diagram of a process control system according to an embodiment of the invention.
Detailed Description
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings which form a part hereof. The accompanying drawings illustrate, by way of example, specific embodiments in which the invention may be practiced. The illustrated embodiments are not intended to be exhaustive of all embodiments according to the invention. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
First, terms related to the invention are explained. Semantics refers to the meaning a field is intended to express; a plurality of different fields may express the same semantics. The semantics of an image refers to the meaning to be expressed by feature images, characters, and the like in the image.
Through extensive practice, the inventors have found that for bills of materials from different suppliers, material information is often presented in different languages, and sometimes even in handwriting. To obtain an accurate bill of materials, material receiving and inspection therefore depends heavily on manual operation in the plant, which results in low productivity and also leads to anomalies and product-quality problems: for example, operators must manually compare strings of numbers to check the material type, and when the wrong type of material is treated as a correct batch, the final product is rejected. In addition, owing to the large amount of manual operation and the lack of real-time feedback, the material plan and quality strategy under the traditional method are difficult to update in time.
To address these problems, the invention provides a data processing method and system based on image recognition, so as to automate the process of material receiving and inspection. The data processing method and system proposed by the invention are explained below with reference to the accompanying drawings.
First, please refer to FIG. 1, which is a flowchart of an image recognition method according to an embodiment of the invention.
Step S101: the acquired image is pre-processed.
Since the type, clarity, color, or other characteristics of the obtained image may differ across image sources, in this step the obtained image is preprocessed so that it becomes easier to recognize. The preprocessing in this embodiment may include color reversal, contrast adjustment, gray-area removal, bubble removal, and the like, so that the difference between pixels of a designated area (e.g., characters, feature codes) and the pixels surrounding the designated area in the image is greater than or equal to a specified threshold.
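Purely as an illustration, and not as the implementation defined by this embodiment, such preprocessing could be sketched in Python with OpenCV roughly as follows; the particular operations and the 3x3 blur kernel are assumptions:
import cv2
import numpy as np

def preprocess(image_bgr):
    # Convert to grayscale so character/feature-code regions can be separated.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Color reversal when the background is darker than the foreground.
    if np.mean(gray) < 128:
        gray = cv2.bitwise_not(gray)
    # Contrast adjustment.
    gray = cv2.equalizeHist(gray)
    # Remove small speckles ("bubbles") with a light median blur.
    gray = cv2.medianBlur(gray, 3)
    # Binarize so that designated areas differ strongly from their surroundings.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary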
Step S102: and extracting the characteristic image of the preprocessed image.
From the extracted at least one feature image, the type of feature image contained in the image can be determined, such as a feature code, a table, or a predetermined mark. It will be appreciated that one or more different types of feature images may be included in one image. For example, when the feature image includes a predetermined mark (e.g., a supplier's logo, a feature code, etc.), the supplier corresponding to the logo can be determined by recognizing the logo in the image, and the division template corresponding to that supplier can be found in the database.
In one embodiment, the above-described feature images are stored in the database in the form of key information and are continuously improved over the life cycle. For example, the key information may be stored in the database in the form of key-value pairs. The keyword K may be the category of the feature image, a format identifier of the related company, the name of the related company, and so on, and the corresponding value V may be the information set for that keyword, such as company information, a delivery slip template, or a material package label.
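A minimal sketch of such key-value records is given below; the key format and the field names inside the value set are assumptions made for illustration, not the schema used by the invention:
# Hypothetical key-value pairs for feature-image key information (K -> V).
feature_image_db = {
    "logo:supplier_a": {
        "company": "Supplier A Co., Ltd.",
        "delivery_slip_template": "template_a",
        "package_label": "label_a",
    },
    "feature_code:barcode": {
        "description": "material batch number",
        "index_rule": "characters above the barcode form the index string",
    },
}

def lookup(keyword):
    # Return the information set V stored for keyword K, or None if absent.
    return feature_image_db.get(keyword)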
The image may then be divided based on the feature image to determine at least one image sub-block and location information for the sub-block.
Step S103 is performed to determine whether the feature image points to an existing division template. Specifically, if a template mark is included in the feature image, it is further determined whether the matching degree between the division template associated with the template mark and the image is greater than or equal to a specified threshold (step S104). It will be understood that the template mark may be a supplier's logo or another designated mark. If the matching degree between the division template and the image is less than the specified threshold, a first part of the image is divided based on the division template and a second part of the image is divided based on the feature image types in the second part, wherein the first part is the part of the image matching the division template and the remaining part of the image includes the second part (step S105); if the matching degree between the division template and the image is greater than or equal to the specified threshold, the image is divided based on the division template (step S109). The matching degree here may be the similarity between the division template and the layout of the image sub-blocks in the image, or another parameter measuring how well the division template matches the image.
For example, if a division template A corresponding to supplier A can be found in the database via the feature image, the image may be divided using division template A. It can be understood that in an image provided by supplier A, at least part of the image is laid out regularly according to the existing division template A. Therefore, for this regular part of the image, the division may be performed based on division template A to obtain image sub-blocks and the corresponding position information; for the remaining part of the image, the division may be performed based at least on the types of feature images included in that part, thereby obtaining at least one further image sub-block. In other words, the remaining part of the image may further include a third part, which is divided according to a specified division rule, for example by factors such as size, position, and shape.
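The branching of steps S103 to S105, S109 and S110 might be sketched as below; match_template, split_with_template and split_by_feature_type are assumed helper functions, and the threshold value is only illustrative:
FIRST_THRESHOLD = 0.8  # assumed value for the specified threshold

def divide(image, feature_images, template):
    if template is None:
        # Step S110: no division template, divide by feature-image type only.
        return split_by_feature_type(image, feature_images)

    degree, matched_part, rest_part = match_template(image, template)
    if degree >= FIRST_THRESHOLD:
        # Step S109: the template matches well enough for the whole image.
        return split_with_template(image, template)

    # Step S105: template for the matched first part, feature-image types
    # for the remaining second part.
    blocks = split_with_template(matched_part, template)
    blocks += split_by_feature_type(rest_part, feature_images)
    return blocks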
Taking a feature code (e.g., a barcode, a two-dimensional code, etc.) as an example, the image sub-block corresponding to the feature code may be determined based on a preset rule. Specifically, when the feature code is a barcode, the image sub-block corresponds to a region formed by extending at least one edge of the barcode by a specified length in the direction perpendicular to that edge, so that it can be recognized subsequently. It will be understood that when a table or other extraction marks (e.g., empty lines, black spaces, bold characters) are included in the image, the region corresponding to the table or to the extraction marks may be marked out from the image according to a specified rule. For example, the top and bottom edges of the table may be extended upward and downward by a specified length, respectively, to determine the image sub-block corresponding to the table. The position and size relationship between the image sub-blocks and the feature images can be set according to the specific application and is not exhaustively enumerated here. In one embodiment, through the above steps, the existing division rule corresponding to supplier A may also be updated, i.e., the division template is adjusted based on the correctness of the final recognition result.
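For the barcode rule just described, the sub-block region could be computed roughly as follows; the extension length of 60 pixels and the bounding-box format are assumptions:
def barcode_sub_block(image, bbox, extend=60):
    # bbox = (left, top, width, height) of the detected barcode.
    x, y, w, h = bbox
    top = max(0, y - extend)            # extend the top edge upward by a fixed length
    sub_block = image[top:y + h, x:x + w]
    position = (x, top, w, (y + h) - top)
    return sub_block, position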
If there is no division template corresponding to the supplier in the database, a division rule for the image may be determined based on the types of the extracted feature images, and the image may be divided based on that rule (step S110).
Step S106: and identifying the divided image subblocks.
In this step, index extraction (e.g., recognizing characters, feature codes, etc. in the image) is performed on an image sub-block to obtain the index information contained in the sub-block. For example, when the feature image is a barcode, the image sub-block corresponding to that feature image may contain characters or a two-dimensional code located above the barcode, and the index information may be obtained by recognizing those characters or that two-dimensional code.
When a plurality of barcodes are included in an image, the number strings represented by these barcodes may coincide after recognition. The description information associated with the barcodes, i.e., the product category and related information, can then be found in the database using the index information, namely the number string together with the position of the corresponding barcode.
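As a rough illustration of this lookup, barcode decoding could use the pyzbar library and a simple composite key; the record layout in the database is an assumption:
from pyzbar.pyzbar import decode  # third-party barcode / QR-code decoder

def extract_index(sub_block, position):
    # Decode feature codes in the sub-block to obtain its index information.
    return [(symbol.data.decode("utf-8"), position) for symbol in decode(sub_block)]

def find_description(index_entries, database):
    # Coinciding number strings are disambiguated by the barcode position.
    return [database.get((number_string, position))
            for number_string, position in index_entries]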
Step S107: based on the description information, a recognition result having a specified representation form is obtained.
In practical applications, different suppliers may use different terms for the same semantic key. For example, the key "telephone number" may be expressed in different languages or abbreviations by different suppliers (e.g., Tel, phone, telephone). In this step, all of "Tel" and "phone" are mapped to "telephone number" in the database based on a preset learning model and/or matching rules. Thus, through the learning model and/or matching rules, a single entry corresponding to a plurality of fields with different representations can be obtained in the database, i.e., a many-to-one conversion is achieved. In other words, both the learning model and the matching rules are used for semantic matching, to convert the recognized content of the image sub-blocks into entries of an information text in a preset form (e.g., a standardized information text).
By representing the recognized content in a specified representation form (e.g., a standardized information text), use of the result and integration among multiple systems are facilitated. For example, the key "telephone number" expressed in different languages or abbreviations can be expressed with one designated term (e.g., Tel No.) that other systems can call directly without further recognition or conversion.
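The many-to-one conversion of step S107 might be sketched with a simple matching-rule table (a learning model is not shown); the surface forms and the standardized term are examples taken from the text above:
# Assumed matching rules: many surface forms map to one standardized entry.
FIELD_RULES = {
    "tel": "Tel No.",
    "phone": "Tel No.",
    "telephone": "Tel No.",
}

def to_standard_text(recognized_fields):
    # Convert recognized field/value pairs into information text in a preset form.
    standardized = {}
    for raw_key, value in recognized_fields.items():
        key = FIELD_RULES.get(raw_key.strip().lower(), raw_key)
        standardized[key] = value
    return standardized

# Example: {"Tel": "010-1234", "Telephone": "010-1234"} -> {"Tel No.": "010-1234"}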
Step S108: verify the recognition result.
In this step, verification information from the database is obtained, and whether the recognition result is abnormal is verified based on the verification information; that is, the recognized content can be verified against information pre-stored in the database. If an abnormality exists in the recognition result, a first update operation instruction for the database is generated based on the type and content of the abnormality. It will be appreciated that when the database is operated on according to the first update operation instruction, the part related to the recognition result (e.g., the template, the division rule, etc. involved in image recognition) may be updated. As an example, at least one of the following may be updated: key information, division rules, division templates, learning models, and order information. In one embodiment, the recognition result may be verified by calling order information in the database, and the verified and corrected recognized content may be provided back to the database. It will be appreciated that a recognition result in a specified representation form facilitates verification against the information in the database.
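Step S108 could be sketched as follows; the anomaly types, the order-information lookup and the shape of the first update operation instruction are assumptions made only for illustration:
def verify(recognition_result, database):
    # Verify the recognition result against pre-stored order information.
    order = database.get_order(recognition_result.get("order_no"))  # assumed API
    if order is None:
        anomaly = {"type": "missing_order", "content": recognition_result.get("order_no")}
    elif order["material_code"] != recognition_result.get("material_code"):
        anomaly = {"type": "field_mismatch", "content": "material_code"}
    else:
        return None  # no abnormality found

    # First update operation instruction: which part of the database to update
    # (key information, division rule, division template, learning model, order
    # information) depends on the type and content of the abnormality.
    return {"op": "update", "target": "image_recognition", "anomaly": anomaly}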
In one embodiment, for both of the foregoing cases, namely (1) no division template exists, and (2) a division template exists but part of the image cannot be matched with it, the template can be updated based on the verification result. If the verification result is accurate, a division template is generated or updated based on the division rule for subsequent use. For example, when no division template exists, a division template may be generated based at least on the index information and positions of the image sub-blocks.
FIG. 2 is a diagram of an image recognition apparatus according to an embodiment of the present invention.
The image recognition apparatus 200 includes a preprocessing module 201, an extraction module 202, a dividing module 203, a recognition module 204, and a verification module 205. Specifically, the preprocessing module 201 is configured to preprocess the obtained image so that the difference between pixels of text in the image and the other pixels reaches a specified threshold. The extraction module 202 receives the preprocessed image and performs feature image extraction on it to obtain at least one feature image associated with the image, such as feature codes, tables, and/or other extraction marks (e.g., empty lines, black spaces, bold characters, etc.).
The dividing module 203 divides the image based on the obtained feature images to obtain the image sub-blocks corresponding to the feature images and the positions of those image sub-blocks, wherein the image sub-blocks contain the feature images. The recognition module 204 recognizes each image sub-block to determine its index information, and further determines the data record corresponding to the index information. The recognition module 204 also converts the recognized content into entries of an information text in a preset form based on specified matching rules.
The verification module 205 is configured to obtain verification information from a verification database, verify whether the recognized content has an abnormality based on the verification information, and, if an abnormality exists, generate a corresponding database update operation instruction based on the type and content of the abnormality.
Fig. 3 is a flow chart of material inspection according to an embodiment of the invention.
First, step S301 is executed to obtain the recognition result of an image characterizing a material, and step S302 is then executed to determine whether the recognition result is abnormal. If the recognition result is accurate, step S303 is executed to obtain the material inspection result; otherwise, step S306 is executed to update the relevant content in the database, for example by generating a first update operation instruction for the database to update the part of the database associated with the image recognition result.
Step S304 is executed to determine whether the material passes inspection, that is, whether the material inspection result is abnormal. If the number of accurately inspected lots reaches a certain value, the inspection plan is adjusted periodically (S305), for example by reducing the inspection items for that material; if the inspection does not pass, step S306 is executed to generate a second update operation instruction. When the database is operated on according to the second update operation instruction, the part of the database related to the quality inspection may be updated. It will be appreciated that the inspection plan for the inspected object may be updated in the database based on the type of anomaly and/or the stage at which the anomaly was generated.
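The flow of FIG. 3 might be expressed roughly as follows; database.apply, the lot threshold and the instruction format are assumptions, not part of the invention as claimed:
def material_inspection(recognition_result, quality_info, database,
                        qualified_lots, lot_threshold=100):
    # S302: is the recognition result abnormal?
    if recognition_result.get("anomaly"):
        # S306: first update operation instruction.
        database.apply({"op": "update", "target": "image_recognition",
                        "reason": recognition_result["anomaly"]})
        return "recognition_anomaly"

    # S303 / S304: obtain the inspection result and check whether it passes.
    if quality_info.get("anomaly"):
        # S306: second update operation instruction for the quality-related part.
        database.apply({"op": "update", "target": "quality_inspection",
                        "reason": quality_info["anomaly"]})
        return "quality_anomaly"

    # S305: after enough qualified lots, the inspection plan may be relaxed.
    if qualified_lots + 1 >= lot_threshold:
        database.apply({"op": "update", "target": "inspection_plan",
                        "action": "reduce_inspection_items"})
    return "passed"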
FIG. 4 is a network architecture diagram of a process control system according to an embodiment of the invention.
As shown, the data processing system DPS receives image data (e.g., a plurality of images of a delivery slip and/or package label captured by a field device) from an image acquisition device IMD, and identifies the received material by recognizing the images.
The image acquisition device may include, but is not limited to, a 2D color or grayscale camera, a 3D camera, and the like. The 2D camera can be used to check whether the material packaging is damaged, and the 3D camera can be used to check whether the shape of the material is consistent with the shape pre-recorded in the database.
The data processing system DPS is communicatively coupled to the ERP system to query the ERP system for material orders and supplier information, and may also submit the material inspection results to the ERP system so that corresponding operations are performed on the order for that material, such as completing a purchase order or returning unacceptable material.
In addition, the data processing system DPS is also communicatively connected to the failure data system EDS to obtain material usage and quality inspection information, such as material usage information and material quality consistency information during the production process, and to update the content of the data processing system's databases involved in image recognition, material inspection, and so on, such as models and rules.
In one embodiment, the data processing system DPS is also communicatively coupled to a remote server (e.g., a cloud server) to implement material usage data analysis, update models and rules, manage suppliers, etc. on the cloud server.
The flows of the methods shown in FIGS. 1 and 3 also represent computer-readable instructions, including a program executed by a processor. The program may be embodied on a tangible computer-readable medium such as a CD-ROM, a floppy disk, a hard disk, a digital versatile disk (DVD), a Blu-ray disc, or another form of memory. Alternatively, some or all of the steps in the example methods of FIGS. 1 and 3 may be implemented using any combination of application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable logic devices (FPLDs), discrete logic, hardware, firmware, and so on. The information may be stored on the readable medium at any time. It will be appreciated that the computer-readable instructions may also be stored on a cloud platform in a web server for ease of use.
The invention also provides a computer device comprising a processor and a memory. The memory is used for storing instructions which, when executed, cause the processor to perform the methods described in FIGS. 1 and 3.
Another aspect of the invention also proposes a computer program product, tangibly stored on a computer-readable medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform the methods described in FIGS. 1 and 3.
While the invention has been illustrated and described in detail in the drawings and the foregoing description with reference to preferred embodiments, the invention is not limited to the disclosed embodiments, and other variations derived therefrom by those skilled in the art remain within the scope of the invention.

Claims (17)

  1. A control method based on image recognition, characterized by comprising the following steps:
    performing feature extraction on an image to obtain at least one feature image associated with the image;
    dividing the image based on the at least one feature image to obtain at least one image sub-block and position information of the at least one image sub-block; and
    performing index extraction on the at least one image sub-block to obtain at least one piece of index information associated with the at least one image sub-block, and obtaining description information associated with the at least one image sub-block based on the at least one piece of index information, so as to obtain a recognition result of the image.
  2. The method of claim 1, further comprising:
    if a template mark is included in the at least one feature image, dividing the image based at least on a division template associated with the template mark;
    if no template mark is included in the at least one feature image, dividing the image based on a type of the at least one feature image.
  3. The method of claim 2, further comprising:
    if the matching degree between the division template and the image is greater than or equal to a first threshold, dividing the image based on the division template;
    if the matching degree between the division template and the image is less than the first threshold, dividing a first portion of the image based on the division template and dividing a second portion of the image based on feature image types in the second portion,
    wherein the first portion is the portion of the image matching the division template, and the remaining portion of the image includes the second portion.
  4. The method of claim 1, further comprising:
    representing the recognition result as information text in a preset form based on the description information.
  5. The method of claim 1, further comprising:
    obtaining verification information from a database, verifying whether an abnormality exists in the recognition result based on the verification information, and, if an abnormality exists in the recognition result, generating a first update operation instruction for the database based on the type and content of the abnormality, so as to update the part of the database associated with the recognition result.
  6. The method of claim 5, comprising:
    if no abnormality exists in the recognition result of the image, obtaining quality inspection information of the object to be inspected that is associated with the image;
    and if the quality inspection information has an abnormality, generating a second updating operation instruction for the database so as to update the part of the database associated with the quality inspection information.
  7. The method of claim 1, comprising:
    preprocessing the image before feature extraction is performed on it, so that the difference between pixels of a designated area and other pixel areas in the image is greater than or equal to a second threshold.
  8. An apparatus based on image recognition, comprising:
    an extraction module configured to perform feature extraction on an image to obtain at least one feature image associated with the image;
    a dividing module configured to divide the image based on the at least one feature image to obtain at least one image sub-block and position information of the at least one image sub-block;
    a recognition module configured to perform index extraction on the at least one image sub-block to obtain at least one piece of index information associated with the at least one image sub-block, and to obtain description information associated with the at least one image sub-block based on the at least one piece of index information, so as to obtain a recognition result of the image.
  9. The apparatus of claim 8, wherein the partitioning module is further configured to:
    if a template mark is included in the at least one feature image, divide the image based at least on a division template associated with the template mark;
    if no template mark is included in the at least one feature image, divide the image based on a type of the at least one feature image.
  10. The apparatus of claim 9, wherein the partitioning module is further configured to:
    if the matching degree between the division template and the image is greater than or equal to a first threshold, divide the image based on the division template;
    if the matching degree between the division template and the image is less than the first threshold, divide a first portion of the image based on the division template and divide a second portion of the image based on feature image types in the second portion,
    wherein the first portion is the portion of the image matching the division template, and the remaining portion of the image includes the second portion.
  11. The apparatus of claim 8, wherein the identification module is further configured to:
    represent the recognition result as information text in a preset form based on the description information.
  12. The apparatus of claim 8, further comprising:
    a verification module configured to obtain verification information from a database, verify whether an abnormality exists in the recognition result based on the verification information, and, if an abnormality exists in the recognition result, generate a first update operation instruction for the database based on the type and content of the abnormality, so as to update the part of the database associated with the recognition result.
  13. The apparatus of claim 12, wherein the verification module is further configured to:
    if no abnormality exists in the recognition result of the image, obtain quality inspection information of the object to be inspected that is associated with the image,
    and if an abnormality exists in the quality inspection information, generate a second update operation instruction for the database, so as to update the part of the database associated with the quality inspection information.
  14. The apparatus of claim 8, further comprising:
    a preprocessing module configured to preprocess the image before feature extraction is performed on it, so that the difference between pixels of a designated area and pixels of other areas in the image is greater than or equal to a second threshold.
  15. A computer storage medium having stored thereon computer-executable instructions that, when executed, perform the method of any of claims 1 to 7.
  16. A computer device comprising a memory and a processor, the memory having stored thereon computer-executable instructions that, when executed, cause the processor to perform the method of any of claims 1 to 7.
  17. A computer program product tangibly stored on a computer-readable medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform the method of any of claims 1 to 7.
CN201980096186.6A 2019-05-27 2019-05-27 Control method and device based on image recognition Pending CN113841156A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/088640 WO2020237480A1 (en) 2019-05-27 2019-05-27 Control method and device based on image recognition

Publications (1)

Publication Number Publication Date
CN113841156A (en) 2021-12-24

Family

ID=73552194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980096186.6A Pending CN113841156A (en) 2019-05-27 2019-05-27 Control method and device based on image recognition

Country Status (2)

Country Link
CN (1) CN113841156A (en)
WO (1) WO2020237480A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807651B (en) * 2021-08-09 2024-02-23 中建二局安装工程有限公司 Component whole process management system based on big data, terminal equipment and management method
CN116740581B (en) * 2023-08-16 2023-10-27 深圳市欢创科技有限公司 Method for determining material identification model, method for returning to base station and electronic equipment
CN117170314B (en) * 2023-11-02 2024-01-26 天津诺瑞信精密电子有限公司 Control method and system for cutting and drilling numerical control machine tool

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002175304A (en) * 1998-12-17 2002-06-21 Matsushita Electric Ind Co Ltd Image-retrieving device and its method
CN102339289A (en) * 2010-07-21 2012-02-01 阿里巴巴集团控股有限公司 Match identification method for character information and image information and server
CN102831694A (en) * 2012-08-09 2012-12-19 广州广电运通金融电子股份有限公司 Image identification system and image storage control method
DE102012005325A1 (en) * 2012-03-19 2013-09-19 Ernst Pechtl Machine image recognition method based on a Kl system
WO2016107475A1 (en) * 2014-12-30 2016-07-07 清华大学 Vehicle identification method and system
CN105824886A (en) * 2016-03-10 2016-08-03 西安电子科技大学 Rapid food recognition method based on Markov random field
CN106294535A (en) * 2016-07-19 2017-01-04 百度在线网络技术(北京)有限公司 The recognition methods of website and device
CN108319964A (en) * 2018-02-07 2018-07-24 嘉兴学院 A kind of fire image recognition methods based on composite character and manifold learning
CN108427959A (en) * 2018-02-07 2018-08-21 北京工业大数据创新中心有限公司 Board state collection method based on image recognition and system
CN108446717A (en) * 2018-02-07 2018-08-24 苏州工业大数据创新中心有限公司 A kind of board state collection method and system based on image recognition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10013637B2 (en) * 2015-01-22 2018-07-03 Microsoft Technology Licensing, Llc Optimizing multi-class image classification using patch features
CN106708904A (en) * 2015-11-17 2017-05-24 北京奇虎科技有限公司 Image search method and apparatus

Also Published As

Publication number Publication date
WO2020237480A1 (en) 2020-12-03

Similar Documents

Publication Publication Date Title
CN109840519B (en) Self-adaptive intelligent bill identification and input device and application method thereof
US20230021040A1 (en) Methods and systems for automated table detection within documents
CN109887153B (en) Finance and tax processing method and system
CN108256074B (en) Verification processing method and device, electronic equipment and storage medium
US11756323B2 (en) Method of automatically recognizing and classifying design information in imaged PID drawing and method of automatically creating intelligent PID drawing using design information stored in database
US20210182494A1 (en) Post-filtering of named entities with machine learning
KR102177550B1 (en) Method of automatically recognizing and classifying information of design in imaged PID drawings
US8676731B1 (en) Data extraction confidence attribute with transformations
US9495658B2 (en) System, method, and apparatus for barcode identification workflow
CN113841156A (en) Control method and device based on image recognition
EP2414992A1 (en) Apparatus and methods for analysing goods packages
US20220292861A1 (en) Docket Analysis Methods and Systems
CN108363943A (en) Clearance robot based on Weigh sensor technology
CN111626177A (en) PCB element identification method and device
CN112418813B (en) AEO qualification intelligent rating management system and method based on intelligent analysis and identification and storage medium
CN112613367A (en) Bill information text box acquisition method, system, equipment and storage medium
CN107563689A (en) Use bar code management system and method
CN110688445B (en) Digital archive construction method
CN116018623A (en) Improved product label inspection
CN117436419B (en) Control method and device for automatically updating goods registration report data
CN110991411A (en) Intelligent document structured extraction method suitable for logistics industry
CN111047261A (en) Warehouse logistics order identification method and system
CN114174996A (en) Repair support system and repair support method
CN114780521A (en) Verification method for delivered data quality
CN114743198A (en) Method and device for identifying bill with form

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination