CN112560718A - Method and device for acquiring material information, storage medium and electronic device - Google Patents

Method and device for acquiring material information, storage medium and electronic device

Info

Publication number
CN112560718A
Application CN202011517201.0A; publication CN112560718A
Authority
CN
China
Prior art keywords
network
target image
preprocessing
material information
optical character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011517201.0A
Other languages
Chinese (zh)
Inventor
邓云芳
吴志伟
潘家贤
张俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202011517201.0A
Publication of CN112560718A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63 - Scene text, e.g. street names
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Character Discrimination (AREA)

Abstract

The application discloses a method and a device for acquiring material information, a storage medium and an electronic device. The method comprises the following steps: collecting a target image of a material; and recognizing material information from the target image by means of computer vision classification recognition and optical character recognition. The appearance of the material and the text on the material are read in real time through a camera and interpreted by means of computer vision classification recognition and OCR recognition, which greatly improves working efficiency and saves human resources, so the technical problem of low efficiency in obtaining material information in the related art can be solved.

Description

Method and device for acquiring material information, storage medium and electronic device
Technical Field
The application relates to the field of artificial intelligence, in particular to a method and a device for acquiring material information, a storage medium and an electronic device.
Background
Many companies evaluate the overall condition of their suppliers (e.g., quality evaluation, cost evaluation, supply assurance evaluation, mutual customers, etc.), and most such evaluations rest on the impressions of the purchasing agent. The drawbacks of this approach are that it is highly subjective, the strengths of each supplier in each dimension cannot be obtained quickly, and there is no strong, effective data support when different suppliers are compared. The purchasing agent has no objective, intuitive, clear and transparent data to support the choice of a supplier and finds it difficult to quickly select, from many candidates, the supplier that brings the greatest benefit to the company. In addition, the characteristics and classification of the materials have to be judged by the agent, so reading the labels and text information on the materials consumes the agent's time, and it is difficult to classify materials accurately according to their appearance characteristics.
At present, no effective solution has been proposed for the problem of low efficiency in acquiring material information in the related art.
Disclosure of Invention
The embodiments of the application provide a method and a device for acquiring material information, a storage medium and an electronic device, so as at least to solve the technical problem of low efficiency in acquiring material information in the related art.
According to an aspect of an embodiment of the present application, a method for acquiring material information is provided, including: collecting a target image of a material; and recognizing material information from the target image by using computer vision classification recognition and optical character recognition.
Optionally, when material information is recognized from the target image by using computer vision classification recognition and optical character recognition, the target image is input into a preprocessing network for preprocessing; a preprocessing result is input into an SSD network, and the material type in the target image is identified through the SSD network, wherein the material information comprises the material type; and the preprocessing result is input into an optical character recognition network, and the material text in the target image is recognized through the optical character recognition network, wherein the material information comprises the material text.
Optionally, the preprocessing network includes a mobile network, wherein when the target image is input into the preprocessing network for preprocessing, material features are extracted from the target image through the mobile network, and the preprocessing result includes the material features.
Optionally, the preprocessing network further includes a pyramid network, an attention network layer, and a residual network, wherein when the target image is input into the preprocessing network for preprocessing, in the process of extracting material features from the target image through the mobile network, multi-scale transformation is performed through the pyramid network, an attention mechanism is operated through the attention network, and a residual is added through the residual network, so as to improve accuracy of subsequent classification and text extraction by using the material features.
Optionally, when the type of the material in the target image is identified through the SSD network, processing the preprocessing result into feature maps of different scales through the SSD network; generating an identification frame, and determining a region to be identified in a feature map by adjusting the position of the identification frame; and classifying the area to be identified by using a Circle layer of the SSD network to obtain the material type.
Optionally, when the material text in the target image is recognized through the optical character recognition network, angle correction processing, denoising processing, defogging processing and image enhancement processing are performed on the preprocessing result; the characters in the recognized feature region are then segmented into lines, each recognized line of characters is cut out, each line is further segmented into columns so that every individual character is cut out, and each character is analyzed to obtain the recognized material text.
Optionally, the pre-processing network, the SSD network, and the optical character recognition network are trained using pre-labeled material images before identifying material information from the target image using computer vision classification recognition and optical character recognition.
According to another aspect of the embodiments of the present application, there is also provided an apparatus for acquiring material information, including: the acquisition unit is used for acquiring a target image of the material; and the identification unit is used for identifying the material information from the target image by utilizing computer vision classification identification and optical character identification.
Optionally, the identification unit is further configured to input the target image into a preprocessing network for preprocessing when the material information is identified from the target image by using computer vision classification identification and optical character identification; inputting a preprocessing result into an SSD network, and identifying a material type in the target image through the SSD network, wherein the material information comprises the material type; inputting the preprocessing result into an optical character recognition network, and recognizing a material text in the target image through the optical character recognition network, wherein the material information comprises the material text.
Optionally, the preprocessing network includes a mobile network, wherein the identification unit is further configured to extract material features from the target image through the mobile network when the target image is input into the preprocessing network for preprocessing, where the preprocessing result includes the material features.
Optionally, the preprocessing network further includes a pyramid network, an attention network layer, and a residual network, wherein when the target image is input into the preprocessing network for preprocessing, in the process of extracting material features from the target image through the mobile network, multi-scale transformation is performed through the pyramid network, an attention mechanism is operated through the attention network, and a residual is added through the residual network, so as to improve accuracy of subsequent classification and text extraction by using the material features.
Optionally, the identifying unit is further configured to process the preprocessing result into feature maps of different scales through the SSD network when the material type in the target image is identified through the SSD network; generating an identification frame, and determining a region to be identified in a feature map by adjusting the position of the identification frame; and classifying the area to be identified by using a Circle layer of the SSD network to obtain the material type.
Optionally, the recognition unit is further configured to, when the material text in the target image is recognized through the optical character recognition network, perform angle correction processing, denoising processing, defogging processing and image enhancement processing on the preprocessing result, then segment the characters in the recognized feature region into lines, cut out each recognized line of characters, further segment each line into columns so that every individual character is cut out, and analyze each character to obtain the recognized material text.
Optionally, the apparatus of the present application may further include a training unit, configured to train the preprocessing network, the SSD network, and the optical character recognition network with the pre-marked material images before identifying the material information from the target image by using the computer vision classification recognition and the optical character recognition.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program which, when executed, performs the above-described method.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above method through the computer program.
In the embodiment of the application, a target image of a material is collected, and material information is recognized from the target image by computer vision classification recognition and optical character recognition. The appearance of the material and the text on the material are read in real time through the camera and interpreted by means of computer vision classification recognition and OCR recognition, which greatly improves working efficiency and saves human resources, so the technical problem of low efficiency in obtaining material information in the related art can be solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of an alternative material information obtaining method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an alternative material information acquisition scheme according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an alternative identification scheme for material information according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative identification scheme for material information according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an alternative material information acquisition device according to an embodiment of the present application;
and fig. 6 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present application, an embodiment of a method for acquiring material information is provided. Fig. 1 is a flowchart of an optional material information obtaining method according to an embodiment of the present application, and as shown in fig. 1, the method may include the following steps:
Step S1, acquiring a target image of the material.
The appearance of the material and the text on the material are read in real time through the camera, and working efficiency is greatly improved and human resources are saved by means of computer vision classification recognition based on MobileNetV3-SSD and OCR recognition.
Step S2, recognizing material information from the target image by using computer vision classification recognition and optical character recognition.
Optionally, the pre-processing network, the SSD network, and the optical character recognition network are trained using pre-labeled material images before identifying material information from the target image using computer vision classification recognition and optical character recognition.
Optionally, when material information is recognized from the target image by using computer vision classification recognition and optical character recognition, the target image is input into a preprocessing network for preprocessing; a preprocessing result is input into an SSD network, and the material type in the target image is identified through the SSD network, wherein the material information comprises the material type; and the preprocessing result is input into an optical character recognition network, and the material text in the target image is recognized through the optical character recognition network, wherein the material information comprises the material text.
By optimizing and fusing a MobileNetV3-SSD (single shot multi-box detector) computer vision classification recognition model with an OCR model, the two can share one backbone network, so that image classification recognition and the OCR functional network layer that extracts the text on the material run on the same features. A Circle layer is added in the classification stage and a Circle loss function is added in the training stage to improve classification precision, and a warm-up steps method is used to adjust the training learning rate so as to avoid large fluctuations of the model during training.
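The shared-backbone arrangement described above can be sketched as follows; this is a minimal PyTorch illustration under stated assumptions (the head structures, the 960-channel width of MobileNetV3-Large features, and the class/charset sizes are illustrative and not the actual implementation of this application):

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_large

class SharedBackboneModel(nn.Module):
    """Illustrative sketch: one MobileNetV3 backbone feeds both an SSD-style
    classification branch and an OCR text branch, so image features are
    computed only once."""

    def __init__(self, num_material_classes: int, num_charset: int):
        super().__init__()
        # Shared feature extractor (the "base network" of both branches).
        self.backbone = mobilenet_v3_large().features
        feat_dim = 960  # channel count of the last MobileNetV3-Large stage
        # Branch 1: material-type classification (stand-in for the SSD head).
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_dim, num_material_classes),
        )
        # Branch 2: per-location character logits (stand-in for the OCR head).
        self.ocr_head = nn.Conv2d(feat_dim, num_charset, kernel_size=1)

    def forward(self, image: torch.Tensor):
        feats = self.backbone(image)        # shared computation, done once
        cls_logits = self.cls_head(feats)   # material type
        char_logits = self.ocr_head(feats)  # coarse character map
        return cls_logits, char_logits

# model = SharedBackboneModel(num_material_classes=20, num_charset=5000)
# cls_logits, char_logits = model(torch.randn(1, 3, 320, 320))
```

The point of the sketch is that `self.backbone(image)` runs once and both branches reuse its output, which is what reduces computation and running time.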
Optionally, the preprocessing network includes a mobile network, wherein when the target image is input into the preprocessing network for preprocessing, material features are extracted from the target image through the mobile network, and the preprocessing result includes the material features.
Optionally, the preprocessing network further includes a pyramid network, an attention network layer, and a residual network, wherein when the target image is input into the preprocessing network for preprocessing, in the process of extracting material features from the target image through the mobile network, multi-scale transformation is performed through the pyramid network, an attention mechanism is operated through the attention network, and a residual is added through the residual network, so as to improve accuracy of subsequent classification and text extraction by using the material features.
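As a rough illustration of the pyramid, attention and residual additions to the preprocessing network, the following sketch combines a squeeze-and-excitation style channel attention with a residual add and a minimal two-level top-down pyramid merge; the block designs and channel counts are assumptions for illustration only, not the networks disclosed in this application:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEAttention(nn.Module):
    """Squeeze-and-excitation style channel attention with a residual add,
    standing in for the attention network layer and the residual network."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average pooling
        attended = x * w[:, :, None, None]   # excite: channel re-weighting
        return x + attended                  # residual connection

class TinyFPN(nn.Module):
    """Minimal top-down merge of two backbone stages (multi-scale transform)."""
    def __init__(self, c_low: int, c_high: int, out: int = 128):
        super().__init__()
        self.lat_low = nn.Conv2d(c_low, out, 1)
        self.lat_high = nn.Conv2d(c_high, out, 1)
        self.smooth = nn.Conv2d(out, out, 3, padding=1)

    def forward(self, low, high):
        top = self.lat_high(high)
        up = F.interpolate(top, size=low.shape[-2:], mode="nearest")
        return self.smooth(self.lat_low(low) + up)  # fused multi-scale feature
```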
Optionally, when the type of the material in the target image is identified through the SSD network, processing the preprocessing result into feature maps of different scales through the SSD network; generating an identification frame, and determining a region to be identified in a feature map by adjusting the position of the identification frame; and classifying the area to be identified by using a Circle layer of the SSD network to obtain the material type.
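A hedged sketch of the multi-scale default-box (identification-frame) generation used by SSD-style detectors is given below; the feature-map sizes, scales and aspect ratios are illustrative values, not parameters disclosed in this application:

```python
import itertools
import math
import torch

def generate_default_boxes(feature_map_sizes, scales, aspect_ratios):
    """Illustrative SSD-style default-box generation: for every cell of every
    feature map, emit (cx, cy, w, h) boxes in relative [0, 1] coordinates."""
    boxes = []
    for (fm_h, fm_w), scale in zip(feature_map_sizes, scales):
        for i, j in itertools.product(range(fm_h), range(fm_w)):
            cx, cy = (j + 0.5) / fm_w, (i + 0.5) / fm_h
            for ar in aspect_ratios:
                boxes.append([cx, cy, scale * math.sqrt(ar), scale / math.sqrt(ar)])
    return torch.tensor(boxes)

# e.g. three feature maps of decreasing resolution, as produced by the SSD head
default_boxes = generate_default_boxes(
    feature_map_sizes=[(38, 38), (19, 19), (10, 10)],
    scales=[0.1, 0.3, 0.5],
    aspect_ratios=[1.0, 2.0, 0.5],
)
print(default_boxes.shape)  # (num_boxes, 4)
```

Each default box is then refined (position adjustment) and classified, here by the Circle layer, to determine the region to be identified and the material type.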
Optionally, when the material text in the target image is recognized through the optical character recognition network, angle correction processing, denoising processing, defogging processing and image enhancement processing are performed on the preprocessing result; the characters in the recognized feature region are then segmented into lines, each recognized line of characters is cut out, each line is further segmented into columns so that every individual character is cut out, and each character is analyzed to obtain the recognized material text.
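The line-then-column segmentation described above can be approximated with simple projection profiles. The following NumPy sketch assumes a binarized image with text pixels equal to 1; it is only an illustration of the general idea, not the network-based segmentation of this application:

```python
import numpy as np

def segment_lines_and_chars(binary_img: np.ndarray):
    """Projection-profile segmentation sketch: rows whose ink count exceeds a
    threshold form text lines; within each line, columns with ink form
    character cells. Each character crop would then go to a classifier."""
    def runs(profile, thresh=0):
        spans, start = [], None
        for idx, v in enumerate(profile):
            if v > thresh and start is None:
                start = idx
            elif v <= thresh and start is not None:
                spans.append((start, idx))
                start = None
        if start is not None:
            spans.append((start, len(profile)))
        return spans

    chars = []
    for top, bottom in runs(binary_img.sum(axis=1)):    # horizontal projection -> lines
        line = binary_img[top:bottom]
        for left, right in runs(line.sum(axis=0)):      # vertical projection -> characters
            chars.append(line[:, left:right])
    return chars
```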
Through the above steps, a target image of the material is acquired, and material information is recognized from the target image by computer vision classification recognition and optical character recognition. The appearance of the material and the text on the material are read in real time through the camera and interpreted by means of computer vision classification recognition and OCR recognition, which greatly improves working efficiency and saves human resources, so the technical problem of low efficiency in obtaining material information in the related art can be solved.
According to this scheme, data on various aspects of a supplier can be entered conveniently, a comprehensive score of the supplier can be obtained with an evaluation model set up in advance in the system, and the data can be displayed intuitively to the business staff, who can conveniently query, analyze and use it. When data are entered, the material is read in real time through the camera, or a locally uploaded picture is read, for classification, and the text information on the material is read through OCR. As an alternative example, as shown in fig. 2 to 4, the technical solution of the present application is described in further detail below with reference to specific embodiments.
As shown in fig. 2 and 4, a material is placed within the camera field of view, or a material picture is uploaded, and it is read by the MobileNetV3-SSD and OCR dual model. Feature extraction from the material picture is performed by MobileNetV3 (the mobile network); an FPN (feature pyramid network, mainly used for multi-scale transformation) is added in the process of extracting the material feature maps, and an attention mechanism and a residual idea are then added to MobileNetV3 to improve, over the original baseline, the classification precision and the accuracy of extracting text information.
OCR recognition layer step:
as shown in fig. 3, when a material is recognized in the camera field of view, the material to be recognized is framed by the system image module and feature information is extracted through a multi-scale algorithm, where the base network (backbone network) uses MobileNetV3. A series of image preprocessing steps then performs angle correction, denoising, defogging and image enhancement; the characters in the feature region recognized in the frame are segmented into lines, each recognized line of characters is cut out, each line of text is then segmented into columns so that every character is cut out, and each cut-out character is analyzed to obtain the recognized text information, which is also the general OCR (optical character recognition) flow. Bidirectional LSTM semantic analysis is then performed on the recognized text, and according to its semantics the text is added to the corresponding fields of the supplier evaluation system.
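The bidirectional LSTM step that maps recognized text to supplier-evaluation fields could look roughly like the following PyTorch sketch; the vocabulary size, hidden size and field set are illustrative assumptions, not values taken from this application:

```python
import torch
import torch.nn as nn

class BiLSTMFieldClassifier(nn.Module):
    """Sketch: embed the recognized characters, run a bidirectional LSTM, and
    classify the string into one of the supplier-evaluation fields
    (e.g. model number, manufacturer, specification)."""
    def __init__(self, vocab_size=5000, embed_dim=64, hidden=128, num_fields=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_fields)

    def forward(self, char_ids: torch.Tensor):  # (batch, seq_len) integer ids
        x = self.embed(char_ids)
        _, (h, _) = self.lstm(x)
        h = torch.cat([h[-2], h[-1]], dim=-1)    # last forward + backward states
        return self.out(h)                       # logits over evaluation fields

# logits = BiLSTMFieldClassifier()(torch.randint(1, 5000, (2, 30)))
```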
Optimized SSD classification layer steps:
the SSD classification network is divided into two parts: the base network is the MobileNetV3 backbone network of the image input step, and the latter part is an additional network. On the basis of the extracted features, feature maps of different scales are obtained, multiple sets of default boxes are generated to predict classification and position-adjustment information, and the prediction classification stage uses a Circle layer for classification.
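The Circle layer mentioned above is commonly trained with the Circle loss (Sun et al., 2020). A sketch of its classification form is shown below; the margin m and scale gamma are common defaults from the literature, not values taken from this application:

```python
import torch
import torch.nn.functional as F

def circle_loss(cos_sim: torch.Tensor, labels: torch.Tensor,
                m: float = 0.25, gamma: float = 64.0) -> torch.Tensor:
    """Circle loss sketch: `cos_sim` holds cosine similarities between each
    sample and every class weight, `labels` the target class indices."""
    one_hot = F.one_hot(labels, num_classes=cos_sim.size(1)).bool()
    sp = cos_sim[one_hot].view(-1, 1)                  # similarity to the true class
    sn = cos_sim[~one_hot].view(cos_sim.size(0), -1)   # similarities to other classes

    ap = torch.clamp_min(1 + m - sp.detach(), 0.0)     # adaptive positive weighting
    an = torch.clamp_min(sn.detach() + m, 0.0)         # adaptive negative weighting
    delta_p, delta_n = 1 - m, m

    logit_p = -gamma * ap * (sp - delta_p)
    logit_n = gamma * an * (sn - delta_n)
    # log(1 + sum_n exp(logit_n) * sum_p exp(logit_p))
    return F.softplus(torch.logsumexp(logit_n, dim=1) +
                      torch.logsumexp(logit_p, dim=1)).mean()
```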
Model fusion and training:
the OCR functional network and the base network of the SSD image classification layer both use MobileNetV3. The main purpose is that, during image classification, text extraction and text semantic classification, the feature information of the material is extracted only once by MobileNetV3, which reduces system computation and running time. During training, pictures of various materials and of similar materials are used; the learning rate is adjusted with warm-up steps, the image classification of materials is trained with the Circle loss, and a cross-entropy loss together with linear regression is used to obtain the target position of the material to be recognized.
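A warm-up learning-rate schedule of the kind referred to above can be expressed as a multiplier on the base learning rate, for example as below; the step counts and the linear decay shape are illustrative assumptions:

```python
import torch

def warmup_then_decay(step: int, warmup_steps: int = 1000,
                      total_steps: int = 100000) -> float:
    """Learning-rate multiplier: ramps up linearly for the first `warmup_steps`,
    then decays linearly, which keeps the model from fluctuating sharply early
    in training."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Usage sketch with any torch optimizer:
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
# scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_then_decay)
# for step in range(100000):
#     ...  # forward, loss, backward
#     optimizer.step()
#     scheduler.step()
```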
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
According to another aspect of the embodiment of the application, a material information acquiring device for implementing the material information acquiring method is also provided. Fig. 5 is a schematic diagram of an alternative material information acquiring apparatus according to an embodiment of the present application, and as shown in fig. 5, the apparatus may include:
the acquisition unit 501 is used for acquiring a target image of the material; and the identification unit 503 is used for identifying the material information from the target image by utilizing computer vision classification identification and optical character identification.
It should be noted that the acquiring unit 501 in this embodiment may be configured to execute step S1 in this embodiment, and the identifying unit 503 in this embodiment may be configured to execute step S2 in this embodiment.
Through the above modules, a target image of the material is acquired, and material information is recognized from the target image by computer vision classification recognition and optical character recognition. The appearance of the material and the text on the material are read in real time through the camera and interpreted by means of computer vision classification recognition and OCR recognition, which greatly improves working efficiency and saves human resources, so the technical problem of low efficiency in obtaining material information in the related art can be solved.
Optionally, the identification unit is further configured to input the target image into a preprocessing network for preprocessing when the material information is identified from the target image by using computer vision classification identification and optical character identification; inputting a preprocessing result into an SSD network, and identifying a material type in the target image through the SSD network, wherein the material information comprises the material type; inputting the preprocessing result into an optical character recognition network, and recognizing a material text in the target image through the optical character recognition network, wherein the material information comprises the material text.
Optionally, the preprocessing network includes a mobile network, wherein the identification unit is further configured to extract material features from the target image through the mobile network when the target image is input into the preprocessing network for preprocessing, where the preprocessing result includes the material features.
Optionally, the preprocessing network further includes a pyramid network, an attention network layer, and a residual network, wherein when the target image is input into the preprocessing network for preprocessing, in the process of extracting material features from the target image through the mobile network, multi-scale transformation is performed through the pyramid network, an attention mechanism is operated through the attention network, and a residual is added through the residual network, so as to improve accuracy of subsequent classification and text extraction by using the material features.
Optionally, the identifying unit is further configured to process the preprocessing result into feature maps of different scales through the SSD network when the material type in the target image is identified through the SSD network; generating an identification frame, and determining a region to be identified in a feature map by adjusting the position of the identification frame; and classifying the area to be identified by using a Circle layer of the SSD network to obtain the material type.
Optionally, the recognition unit is further configured to, when the material text in the target image is recognized through the optical character recognition network, perform angle correction processing, denoising processing, defogging processing and image enhancement processing on the preprocessing result, then segment the characters in the recognized feature region into lines, cut out each recognized line of characters, further segment each line into columns so that every individual character is cut out, and analyze each character to obtain the recognized material text.
Optionally, the apparatus of the present application may further include a training unit, configured to train the preprocessing network, the SSD network, and the optical character recognition network with the pre-marked material images before identifying the material information from the target image by using the computer vision classification recognition and the optical character recognition.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules as a part of the apparatus may run in a corresponding hardware environment, and may be implemented by software, or may be implemented by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiment of the application, a server or a terminal for implementing the method for acquiring the material information is also provided.
Fig. 6 is a block diagram of a terminal according to an embodiment of the present application. As shown in fig. 6, the terminal may include one or more processors 201 (only one is shown), a memory 203 and a transmission device 205, and may further comprise an input-output device 207.
The memory 203 may be configured to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for acquiring material information in the embodiment of the present application, and the processor 201 executes various functional applications and data processing by running the software programs and modules stored in the memory 203, that is, implements the method for acquiring material information. The memory 203 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 203 may further include memory located remotely from the processor 201, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 205 is used for receiving or sending data via a network, and can also be used for data transmission between a processor and a memory. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 205 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices to communicate with the internet or a local area Network. In one example, the transmission device 205 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
Wherein the memory 203 is specifically used for storing application programs.
The processor 201 may call the application stored in the memory 203 via the transmission means 205 to perform the following steps:
collecting a target image of a material; and recognizing material information from the target image by using computer vision classification recognition and optical character recognition.
The processor 201 is further configured to perform the following steps:
when material information is recognized from the target image by using computer vision classification recognition and optical character recognition, the target image is input into a preprocessing network for preprocessing; a preprocessing result is input into an SSD network, and the material type in the target image is identified through the SSD network, wherein the material information comprises the material type; and the preprocessing result is input into an optical character recognition network, and the material text in the target image is recognized through the optical character recognition network, wherein the material information comprises the material text.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in fig. 6 is only illustrative, and the terminal may be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet Device (MID) or a PAD; fig. 6 does not limit the structure of the above electronic device. For example, the terminal may also include more or fewer components (e.g., a network interface, a display device, etc.) than shown in fig. 6, or have a configuration different from that shown in fig. 6.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Embodiments of the present application also provide a storage medium. Optionally, in this embodiment, the storage medium may be used to execute a program code of the method for acquiring material information.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
collecting a target image of a material; and recognizing material information from the target image by using computer vision classification recognition and optical character recognition.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
when material information is recognized from the target image by using computer vision classification recognition and optical character recognition, the target image is input into a preprocessing network for preprocessing; a preprocessing result is input into an SSD network, and the material type in the target image is identified through the SSD network, wherein the material information comprises the material type; and the preprocessing result is input into an optical character recognition network, and the material text in the target image is recognized through the optical character recognition network, wherein the material information comprises the material text.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the method described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A method for acquiring material information is characterized by comprising the following steps:
collecting a target image of a material;
and recognizing material information from the target image by using computer vision classification recognition and optical character recognition.
2. The method of claim 1, wherein identifying material information from the target image using computer vision classification recognition and optical character recognition comprises:
inputting the target image into a preprocessing network for preprocessing;
inputting a preprocessing result into an SSD network, and identifying a material type in the target image through the SSD network, wherein the material information comprises the material type;
inputting the preprocessing result into an optical character recognition network, and recognizing a material text in the target image through the optical character recognition network, wherein the material information comprises the material text.
3. The method of claim 2, wherein the pre-processing network comprises a mobile network, and wherein inputting the target image into the pre-processing network for pre-processing comprises:
extracting material features from the target image through the mobile network, wherein the preprocessing result comprises the material features.
4. The method of claim 3, wherein the pre-processing network further comprises a pyramid network, an attention network layer, and a residual network, and wherein inputting the target image into the pre-processing network for pre-processing further comprises:
in the process of extracting the material features from the target image through the mobile network, multi-scale transformation is performed through a pyramid network, an attention mechanism is operated through an attention network, and residual errors are added through the residual error network, so that the accuracy of classification and text extraction performed by using the material features subsequently is improved.
5. The method of claim 2, wherein identifying the type of material in the target image via the SSD network comprises:
processing the preprocessing result into feature maps of different scales through the SSD network;
Generating an identification frame, and determining a region to be identified in a feature map by adjusting the position of the identification frame;
and classifying the area to be identified by using a Circle layer of the SSD network to obtain the material type.
6. The method of claim 2, wherein identifying material text in the target image through the optical character recognition network comprises:
and carrying out angle correction processing, denoising processing, defogging processing and image enhancement processing on the preprocessing result, then segmenting the recognized characters in the characteristic region into lines, cutting out each recognized line of characters, further segmenting each line into columns, cutting out each individual character, and analyzing each character to obtain the recognized material text.
7. The method of claim 1, wherein prior to identifying material information from the target image using computer vision classification recognition and optical character recognition, the method further comprises:
and training the preprocessing network, the SSD network and the optical character recognition network by using the pre-marked material images.
8. An acquisition device of material information, characterized by comprising:
The acquisition unit is used for acquiring a target image of the material;
and the identification unit is used for recognizing the material information from the target image by utilizing computer vision classification recognition and optical character recognition.
9. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program when executed performs the method of any of the preceding claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the method of any of the preceding claims 1 to 7 by means of the computer program.
CN202011517201.0A 2020-12-21 2020-12-21 Method and device for acquiring material information, storage medium and electronic device Pending CN112560718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011517201.0A CN112560718A (en) 2020-12-21 2020-12-21 Method and device for acquiring material information, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011517201.0A CN112560718A (en) 2020-12-21 2020-12-21 Method and device for acquiring material information, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN112560718A true CN112560718A (en) 2021-03-26

Family

ID=75031654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011517201.0A Pending CN112560718A (en) 2020-12-21 2020-12-21 Method and device for acquiring material information, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN112560718A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016058410A1 (en) * 2014-10-17 2016-04-21 中山大学 Method for extracting biomedical image features
CN110826377A (en) * 2018-08-13 2020-02-21 珠海格力电器股份有限公司 Material sorting method and device
CN110705559A (en) * 2019-10-09 2020-01-17 杭州高达软件系统股份有限公司 Steel information recording method, device and equipment based on steel label image recognition
CN111507271A (en) * 2020-04-20 2020-08-07 北京理工大学 Airborne photoelectric video target intelligent detection and identification method
CN111753744A (en) * 2020-06-28 2020-10-09 北京百度网讯科技有限公司 Method, device and equipment for classifying bill images and readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114308718A (en) * 2021-11-16 2022-04-12 江汉大学 Method and device for sorting clothes according to sizes of clothes
CN115810193A (en) * 2023-02-20 2023-03-17 深圳普菲特信息科技股份有限公司 Feeding visual identification method and system and readable storage medium
CN115810193B (en) * 2023-02-20 2023-04-21 深圳普菲特信息科技股份有限公司 Method, system and readable storage medium for visual identification of batch

Similar Documents

Publication Publication Date Title
CN110533097B (en) Image definition recognition method and device, electronic equipment and storage medium
CN110555372A (en) Data entry method, device, equipment and storage medium
WO2016029796A1 (en) Method, device and system for identifying commodity in video image and presenting information thereof
US20180268458A1 (en) Automated recommendation and virtualization systems and methods for e-commerce
CN108319888B (en) Video type identification method and device and computer terminal
CN111178147B (en) Screen crushing and grading method, device, equipment and computer readable storage medium
CN113627402B (en) Image identification method and related device
CN104732226A (en) Character recognition method and device
CN110610169B (en) Picture marking method and device, storage medium and electronic device
CN112560718A (en) Method and device for acquiring material information, storage medium and electronic device
JP2021514228A (en) Image processing methods and devices, and training methods for neural network models
CN111949702B (en) Abnormal transaction data identification method, device and equipment
CN110798709A (en) Video processing method and device, storage medium and electronic device
CN114419363A (en) Target classification model training method and device based on label-free sample data
CN112183296A (en) Simulated bill image generation and bill image recognition method and device
CN112132766A (en) Image restoration method and device, storage medium and electronic device
CN112149690A (en) Tracing method and tracing system based on biological image feature recognition
CN111738199A (en) Image information verification method, image information verification device, image information verification computing device and medium
CN116824135A (en) Atmospheric natural environment test industrial product identification and segmentation method based on machine vision
CN114386013A (en) Automatic student status authentication method and device, computer equipment and storage medium
CN110598705A (en) Semantic annotation method and device for image
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN113204695A (en) Website identification method and device
CN115439850B (en) Method, device, equipment and storage medium for identifying image-text characters based on examination sheets
CN115497152A (en) Customer information analysis method, device, system and medium based on image recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination