CN112036465A - Image recognition method, device, equipment and storage medium - Google Patents
Image recognition method, device, equipment and storage medium
- Publication number
- CN112036465A (application number CN202010872596.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- data model
- recognized
- adjusted
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/243—Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present invention disclose an image recognition method, apparatus, device and storage medium. The method comprises the following steps: acquiring an image containing a target to be recognized, adjusting the image, and labeling the adjusted image; acquiring an image carrying at least one label, wherein the label comprises the denomination information of the banknotes and a label box; inputting the adjusted image into a data model to obtain a trained data model; and inputting an image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized. By adopting these technical means, the accuracy of banknote recognition during vault cash inventory can be improved.
Description
Technical Field
Embodiments of the present invention relate to the technical field of image processing, and in particular to an image recognition method, apparatus, device and storage medium.
Background
In general, during a vault cash inventory the cash has to be arranged in a prescribed order (for example, notes bound into bundles of 100 and stacked in groups of 10 bundles); a worker then photographs the cash and uploads the photographs, and another worker checks and accepts the cash from the photographs. Such manual acceptance checking is accurate, but it is costly and consumes a great deal of manpower and material resources when a large number of photographs have to be checked.
Therefore, an image recognition method is needed to improve the accuracy of banknote recognition during vault cash inventory.
Disclosure of Invention
Embodiments of the present invention provide an image recognition method, apparatus, device and storage medium, aiming to improve the accuracy of banknote recognition during vault cash inventory.
In a first aspect, an embodiment of the present invention provides an image recognition method, including:
acquiring an image containing a target to be recognized, adjusting the image, and labeling the adjusted image;
acquiring an image carrying at least one label;
adjusting the image, and inputting the adjusted image into a data model to obtain a trained data model;
and inputting an image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized.
Optionally, adjusting the image includes:
processing the image into a grayscale image, and performing edge detection on the grayscale image to obtain an edge detection result;
determining an angle to be adjusted according to the edge detection result;
and adjusting the image according to the angle to be adjusted.
Optionally, determining the angle to be adjusted according to the edge detection result includes:
determining the coordinates of at least two points according to the edge detection result;
determining a straight-line detection result according to the coordinates of the at least two points;
and determining the angle to be adjusted according to the angle between the detected straight line and a coordinate axis.
Optionally, the data model is a YOLOv3 model.
Optionally, before the image to be recognized is input into the trained data model, the method further includes:
adjusting the image to be recognized, and inputting the adjusted image to be recognized into the trained data model.
Optionally, after the image to be recognized is input into the trained data model and the denomination information and the number of the banknotes in the image to be recognized are obtained, the method further includes:
calculating the total amount of the banknotes in the image to be recognized.
Optionally, there is at least one image.
Optionally, the label includes the denomination information of the banknote and a label box.
Optionally, the adjustment of the image is implemented by a probabilistic Hough transform.
Optionally, the data model is a YOLOv3 model, and the training process specifically includes:
selecting prior boxes of three different sizes in the image by using k-means++;
extracting features from the image by using convolutional layers;
and performing prediction on multi-scale feature maps by using a residual network.
In a second aspect, an embodiment of the present invention further provides an image recognition apparatus, including:
an image labeling module, configured to acquire an image containing a target to be recognized, adjust the image, and label the adjusted image;
an image acquisition module, configured to acquire an image carrying at least one label;
a trained data model determination module, configured to adjust the image and input the adjusted image into the data model to obtain a trained data model;
and a banknote denomination and number determination module, configured to input an image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized.
Optionally, the trained data model determination module is configured to:
process the image into a grayscale image, and perform edge detection on the grayscale image to obtain an edge detection result;
determine an angle to be adjusted according to the edge detection result;
and adjust the image according to the angle to be adjusted.
Optionally, the trained data model determination module is configured to:
determine the coordinates of at least two points according to the edge detection result;
determine a straight-line detection result according to the coordinates of the at least two points;
and determine the angle to be adjusted according to the angle between the detected straight line and a coordinate axis.
Optionally, the data model is a YOLOv3 model.
Optionally, the apparatus further comprises:
an image-to-be-recognized adjustment module, configured to adjust the image to be recognized and input the adjusted image to be recognized into the trained data model.
The apparatus further comprises:
a banknote total amount calculation module, configured to calculate the total amount of the banknotes in the image to be recognized.
Optionally, there is at least one image.
Optionally, the label includes the denomination information of the banknote and a label box.
Optionally, the adjustment of the image is implemented by a probabilistic Hough transform.
Optionally, the data model is a YOLOv3 model, and the training process specifically includes:
selecting prior boxes of three different sizes in the image by using k-means++;
extracting features from the image by using convolutional layers;
and performing prediction on multi-scale feature maps by using a residual network.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the image recognition method according to any one of the embodiments of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the image recognition method according to any one of the embodiments of the present invention.
An embodiment of the present invention provides an image recognition method, which includes: acquiring an image containing a target to be recognized, adjusting the image, and labeling the adjusted image; acquiring an image carrying at least one label, wherein the label comprises the denomination information of the banknotes and a label box; adjusting the image, and inputting the adjusted image into a data model to obtain a trained data model; and inputting an image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized. By adopting these technical means, the accuracy of banknote recognition during vault cash inventory can be improved.
Drawings
Fig. 1a is a schematic flowchart of an image recognition method according to a first embodiment of the present invention;
FIG. 1b is a schematic diagram of a banknote label box according to the first embodiment of the present invention;
FIG. 1c is a diagram illustrating an image adjustment according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an image recognition apparatus according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an apparatus provided in the third embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, subprograms, and the like.
Example one
Fig. 1a is a schematic flowchart of an image recognition method according to the first embodiment of the present invention. This embodiment is applicable to vault cash inventory scenarios. The method may be executed by an image recognition apparatus, which may be implemented in software and/or hardware and may be integrated in an electronic device, and specifically includes the following steps:
s110, obtaining an image containing a target to be recognized, adjusting the image, and labeling the adjusted image.
In this embodiment, the image containing the target to be recognized is an image of the banknotes in the vault, and the target to be recognized is a banknote. The image is adjusted by means of the probabilistic Hough transform. Specifically, the probabilistic Hough transform maps between the image space and the Hough (parameter) space: a straight line or curve of a given shape in the rectangular coordinate system of the image is mapped to a point in Hough space, where votes accumulate into a peak, so the problem of detecting an arbitrary shape is converted into the problem of finding that peak. In other words, each straight line in the image's rectangular coordinate system corresponds to a point in Hough space; that point is the intersection of the curves contributed by many image points, and the accumulated peak value is the number of curves passing through the intersection.
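Purely as an illustration (not part of the patent), the following minimal Python/OpenCV sketch shows how a probabilistic Hough transform is typically applied to an edge map to obtain straight-line segments; the file name and threshold values are assumptions.

```python
import cv2
import numpy as np

image = cv2.imread("vault_photo.jpg")            # hypothetical input photograph
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # grayscale image
edges = cv2.Canny(gray, 50, 150)                 # edge detection result

# Probabilistic Hough transform: each returned entry is (x1, y1, x2, y2),
# i.e. the coordinates of two points that define a detected straight line.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=100, maxLineGap=10)
```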
And S120, acquiring an image carrying at least one label.
In this embodiment, the label may be the denomination information of a banknote, for example 100 yuan, 50 yuan, 20 yuan or 10 yuan. The label box is a rectangle drawn manually around the banknote denomination in the image, and it may be an individual label box or a bundle-level label box; see the schematic diagram of a banknote label box shown in Fig. 1b. There is at least one such vault image.
S130, adjusting the image, and inputting the adjusted image into the data model to obtain the trained data model.
In this embodiment, optionally, the data model is a YOLOv3 model. YOLOv3 (You Only Look Once, version 3) is an object detection algorithm based on convolutional neural networks (CNNs).
The YOLOv3 detection algorithm generally requires prior boxes to be set and extracts features with a fully convolutional network and a residual network. In YOLOv3, the prior boxes serve as predefined candidate regions: the neural network detects whether an object is present in them and fine-tunes the position of the bounding box. The prior box sizes are obtained by cluster analysis of the labeled boxes in the training set, so that box sizes matching the samples as closely as possible are found, which reduces the difficulty of fine-tuning the prior boxes. The label boxes in YOLOv3 are all axis-aligned rectangles, so when a sample is tilted at an angle, the size of the label box for the same sample varies with the tilt angle. As a result, the prior box sizes extracted by the clustering algorithm cannot accurately match the actual sample sizes, which ultimately makes it harder for training to locate the actual position of the sample. Moreover, when an axis-aligned label box is fitted as tightly as possible to a tilted object, either the box encloses the whole sample and therefore also some cluttered background, or the box is drawn inside the sample and some of the sample's features cannot be extracted; both factors degrade recognition accuracy. Optionally, the data model is a YOLOv3 model, and the training process specifically includes: selecting prior boxes of three different sizes in the image by using k-means++; extracting features from the image by using convolutional layers; and performing prediction on multi-scale feature maps by using a residual network. Specifically, the YOLOv3 model outputs feature maps at three scales, 13 × 13, 26 × 26 and 52 × 52, corresponding to 9 anchor boxes in total, with 3 anchor boxes assigned to each scale.
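As a small illustrative check (assuming the conventional 416 × 416 YOLOv3 input resolution, which the patent does not state), the three output scales follow directly from the network strides of 32, 16 and 8:

```python
input_size = 416                      # assumed input resolution
for stride in (32, 16, 8):
    cells = input_size // stride      # size of the output grid at this stride
    print(f"stride {stride}: {cells}x{cells} grid, 3 anchor boxes per cell")
# -> 13x13, 26x26 and 52x52 feature maps, 9 anchor boxes in total
```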
For a ground-truth box in a training image, if its center point falls within a certain grid cell, the 3 anchor boxes (prior boxes) of that cell are responsible for predicting it; which anchor box actually predicts it is determined during training. YOLOv3 assumes that each grid cell contains at most one ground-truth box, and in practice there are essentially no cases with more than one. The anchor box matched to the ground truth contributes the coordinate error, the confidence error (with a target of 1) and the classification error, while the other anchor boxes contribute only the confidence error (with a target of 0). The training process of YOLOv3 fine-tunes the translation (tx, ty) and scale (tw, th) so that the anchor box overlaps the ground-truth box, thereby reducing the error.
Furthermore, the anchor boxes are several boxes of different sizes obtained by statistics or clustering over the ground-truth boxes of the training set; they keep the data model from searching blindly during training and help the model converge quickly. Each grid cell corresponds to k anchors, meaning that during training the model only refines those k shapes around each cell rather than arbitrary ones. Anchors thus constrain the range of predicted objects and add prior knowledge of object sizes, which enables multi-scale learning. YOLOv3 clusters the ground-truth boxes of all samples in the training set with the k-means algorithm to obtain representative widths and heights.
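A minimal sketch of this clustering step follows; it is a simplification under stated assumptions (it clusters (width, height) pairs with Euclidean k-means++ via scikit-learn, whereas YOLOv3 originally uses 1 − IoU as the distance), and `label_boxes` is a hypothetical array of annotated box sizes.

```python
import numpy as np
from sklearn.cluster import KMeans

def prior_box_sizes(label_boxes, k=3):
    """Cluster the (width, height) pairs of the labeled boxes into k prior-box sizes."""
    wh = np.asarray(label_boxes, dtype=float)        # shape (N, 2): width, height
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(wh)
    centres = km.cluster_centers_
    # Sort the cluster centres by area so the smallest prior box comes first.
    return centres[np.argsort(centres[:, 0] * centres[:, 1])]
```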
In this embodiment, the image needs to be adjusted, and the adjusted image is input to the YOLOv3 model.
Optionally, the adjustment of the image is implemented by a probabilistic Hough transform.
Optionally, adjusting the image includes:
processing the image into a grayscale image, and performing edge detection on the grayscale image to obtain an edge detection result;
determining an angle to be adjusted according to the edge detection result;
and adjusting the image according to the angle to be adjusted.
Optionally, determining the angle to be adjusted according to the edge detection result includes:
determining the coordinates of at least two points according to the edge detection result;
determining a straight-line detection result according to the coordinates of the at least two points;
and determining the angle to be adjusted according to the angle between the detected straight line and a coordinate axis.
In this embodiment, this is implemented with the probabilistic Hough transform. The result of the image adjustment can be seen in the schematic diagram shown in Fig. 1c.
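Purely as an illustrative sketch of these steps (not the patent's actual implementation), one can estimate the skew angle from the line segments returned by the probabilistic Hough transform and rotate the image accordingly; the median-angle heuristic and the white border value are assumptions.

```python
import cv2
import numpy as np

def deskew(image, segments):
    """segments: array of (x1, y1, x2, y2) line segments from cv2.HoughLinesP."""
    angles = []
    for x1, y1, x2, y2 in segments.reshape(-1, 4):
        # Angle between the detected straight line and the horizontal coordinate axis.
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    skew = float(np.median(angles))                   # robust estimate of the tilt
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), skew, 1.0)
    return cv2.warpAffine(image, m, (w, h), borderValue=(255, 255, 255))
```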
S140, inputting the image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized.
In this embodiment, the trained data model can recognize the denomination and the number of the banknotes, thereby improving the efficiency of banknote recognition.
Optionally, before the image to be recognized is input into the trained data model, the method further includes:
adjusting the image to be recognized, and inputting the adjusted image to be recognized into the trained data model.
In this embodiment, if the image to be recognized is tilted, it needs to be adjusted, and the adjusted image to be recognized is then input into the trained data model.
Optionally, after the image to be recognized is input into the trained data model and the denomination information and the number of the banknotes in the image to be recognized are obtained, the method further includes:
calculating the total amount of the banknotes in the image to be recognized.
In this embodiment, the total amount of the banknotes is the number of banknotes multiplied by their denomination.
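A trivial sketch of this tallying step is shown below; it assumes the model outputs one denomination value per detected banknote, and the example figures are illustrative only.

```python
from collections import Counter

def total_amount(denominations):
    """denominations: one value (in yuan) per banknote detected by the model."""
    counts = Counter(denominations)                         # notes per denomination
    return sum(denom * n for denom, n in counts.items())    # count x denomination

# Example: three detected 100-yuan notes and one 50-yuan note.
print(total_amount([100, 100, 100, 50]))                    # -> 350
```

If the detections correspond to wad- or bundle-level label boxes rather than individual notes, each count would additionally be multiplied by the (assumed) number of notes per wad or bundle.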
An embodiment of the present invention provides an image recognition method, which includes: acquiring an image containing a target to be recognized, adjusting the image, and labeling the adjusted image; acquiring an image carrying at least one label, wherein the label comprises the denomination information of the banknotes and a label box; adjusting the image, and inputting the adjusted image into a data model to obtain a trained data model; and inputting an image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized. By adopting these technical means, the accuracy of banknote recognition during vault cash inventory can be improved.
Example two
Fig. 2 is a schematic structural diagram of an image recognition apparatus according to a second embodiment of the present invention. The image recognition device provided by the embodiment of the invention can execute the image recognition method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. As shown in fig. 2, the apparatus includes:
an image labeling module 210, configured to acquire an image containing a target to be recognized, adjust the image, and label the adjusted image;
an image acquisition module 220, configured to acquire an image carrying at least one label;
a trained data model determination module 230, configured to adjust the image and input the adjusted image into a data model to obtain a trained data model;
and a banknote denomination and number determination module 240, configured to input the image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized.
Optionally, the trained data model determination module 230 is configured to:
process the image into a grayscale image, and perform edge detection on the grayscale image to obtain an edge detection result;
determine an angle to be adjusted according to the edge detection result;
and adjust the image according to the angle to be adjusted.
Optionally, the trained data model determination module 230 is configured to:
determine the coordinates of at least two points according to the edge detection result;
determine a straight-line detection result according to the coordinates of the at least two points;
and determine the angle to be adjusted according to the angle between the detected straight line and a coordinate axis.
Optionally, the data model is a YOLOv3 model.
Optionally, the apparatus further comprises:
an image-to-be-recognized adjustment module 250, configured to adjust the image to be recognized and input the adjusted image to be recognized into the trained data model.
The apparatus further comprises:
a banknote total amount calculation module 260, configured to calculate the total amount of the banknotes in the image to be recognized.
Optionally, there is at least one image.
Optionally, the label includes the denomination information of the banknote and a label box.
Optionally, the adjustment of the image is implemented by a probabilistic Hough transform.
Optionally, the data model is a YOLOv3 model, and the training process specifically includes:
selecting prior boxes of three different sizes in the image by using k-means++;
extracting features from the image by using convolutional layers;
and performing prediction on multi-scale feature maps by using a residual network.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described apparatus may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an exemplary device according to the third embodiment of the present invention, suitable for implementing embodiments of the invention. The device 12 shown in Fig. 3 is only an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present invention.
As shown in FIG. 3, device 12 is in the form of a general purpose computing device. The components of device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. Device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 3, and commonly referred to as a "hard drive"). Although not shown in FIG. 3, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments described herein.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, to implement an image recognition method provided by an embodiment of the present invention, including:
acquiring an image containing a target to be recognized, adjusting the image, and labeling the adjusted image;
acquiring an image carrying at least one label;
adjusting the image, and inputting the adjusted image into a data model to obtain a trained data model;
and inputting an image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized.
Example four
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program (also referred to as computer-executable instructions) is stored. When executed by a processor, the computer program implements the image recognition method according to any of the embodiments described above, the method including:
acquiring an image containing a target to be recognized, adjusting the image, and labeling the adjusted image;
acquiring an image carrying at least one label;
adjusting the image, and inputting the adjusted image into a data model to obtain a trained data model;
and inputting an image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (15)
1. An image recognition method, comprising:
acquiring an image containing a target to be recognized, adjusting the image, and labeling the adjusted image;
acquiring the image carrying at least one label;
adjusting the image, and inputting the adjusted image into a data model to obtain a trained data model;
and inputting an image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized.
2. The method of claim 1, wherein adjusting the image comprises:
processing the image into a grayscale image, and performing edge detection on the grayscale image to obtain an edge detection result;
determining an angle to be adjusted according to the edge detection result;
and adjusting the image according to the angle to be adjusted.
3. The method of claim 2, wherein determining the angle to be adjusted according to the edge detection result comprises:
determining the coordinates of at least two points according to the edge detection result;
determining a straight-line detection result according to the coordinates of the at least two points;
and determining the angle to be adjusted according to the angle between the detected straight line and a coordinate axis.
4. The method of claim 1, wherein the data model is a YOLOv3 model.
5. The method of claim 1, wherein before inputting the image to be recognized into the trained data model, the method further comprises:
adjusting the image to be recognized, and inputting the adjusted image to be recognized into the trained data model.
6. The method of claim 1, wherein after inputting the image to be recognized into the trained data model and obtaining the denomination information and the number of the banknotes in the image to be recognized, the method further comprises:
calculating the total amount of the banknotes in the image to be recognized.
7. The method of claim 1, wherein there is at least one image.
8. The method of claim 1, wherein the label includes the denomination information of the banknote and a label box.
9. The method of claim 1, wherein the adjusting of the image is performed by a probabilistic Hough transform.
10. The method of claim 4, wherein the data model is a YOLOv3 model, and the training process specifically comprises:
selecting prior boxes of three different sizes in the image by using k-means++;
extracting features from the image by using convolutional layers;
and performing prediction on multi-scale feature maps by using a residual network.
11. An image recognition apparatus, comprising:
an image labeling module, configured to acquire an image containing a target to be recognized, adjust the image, and label the adjusted image;
an image acquisition module, configured to acquire the image carrying at least one label, wherein the label comprises the denomination information of the banknotes and a label box;
a trained data model determination module, configured to adjust the image and input the adjusted image into the data model to obtain a trained data model;
and a banknote denomination and number determination module, configured to input an image to be recognized into the trained data model to obtain the denomination information and the number of the banknotes in the image to be recognized.
12. The apparatus of claim 11, wherein the trained data model determination module is configured to:
process the image into a grayscale image, and perform edge detection on the grayscale image to obtain an edge detection result;
determine an angle to be adjusted according to the edge detection result;
and adjust the image according to the angle to be adjusted.
13. The apparatus of claim 11, wherein the trained data model determination module is configured to:
determine the coordinates of at least two points according to the edge detection result;
determine a straight-line detection result according to the coordinates of the at least two points;
and determine the angle to be adjusted according to the angle between the detected straight line and a coordinate axis.
14. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the image recognition method according to any one of claims 1 to 10 when executing the program.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the image recognition method of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010872596.XA CN112036465A (en) | 2020-08-26 | 2020-08-26 | Image recognition method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112036465A true CN112036465A (en) | 2020-12-04 |
Family
ID=73579990
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010872596.XA Pending CN112036465A (en) | 2020-08-26 | 2020-08-26 | Image recognition method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112036465A (en) |
- 2020-08-26 — CN application CN202010872596.XA published as CN112036465A (en), status: active, Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009066297A2 (en) * | 2007-11-21 | 2009-05-28 | Ioimage Ltd. | A method of verifying the contents of bundles of paper currency |
CN101799868A (en) * | 2010-02-01 | 2010-08-11 | 南京中钞长城金融设备有限公司 | Machine vision inspection method for paper money |
WO2017175769A1 (en) * | 2016-04-07 | 2017-10-12 | グローリー株式会社 | Paper-currency-processing apparatus |
CN108074227A (en) * | 2017-08-24 | 2018-05-25 | 深圳市中钞科信金融科技有限公司 | Detecting system and detection method before damaged RMB is destroyed |
CN107862685A (en) * | 2017-11-03 | 2018-03-30 | 王美金 | A kind of artificial intelligence study and the system and method for identification bluetooth cash box bank note number |
CN109635833A (en) * | 2018-10-30 | 2019-04-16 | 银河水滴科技(北京)有限公司 | A kind of image-recognizing method and system based on cloud platform and model intelligent recommendation |
Non-Patent Citations (2)
Title |
---|
Fu Yaping; Wang Congqing; Yang Yingjie: "Banknote image recognition based on Markov matrices", Information Technology, no. 03 *
Jiao Mengshu; Peng Jiahong: "Research on the design of a mobile-phone recognition system for RMB banknote denominations", Computer Knowledge and Technology, no. 17 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117372510A (en) * | 2023-12-05 | 2024-01-09 | 中交天津港湾工程研究院有限公司 | Map annotation identification method, terminal and medium based on computer vision model |
CN117372510B (en) * | 2023-12-05 | 2024-03-01 | 中交天津港湾工程研究院有限公司 | Map annotation identification method, terminal and medium based on computer vision model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107798299B (en) | Bill information identification method, electronic device and readable storage medium | |
CN108734089B (en) | Method, device, equipment and storage medium for identifying table content in picture file | |
CN112016638B (en) | Method, device and equipment for identifying steel bar cluster and storage medium | |
CN111178345A (en) | Bill analysis method, bill analysis device, computer equipment and medium | |
CN112396122B (en) | Method and system for multiple optimization of target detector based on vertex distance and cross-over ratio | |
CN112149663A (en) | RPA and AI combined image character extraction method and device and electronic equipment | |
US20210334573A1 (en) | Text line normalization systems and methods | |
CN113239227A (en) | Image data structuring method and device, electronic equipment and computer readable medium | |
CN111124863A (en) | Intelligent equipment performance testing method and device and intelligent equipment | |
CN114140649A (en) | Bill classification method, bill classification device, electronic apparatus, and storage medium | |
CN114463858B (en) | Signature behavior recognition method and system based on deep learning | |
CN113538291B (en) | Card image inclination correction method, device, computer equipment and storage medium | |
CN111008635A (en) | OCR-based multi-bill automatic identification method and system | |
CN113807416A (en) | Model training method and device, electronic equipment and storage medium | |
CN112036465A (en) | Image recognition method, device, equipment and storage medium | |
CN113506288A (en) | Lung nodule detection method and device based on transform attention mechanism | |
CN113011249A (en) | Bill auditing method, device, equipment and storage medium | |
CN112857746A (en) | Tracking method and device of lamplight detector, electronic equipment and storage medium | |
CN112036516A (en) | Image processing method and device, electronic equipment and storage medium | |
CN112232288A (en) | Satellite map target identification method and system based on deep learning | |
CN109934185B (en) | Data processing method and device, medium and computing equipment | |
CN111723799A (en) | Coordinate positioning method, device, equipment and storage medium | |
CN115424254A (en) | License plate recognition method, system, equipment and storage medium | |
CN112464892B (en) | Bill area identification method and device, electronic equipment and readable storage medium | |
CN111753625B (en) | Pedestrian detection method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
2022-09-26 | TA01 | Transfer of patent application right | Address after: 25 Financial Street, Xicheng District, Beijing 100033; Applicant after: CHINA CONSTRUCTION BANK Corp.; Address before: 25 Financial Street, Xicheng District, Beijing 100033; Applicants before: CHINA CONSTRUCTION BANK Corp., Jianxin Financial Science and Technology Co., Ltd. |