CN110490214B - Image recognition method and system, storage medium and processor


Info

Publication number
CN110490214B
CN110490214B
Authority
CN
China
Prior art keywords
image
features
image features
matrix
texture
Prior art date
Legal status
Active
Application number
CN201810457675.7A
Other languages
Chinese (zh)
Other versions
CN110490214A (en)
Inventor
张帆
刘永亮
黄继武
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201810457675.7A
Publication of CN110490214A
Application granted
Publication of CN110490214B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Abstract

The application discloses an image recognition method and system, a storage medium, and a processor. The method comprises the following steps: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features being extracted from a color image of the target object and the second image features from a grayscale image of the target object; and analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is trained through machine learning on multiple groups of data, each group comprising the image features of a sample image and the type of the sample image. The method and the device solve the technical problem that existing image recognition methods identify paper flip images with low accuracy and practicality.

Description

Image recognition method and system, storage medium and processor
Technical Field
The present invention relates to the field of image processing, and in particular, to an image recognition method and system, a storage medium, and a processor.
Background
A digital image is image data obtained by shooting with a digital imaging device. Because digital images can record real events in the real world, they are common in important fields such as judicial forensics and payment authentication, and their credibility is receiving more and more attention from society.
For example, when a digital image serves as judicial evidence, lawbreakers may tamper with the image, or flip (recapture) it for other purposes in order to mask the tampering traces or other processing traces; it is therefore necessary to identify whether the digital image is a secondary flip image.
Among existing image recognition methods, the following are relatively effective:
In the first method, a picture is represented by its specular and diffuse reflection components. Analysis shows that the proportion of the specular component in the whole image differs between natural images and flip images: the gradient histogram of the specular reflection ratio follows a Rayleigh-like distribution for flip images and a Gaussian-like distribution for natural images.
In the second method, unnatural images are distinguished from natural images according to the higher-order wavelet statistics (HoWS) of the digital image.
In the third method, natural images and flip images are classified using common physical features, such as context information of the background, surface gradient, the spatial distribution of specular reflection, color histograms before and after flipping, chromaticity, blurriness, contrast, and the like.
In the fourth method, since the process of producing a flip image resembles JPEG double compression, flip images can be distinguished from natural images by detecting secondary compression of the image. Accordingly, one variant extracts MBFDF features from the DCT coefficients of the R component of the image (MBFDF_R-DCT), and another extracts MBFDF features from the DCT coefficients of the Y component (MBFDF_Y-DCT); flip images can then be detected from Markov transition probabilities.
However, the above flip-image identification schemes have some obvious disadvantages. Each method is tested on images from a single image library, and images within one library easily share a certain correlation; when the same method detects images from other libraries, especially images outside the training library, its accuracy is lower and its identification cost is higher. As a result, the above identification methods cannot be effectively applied in real life.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the present application provide an image recognition method and system, a storage medium, and a processor, to at least solve the technical problem that identifying paper flip images with existing image recognition methods has low accuracy and practicality.
According to an aspect of the embodiments of the present application, there is provided an image recognition method, including: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features being extracted from a color image of the target object and the second image features from a grayscale image of the target object; and analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is trained through machine learning on multiple groups of data, each group comprising the image features of a sample image and the type of the sample image.
According to another aspect of the embodiments of the present application, there is provided an image recognition method, including: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features being extracted from a color image of the target object and the second image features from a grayscale image of the target object; and analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is trained through machine learning on multiple groups of data, each group comprising the image features of a sample image and the type of the sample image, the image features of the sample image being obtained by fusing a third image feature extracted from a color image of the sample object and a fourth image feature extracted from a grayscale image of the sample object.
According to another aspect of the embodiments of the present application, there is provided an image recognition method, including: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; and determining the type of the target image according to the image characteristics.
According to another aspect of the embodiments of the present application, there is provided a storage medium including a stored program, wherein, when the program runs, a device in which the storage medium is located is controlled to perform any one of the image recognition methods described above.
According to another aspect of the embodiments of the present application, there is provided a processor configured to run a program, wherein, when the program runs, any one of the image recognition methods described above is performed.
According to another aspect of the embodiments of the present application, there is provided an image recognition system including: a processor; and a memory, coupled to the processor, for providing the processor with instructions for processing the following steps: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features being extracted from a color image of the target object and the second image features from a grayscale image of the target object; and analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is trained through machine learning on multiple groups of data, each group comprising the image features of a sample image and the type of the sample image.
In the embodiments of the present application, the image features of the target image are analyzed according to a preset model. The image features of the target image are obtained by fusing first image features and second image features, where the first image features are extracted from a color image of the target object and the second image features from a gray-scale image of the target object. The image features are analyzed with the preset model to obtain the type of the target image, where the preset model is trained through machine learning on multiple groups of data, each group comprising the image features of a sample image and the type of the sample image. This achieves the aim of improving the accuracy and practicality of identifying paper flip images, realizes the technical effect of enhancing the credibility of digital images, and thereby solves the technical problem that existing image recognition methods identify paper flip images with low accuracy and practicality.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
Fig. 1 is a hardware block diagram of a computer terminal (or mobile device) for implementing an image recognition method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of image recognition according to an embodiment of the present application;
FIG. 3 is a flow chart of an alternative image recognition method according to an embodiment of the present application;
FIG. 4 is a flow chart of an alternative image recognition method according to an embodiment of the present application;
FIG. 5 is a flow chart of an alternative image recognition method according to an embodiment of the present application;
FIG. 6 is a flow chart of another method of image recognition according to an embodiment of the present application;
FIG. 7 is a flow chart of another method of image recognition according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of another image recognition apparatus according to an embodiment of the present application; and
FIG. 10 is a schematic structural diagram of yet another image recognition apparatus according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art from the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms or terminology appearing in the description of the embodiments of the present application are explained as follows:
original image: refers to image data obtained by directly shooting a real scene by adopting image acquisition equipment.
Paper flip image: the image is obtained by color printing an original image on paper (for example, A4 paper) and shooting the color-printed image, namely, an image obtained by double image flipping.
JPEG color image: JPEG is an image file format under the international static image compression standard, redundant image data is removed in a lossy compression mode, extremely high compression rate is obtained, meanwhile, very rich and vivid images can be displayed, namely, the best image quality is obtained by adopting the minimum disk space; the JPEG color image is a JPEG color image.
Residual image: a scatter plot with a certain residual error as the ordinate and other suitable amounts as the abscissa is referred to, and in this embodiment of the present application, the residual image may be an image obtained by convolving an image with a filter.
Local binary pattern (Local Binary Patterns, LBP): the method comprises the steps of comparing a central pixel point in a small area with other pixel points in the area, setting the value of a certain position in the area to be 1 if the pixel value of the certain position in the area is larger than the central pixel value, and setting the value of the certain position in the area to be 0 if the pixel value of the certain position in the area is smaller than the central pixel value.
ILBP: the improved local binary pattern LBP can be used for comparing all pixels including the central pixel in a small area with the average value of the pixels, wherein the LBP is a linear back projection algorithm.
Symbiotic matrix: the result of statistics of a certain gray level of a single pixel on an image is meant, and the gray level co-occurrence matrix is obtained by statistics of a situation that two pixels with a certain distance on the image respectively have a certain gray level, so that the co-occurrence matrix can be used for indicating joint probability density among the pixels and reflecting position distribution characteristics among the pixels.
Feature fusion: by combining two or more features together, one feature is ultimately obtained, e.g., one feature is [1234], the other feature is [567], and the feature obtained after the feature fusion process is [1234567].
Classifier (Ensemble): the integrated classifier can comprise a plurality of mutually independent sub-classifiers, wherein the final classification result can be obtained by processing the classification results of all the sub-classifiers by adopting a majority decision method.
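As an illustrative aside that is not part of the patent text, the LBP comparison and the feature-fusion concatenation defined above can be sketched in a few lines of Python; the function names lbp_code and fuse are hypothetical.

```python
import numpy as np

def lbp_code(block: np.ndarray) -> int:
    """LBP over a 3x3 region: compare the 8 surrounding pixels with the
    center pixel; positions whose value exceeds the center get bit 1."""
    center = block[1, 1]
    rows = np.array([0, 0, 0, 1, 1, 2, 2, 2])
    cols = np.array([0, 1, 2, 0, 2, 0, 1, 2])
    bits = (block[rows, cols] > center).astype(np.int64)
    return int(np.dot(bits, 1 << np.arange(8)))

def fuse(first, second) -> np.ndarray:
    """Feature fusion as defined above: concatenate two feature vectors."""
    return np.concatenate([first, second])

print(lbp_code(np.array([[5, 9, 3], [7, 6, 1], [8, 6, 2]])))  # -> 42
print(fuse([1, 2, 3, 4], [5, 6, 7]))                          # -> [1 2 3 4 5 6 7]
```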
Example 1
According to the embodiments of the present application, a method embodiment of an image recognition method is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the one herein.
The method embodiment provided in embodiment 1 of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware block diagram of a computer terminal (or mobile device) for implementing the image recognition method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n), which may include, but are not limited to, processing devices such as a microcontroller (MCU) or a programmable logic device (FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, it may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuits described above may be referred to generally herein as "data processing circuits". A data processing circuit may be embodied, in whole or in part, in software, hardware, firmware, or any other combination. Furthermore, the data processing circuit may be a single stand-alone processing module, or be incorporated, in whole or in part, into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the present application, the data processing circuit acts as a kind of processor control (for example, the selection of a variable-resistance terminal path connected to an interface).
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/data storage device corresponding to the image recognition method in the embodiment of the present application; the processor 102 executes the software programs and modules stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the above-mentioned image recognition method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. The specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
With the continuous development of network technology, electronic devices, digital image acquisition devices such as digital cameras, and various kinds of image editing software are gradually maturing. The internet, as a basic tool for people's life, work, and study, has become an essential part of daily life, so the authenticity of digital images on the internet becomes ever more important. Tampered images can have a very negative influence on society, especially in fields such as news, politics, scientific research, and justice.
Digital images are mostly captured by digital imaging devices, the most common being digital cameras or mobile phones. Image recapture is a special case in which the scene shot by the device is itself an image, and a new image is obtained after processing by the shooting device. In this case, the photographed subject image generally takes one of two forms: an image displayed on an electronic display screen, or a developed (printed) image.
It should be noted that the image recognition method provided in the present application may be, but is not limited to being, applied to the following scenario: secondary-reproduction identification of developed images, in the field of image forensics where it must be determined whether an image is an original image or a paper flip image.
In addition, the image recognition method provided by the embodiments of the application can be applied to identifying whether image evidence uploaded by a user is an original image, and can also be applied to a face-recognition attendance machine as a way of verifying, in software, that what is in front of the machine's camera is a real employee's face.
The purpose of digital image forensics is to judge, without any prior knowledge of an image, whether the image has been tampered with, so as to determine whether it is true and reliable. Image tampering not only makes us lose trust in images, but also destroys their ability to record real events in the real world; identifying the authenticity and integrity of images is therefore of great importance, and research into digital image forensics and recognition technology is becoming ever more urgent and meaningful.
For example, when a digital image exists as judicial evidence, if lawbreakers tamper with the image or flip it for other purposes to mask the tampering traces or other processing traces, it is necessary to identify the originality and authenticity of the image. The image recognition method provided by the application can assist identification personnel (such as judicial appraisers) in quickly and accurately determining whether an image is an original developed image or a paper flip image, helping them recognize non-original images such as paper flips and improving, to a certain extent, the originality and authenticity of judicial evidence.
In addition, in face image recognition, for example when a face-recognition attendance machine records an employee's attendance, recognizing whether what is in front of the camera is the employee's real face or merely a photo of the employee's face can effectively prevent clock-ins on another person's behalf.
In the above-described operating environment, the present application provides a method for recognizing an image as shown in fig. 2. Fig. 2 is a flowchart of an image recognition method according to an embodiment of the present application, and as shown in fig. 2, the image recognition method provided by the embodiment of the present application may be implemented by the following method steps:
step S202, obtaining image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object.
Alternatively, in the step S202, the target image may be a digital image, for example, image data captured by a digital imaging device such as a digital camera, a smart phone, or the like, and may be used to record a real event in the real world.
Optionally, the image features may be texture features, which describe the surface properties of an image and can represent the periodic or slowly varying characteristics of an object's surface, reflecting the properties of its surface structure.
It should be noted that, in the embodiment of the present application, the main objectives of extracting texture features are: the extracted texture features should have low dimension but good robustness and strong discriminative power, and the computation required to extract them should be as small as possible, so that the method can be applied in practice.
Texture information in the texture features differs from other image features such as gray level and color: it is characterized by a pixel together with the distribution of its spatial neighborhood. The texture analysis methods in relatively common use at present fall mainly into four categories: statistical, structural, signal-processing, and model-based texture features.
As an alternative embodiment, the gray-scale image of the target object may be, but is not limited to being, determined as follows: the color image of the target object is subjected to graying processing to obtain the gray-scale image. Since the gray-scale image is obtained by graying a color image, the values of its three channel components are identical, which reduces the amount of computation.
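A minimal sketch of this graying step follows; the patent does not specify the conversion coefficients, so the standard ITU-R BT.601 luminance weights below are an assumption, as is the function name to_gray.

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Gray an H x W x 3 color image (BT.601 weights are an assumption)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
```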
In an alternative embodiment, the color image is an original image obtained by directly photographing a target object (may be an object photographed by the photographing device, for example, a real scene or things) by the photographing device, and optionally, the color image may be a JPEG color image.
In another alternative embodiment, the paper flip image is obtained from the JPEG color image, for example by printing the color image and then photographing the printed paper. Since most color images acquired by photographing devices are in the JPEG format, extracting texture information from the gray-scale image requires converting the JPEG image into a gray-scale image in advance.
It should be noted that, in the embodiment of the present application, the types of the target image may include, but are not limited to: an original image and a paper flip image. The image recognition method provided by the application can identify the type of the target image, so as to determine whether the target image is an original image or a paper flip image.
In an alternative embodiment, the first image feature is a color-image texture feature extracted from the color image of the target object, and the application may extract the first image feature from the color image of the target object as follows: the filters in a preset filter bank are respectively and sequentially convolved with the R channel component, the G channel component, and the B channel component of the color image to obtain image residuals; texture matrices in one-to-one correspondence with the filters in the preset filter bank are obtained from the image residuals; and the first image feature is determined based on the texture matrices.
In another optional embodiment, the second image feature is a gray texture feature extracted from the gray-scale image of the target object. In this embodiment, the second image feature may be extracted from the gray-scale image as follows: the filters in a preset filter bank are respectively and sequentially convolved with the gray-scale image to obtain image residuals; texture matrices in one-to-one correspondence with the filters in the preset filter bank are obtained from the image residuals; and the second image feature is determined based on the texture matrices.
According to the above alternative embodiment, after extracting the first image feature from the color image of the target object and the second image feature from the grayscale image, the first image feature and the second image feature are fused.
In an alternative embodiment provided herein, the first image feature and the second image feature may be fused, but not limited to, in the following manner: the first image feature and the second image feature are concatenated to obtain the image features.
Step S204, analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of sets of data through machine learning training, and each set of data in the plurality of sets of data comprises: image features of the sample image and type of sample image.
Optionally, the preset model includes: a classification model obtained by integrating a plurality of classifiers. The classifiers may be of any type, including but not limited to Ensemble classifiers.
In an alternative embodiment, the image features of the sample image are image features obtained by fusing third image features and fourth image features, where the third image features are image features extracted from a color image of the sample object, and the fourth image features are image features extracted from a grayscale image of the sample object.
As an alternative embodiment, the third image feature is a color image texture feature extracted from a color image of the sample object, and the fourth image feature is a gray texture feature extracted from a gray image of the sample object.
In the above embodiment of the present application, color images and gray-scale images of some sample objects are selected in advance as training images. For example, the third image feature may be extracted in advance from the color image of a sample object and the fourth image feature from the gray-scale image of the sample object; the image features of the sample image are obtained by fusing the third image feature and the fourth image feature, and, together with the type of the corresponding sample image, are used to complete the training of the preset model.
Furthermore, in the embodiment of the present application, the image features of the target image may be used as the input of the preset model, and the preset model may be used to analyze the image features to obtain the type of the target image.
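As an illustration of this training-and-analysis step, the following hedged sketch uses bagged decision trees as a stand-in for the patent's Ensemble classifier (majority voting over independent sub-classifiers) and random arrays as stand-ins for real fused features; X_train, y_train, and x_target are hypothetical names.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))    # fused features of sample images (toy stand-in)
y_train = rng.integers(0, 2, size=200)  # 0 = original image, 1 = paper flip image

# Majority voting over independent sub-classifiers, as the "Ensemble"
# classifier in the terminology section describes; the sub-classifier
# choice here (decision trees) is an assumption.
model = BaggingClassifier(n_estimators=25, random_state=0)
model.fit(X_train, y_train)

x_target = rng.normal(size=(1, 64))     # fused features of the target image
print(model.predict(x_target))          # predicted type of the target image
```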
Based on the scheme defined by the above embodiment, the type of the target image is obtained by analyzing the image features of the target image with a preset model, where the image features are obtained by fusing first image features and second image features, the first image features being extracted from a color image of the target object and the second image features from a gray-scale image of the target object; the preset model is trained through machine learning on multiple groups of data, each group comprising the image features of a sample image and the type of the sample image.
The scheme provided by the embodiments of the present application achieves the aim of improving the accuracy and practicality of identifying paper flip images, thereby realizing the technical effect of enhancing the credibility of digital images and solving the technical problem that existing image recognition methods identify paper flip images with low accuracy and practicality.
The image recognition method provided in the present application is explained below through an alternative embodiment. In the application scenario of this embodiment, two image libraries may be used, but are not limited to being used, for storing images: image library 1 may store 10000 color images (original images) and 10000 gray-scale images (paper flip images), and image library 2 may likewise store 10000 color images and 10000 gray-scale images.
In image libraries 1 and 2, the sizes of all images may be, but are not limited to, 512×512, and the image data in the two libraries may come from various sources, such as outdoor scene images, indoor scene images, and images of people.
In the prior art, images within the same library tend to be strongly correlated, or highly similar to one another, so existing identification methods suffer from low accuracy and practicality when applied to real situations. In the embodiment of the image recognition method provided by the application, the selected image libraries 1 and 2 are uncorrelated, and the images within each library are well diversified, so identification accuracy and practicality can be effectively improved.
The classifier selected in the embodiment of the present application may be an Ensemble classifier; the image features used for classification with the Ensemble classifier and the corresponding recognition results are as follows:
Table 1 below shows the accuracy of identifying image features of target images within the same image library using the image recognition method of the embodiment of the present application. Optionally, 5000 color images and 5000 gray-scale images may be randomly selected from image library 1 to train the preset model, with the remaining 5000 color images and 5000 gray-scale images of image library 1 used for the identification test.
The recognition accuracy within the same image library in the embodiment of the present application, together with the recognition accuracy of prior-art methods on target images from the same image library, is shown in Table 1 below:
TABLE 1
(Table 1 is reproduced as an image in the original document; it lists the per-feature recognition accuracies within the same image library.)
From Table 1 it can be seen that, even under the same conditions (identifying image features of target images within the same image library), the recognition accuracy of the embodiment of the present application is significantly higher than that of existing identification methods.
Table 2 below shows the accuracy of identifying image features of target images across image libraries (for example, two or more libraries) with the image recognition method of the embodiment of the present application. Optionally, the preset model may be trained with the 10000 color images and 10000 gray-scale images of image library 1, and the identification test performed with the 10000 color images and 10000 gray-scale images of image library 2. The cross-library recognition accuracy is shown in Table 2:
TABLE 2

Features                                                              Accuracy (%)
Image features extracted by the local binary pattern                  83.05
Image features extracted by the rotated local binary pattern          86.85
Image features extracted by the improved local binary pattern TLBP    88.15
Image features extracted by the improved local binary pattern ILBP    89.85
As can be seen from the identification results in Table 2, in the cross-library experiments the recognition accuracy of the present method reaches 89.85%. Other existing algorithms do not explicitly report cross-library performance; however, in reproduction experiments in which existing algorithms were run on the two image libraries used here, their recognition accuracy was far lower than that of the present method. The practical applicability of the present application is therefore relatively strong.
In an alternative embodiment, fig. 3 is a flowchart of an alternative method for identifying an image according to an embodiment of the present application, and as shown in fig. 3, the first image feature may be further determined by:
step S302, the filter in the preset filter bank is respectively and sequentially convolved with the R channel component, the G channel component and the B channel component of the color image, so as to obtain an image residual error.
In an alternative embodiment, the color image may be input into the filters of a preset filter bank, where the plurality of filters are respectively convolved with the R channel component, the G channel component, and the B channel component of the JPEG color image, to obtain image residuals of the color image in one-to-one correspondence with the filters.
Optionally, the preset filter bank may contain any number of filters, for example, but not limited to, 11 filters, which correspondingly produce 11 image residuals; the filters may be high-pass filters.
It should be noted that, the number of filters in the preset filter bank is not particularly limited, and may be determined according to the type of the target image specifically identified in practice or the user requirement.
Step S304, obtaining texture matrixes corresponding to the filters in the preset filter group one by one according to the image residual errors.
In step S304, the image residual of each color image may be analyzed according to the ILBP longitudinal texture analysis method, so as to obtain texture matrices in one-to-one correspondence with the filters.
It should be noted that, in the embodiment of the present application, ILBP refers to the improved local binary pattern, which may be defined by the following formulas:

$$\mathrm{ILBP}_{P,R} = \sum_{p=0}^{P-1} s\left(g_p - \bar{g}\right) 2^{p} + s\left(g_c - \bar{g}\right) 2^{P}$$

$$\bar{g} = \frac{1}{P+1}\left(g_c + \sum_{p=0}^{P-1} g_p\right)$$

$$s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}$$

where $R$ is the selected neighborhood radius, $P$ is the number of selected neighborhood points excluding the center pixel, $p = 0, 1, 2, \ldots, P-1$, $g_c$ is the value of the center pixel, and $g_p$ is the gray value of the $p$-th neighborhood pixel.
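A minimal sketch of this formula for P = 8, R = 1 (a 3×3 neighborhood) follows; the ordering of the neighbors and the function name ilbp_3x3 are assumptions.

```python
import numpy as np

def ilbp_3x3(block: np.ndarray) -> int:
    """ILBP over a 3x3 neighborhood: every pixel, including the center,
    is compared with the mean of all nine pixels, per the formula above."""
    mean = block.mean()
    rows = np.array([0, 0, 0, 1, 1, 2, 2, 2])
    cols = np.array([0, 1, 2, 0, 2, 0, 1, 2])
    bits = (block[rows, cols] >= mean).astype(np.int64)
    code = int(np.dot(bits, 1 << np.arange(8)))   # neighbor bits 2^0 .. 2^7
    code += int(block[1, 1] >= mean) << 8         # center bit: s(g_c - mean) * 2^P
    return code
```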
Step S306, determining the first image feature based on the texture matrix.
In an alternative embodiment, the step S306 of determining the first image feature based on the texture matrix may be implemented by the following method steps:
step S3062, analyzing the texture matrix by using a symbiotic matrix to obtain the symbiotic matrix;
and step 3064, performing dimension reduction processing on the co-occurrence matrix to obtain the first image feature.
Specifically, the co-occurrence matrix can describe the distribution characteristics between pixels; therefore, the statistical joint probabilities of the co-occurrence matrix after truncation may be adopted as the statistical features of the gray-scale image.
It should be noted that, because the correlation between pixels weakens as the distance between them increases, in the embodiment of the present application fourth-order co-occurrence matrices of pixels may be constructed along the horizontal and vertical directions. The resulting fourth-order co-occurrence matrices are reduced in dimension and simplified, and all the gray-scale image statistical matrices thus obtained are arranged into a single row in row order to obtain the gray-scale image features.
In an alternative embodiment, once the texture matrices of the color image are obtained, each texture matrix may be truncated and statistically analyzed with a fourth-order co-occurrence matrix, yielding as many fourth-order co-occurrence matrices as there are texture matrices; after the fourth-order co-occurrence matrices are reduced in dimension and simplified according to symmetry, all the resulting matrices are arranged in a line to obtain the first image feature.
Here, truncation refers to judging whether each element value in the texture matrix belongs to a preset value interval [a, b], and processing the element values according to the result: element values belonging to the preset interval [a, b] are retained, element values less than a are modified to a, and element values greater than b are modified to b. A sketch of this step follows.
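The following hedged sketch covers the truncation and the fourth-order co-occurrence statistics; counting patterns over four consecutive pixels in the horizontal or vertical direction is one reading of the description above, and the function names are hypothetical.

```python
import numpy as np

def truncate(t: np.ndarray, a: int, b: int) -> np.ndarray:
    """Truncation as defined above: keep values in [a, b] and clamp the rest."""
    return np.clip(t, a, b)

def cooccurrence_4(t: np.ndarray, a: int, b: int, horizontal: bool = True) -> np.ndarray:
    """Normalized fourth-order co-occurrence matrix of a truncated texture matrix."""
    q = (truncate(t, a, b) - a).astype(int)   # shift values into 0 .. (b - a)
    n = b - a + 1
    c = np.zeros((n, n, n, n), dtype=np.int64)
    m = q if horizontal else q.T              # transpose to scan vertical runs
    for i in range(m.shape[0]):
        for j in range(m.shape[1] - 3):
            c[m[i, j], m[i, j + 1], m[i, j + 2], m[i, j + 3]] += 1
    total = c.sum()
    return c / total if total else c.astype(float)
```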
It should be noted that, the image recognition method provided by the application does not directly process the color image, but processes the image after high-pass filtering, so that the application has small dependency on the content of the image, the texture information of the image is more prominent, and the performance of the algorithm is improved.
It should be noted that, in sequentially convolving the filters with the R channel component, the G channel component, and the B channel component of the color image, the application does not consider each channel component in isolation, but takes into account the influence of the secondary flip across the channel components.
In addition, in the alternative embodiments provided in the application, the determination does not rely only on the correlation coefficients and energy ratios between channel components; the longitudinal ILBP analysis method can instead extract the information between channel components more comprehensively, based on image features of higher dimension, and in the embodiment of the application the filters are preprocessed to obtain a filter combination. Therefore, compared with the prior art, the image recognition method provided in the embodiments of the present application not only performs well within one image library, but also performs well across two or more image libraries (that is, in conditions equivalent to actual practice).
As an alternative embodiment, denoting the filters by $F$, in the case where the filter bank includes 11 filters, the 11 filters may be expressed as:

$$F_1 = D_1,\quad F_2 = D_2,\quad F_3 = D_5$$
$$F_4 = \min(D_2, D_4),\quad F_5 = \max(D_2, D_4)$$
$$F_6 = \min(D_2, D_3),\quad F_7 = \max(D_2, D_3)$$
$$F_8 = \min(D_4, D_5),\quad F_9 = \max(D_4, D_5)$$
$$F_{10} = \min(D_2, D_3, D_4, D_5),\quad F_{11} = \max(D_2, D_3, D_4, D_5)$$

where:

$$D_1 = a_{11}X(i-1,j-1) + a_{12}X(i-1,j) + a_{13}X(i-1,j+1) + a_{21}X(i,j-1) + a_{22}X(i,j) + a_{23}X(i,j+1) + a_{31}X(i+1,j-1) + a_{32}X(i+1,j) + a_{33}X(i+1,j+1)$$

$$D_2 = a_{11}X(i-1,j-1) + a_{12}X(i-1,j) + a_{13}X(i-1,j+1) + a_{21}X(i,j-1) + a_{22}X(i,j) + a_{23}X(i,j+1)$$

$$D_3 = a_{21}X(i,j-1) + a_{22}X(i,j) + a_{23}X(i,j+1) + a_{31}X(i+1,j-1) + a_{32}X(i+1,j) + a_{33}X(i+1,j+1)$$

$$D_4 = a_{11}X(i-1,j-1) + a_{12}X(i-1,j) + a_{21}X(i,j-1) + a_{22}X(i,j) + a_{31}X(i+1,j-1) + a_{32}X(i+1,j)$$

$$D_5 = a_{12}X(i-1,j) + a_{13}X(i-1,j+1) + a_{22}X(i,j) + a_{23}X(i,j+1) + a_{32}X(i+1,j) + a_{33}X(i+1,j+1)$$

In the embodiment of the present application, the values of $a_{11}$ through $a_{33}$ may be as follows:

$$a_{11} = -1,\ a_{12} = 2,\ a_{13} = -1,\ a_{21} = 2,\ a_{22} = -4,\ a_{23} = 2,\ a_{31} = -1,\ a_{32} = 2,\ a_{33} = -1$$

The pixel values of the gray-scale image $X$ are denoted $X = (x_{ij})$, $x_{ij} \in \{0, \ldots, 255\}$, where $x_{ij}$ is the gray value at position $(i, j)$.
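Under the coefficients just listed, the filter bank can be sketched as follows. Reading $F_4,\ldots,F_{11}$ as elementwise min/max over the filter residuals (rather than over the kernels) is an assumption, as is applying the bank to a single image plane at a time; the names residuals and D are hypothetical.

```python
import numpy as np
from scipy.ndimage import correlate

K = np.array([[-1.,  2., -1.],
              [ 2., -4.,  2.],
              [-1.,  2., -1.]])

# D_1..D_5 as 3x3 kernels: the full kernel and its row/column halves,
# matching the explicit sums given above (absent terms are zeroed out).
D = {
    "D1": K,
    "D2": K * np.array([[1.], [1.], [0.]]),   # rows i-1 and i
    "D3": K * np.array([[0.], [1.], [1.]]),   # rows i and i+1
    "D4": K * np.array([1., 1., 0.]),         # columns j-1 and j
    "D5": K * np.array([0., 1., 1.]),         # columns j and j+1
}

def residuals(x: np.ndarray) -> list:
    """Correlate one image plane with the bank; F4..F11 are elementwise
    min/max combinations of the D residuals (an assumed reading)."""
    d = {name: correlate(x.astype(float), k, mode="nearest") for name, k in D.items()}
    return [
        d["D1"], d["D2"], d["D5"],
        np.minimum(d["D2"], d["D4"]), np.maximum(d["D2"], d["D4"]),
        np.minimum(d["D2"], d["D3"]), np.maximum(d["D2"], d["D3"]),
        np.minimum(d["D4"], d["D5"]), np.maximum(d["D4"], d["D5"]),
        np.minimum.reduce([d["D2"], d["D3"], d["D4"], d["D5"]]),
        np.maximum.reduce([d["D2"], d["D3"], d["D4"], d["D5"]]),
    ]
```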
Based on the above optional embodiment, in the embodiment of the present application, a plurality of filters with better performance are selected, and are combined to obtain a preset filter bank, and then the filters in the preset filter bank are respectively and sequentially convolved with the R channel component, the G channel component, and the B channel component of the color image, so as to obtain an image residual, and the convolution effect of the filters can be significantly improved.
In the embodiment of the application, the filter combination may first be convolved with the gray-scale image to obtain the image residuals, which are then analyzed. The application thus also applies a high-pass filtering operation to the gray-scale image, which makes the texture feature information in the image more prominent and reduces the influence of the image content on the performance of the algorithm.
In an alternative embodiment, the first image feature may be determined, but is not limited to, by:
and acquiring correlation information among the R channel component, the G channel component and the B channel component of the color image, wherein the correlation information is used for indicating characteristic information among the R channel component, the G channel component and the B channel component.
It should be noted that the above-mentioned correlation information is not limited to the determination by means of the correlation coefficient and the energy ratio between the channel components, but also includes features with higher dimensions, for example, an ILBP longitudinal texture analysis method may be adopted (see the description about the ILBP longitudinal texture analysis in the embodiment of the present application, which is not repeated here) so as to more comprehensively extract the information between the channel components.
In another alternative embodiment, fig. 4 is a flowchart of an alternative method for identifying an image according to an embodiment of the present application, and as shown in fig. 4, the second image feature is determined by:
step S402, respectively convolving the filters in the preset filter group with the gray level image of the target object in sequence to obtain an image residual error.
In an optional embodiment, the gray-scale image may be input into the filters of the preset filter bank, where the plurality of filters are respectively convolved with the gray-scale image, so as to obtain image residuals of the gray-scale image in one-to-one correspondence with the filters.
Optionally, the preset filter bank may comprise a plurality of filters, for example, but not limited to, 11 filters, correspondingly producing 11 image residuals; these preset filters may be the same as the preset filters in step S302 (that is, the same filter bank), or may be different ones.
It should be noted that combining a plurality of filters into a filter bank can effectively improve the performance of the algorithm in the embodiment of the present application.
Step S404, obtaining texture matrixes corresponding to the filters in the preset filter group one by one according to the image residual errors.
In step S404, the image residual of each gray-scale image is analyzed according to the ILBP lateral texture analysis method to obtain a texture matrix corresponding to the filter one by one.
In addition, in the step S404, other optional manners than the above method may be adopted to obtain a texture matrix corresponding to the filters in the preset filter bank, which is not limited in this application.
Step S406, determining the second image feature based on the texture matrix.
In an optional embodiment, in step S406, once the texture matrices of the gray-scale image are obtained, each texture matrix may be truncated and statistically analyzed with a fourth-order co-occurrence matrix, yielding as many fourth-order co-occurrence matrices as there are texture matrices; after the fourth-order co-occurrence matrices are reduced in dimension and simplified according to symmetry, all the reduced matrices are arranged in a line to obtain the second image feature.
In an alternative embodiment, before step S3062, that is, before analyzing the texture matrix with the co-occurrence matrix, the method further includes the following steps:
Step S3060: judging whether the element values in the texture matrix belong to a preset value interval [a, b];
Step S3061: processing the element values of the texture matrix according to the judgment result: element values belonging to the preset interval [a, b] are retained, element values less than a are modified to a, and element values greater than b are modified to b.
As an alternative embodiment, in the texture matrix of the color image, a includes, but is not limited to, 4, and b includes, but is not limited to, 8. Experimental statistics show that when the element values in the texture matrix of the color image lie in the preset interval [4, 8], the statistical difference between color images and paper flip images is large and the variation is obvious; therefore, in the embodiment of the present application, the element values of the texture matrix of the color image may be, but are not limited to being, restricted to the preset interval [4, 8].
In the above optional embodiment, if an element value in the texture matrix is judged not to belong to the preset interval [4, 8], the element values within [4, 8] are retained, element values less than 4 are modified to 4, and element values greater than 8 are modified to 8.
As an alternative embodiment, in the texture matrix of the gray-scale image, a includes, but is not limited to, 15, and b includes, but is not limited to, 19. Experimental statistics show that when the element values in the texture matrix of the gray-scale image lie in the preset interval [15, 19], the statistical difference between color images and paper flip images is large and the variation is obvious; therefore, in the embodiment of the present application, the element values of the texture matrix of the gray-scale image may be, but are not limited to being, restricted to the preset interval [15, 19].
If an element value in the texture matrix is judged not to belong to the preset interval [15, 19], the element values within [15, 19] are retained, element values less than 15 are modified to 15, and element values greater than 19 are modified to 19.
It should be noted that, in the embodiment of the present application, the values of a and b in the preset interval [a, b] may be, but are not limited to, those listed in this embodiment; they may also be set according to the actual situation and user requirements, which this application does not specifically limit.
In an alternative embodiment, the image recognition method provided by the application may proceed as follows. The color images are first preprocessed with the filter bank to obtain their image residuals, and the residuals are analyzed with the ILBP longitudinal texture analysis method. The gray-scale image of the target object is preprocessed with the filter bank to obtain its image residual, and texture matrices in one-to-one correspondence with the filters of the preset filter bank are obtained with the ILBP lateral texture analysis method. The resulting texture matrices are truncated and statistically analyzed with fourth-order co-occurrence matrices, yielding as many fourth-order co-occurrence matrices as there are texture matrices; after dimension reduction and simplification according to symmetry, all the reduced matrices are arranged in a line to obtain the first image feature and the second image feature.
Addressing the poor feature-extraction performance for color images in the prior art, the application adopts the ILBP longitudinal texture analysis method when extracting the first image feature and the ILBP lateral texture analysis method when extracting the second image feature, so that the texture relations among the image color channels are effectively captured; preset value intervals that improve the performance of the algorithm are further determined.
In an alternative embodiment, the texture matrix includes: a three-dimensional matrix, wherein each dimension in the three-dimensional matrix corresponds to one of the R-channel component, the G-channel component, and the B-channel component.
Fig. 5 is a flowchart of an alternative image recognition method according to an embodiment of the present application. As shown in fig. 5, obtaining texture matrices in one-to-one correspondence with the filters according to the image residual includes:
in step S502, the points of the same position and different channel components in the three-dimensional matrix are used as a column, and a column of the same channel component is used as a row, so as to obtain a plurality of two-dimensional matrices of the same channel component.
In the above alternative embodiment, each dimension in the three-dimensional matrix corresponds to one of the R-channel component, G-channel component, and B-channel component.
As an alternative embodiment, the three-dimensional matrix may be analyzed according to the above-mentioned ILBP longitudinal texture analysis method, taking 8 surrounding points as an example. Since a color image has R, G, and B channel components, the matrix obtained after convolution is three-dimensional. In the embodiment of the present application, the points at the same position in different channel components are taken as a column and the columns of the same channel component as rows, yielding several two-dimensional matrices; for each matrix, points with a radius of 1 and 8 surrounding points are selected, ILBP is applied, and a new matrix is finally obtained.
Taking a 512×512×3 three-dimensional matrix as an example, in the embodiment of the present application the matrix may be longitudinally decomposed into 512 two-dimensional matrices of size 512×3. Each 512×3 matrix is analyzed with the ILBP longitudinal texture analysis method, selecting points with a radius of 1 and 8 surrounding points, which yields 512 matrices of size 510×1; the singleton dimension is then removed and the results are assembled into a new two-dimensional matrix, finally producing a 510×512 matrix.
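A sketch of this longitudinal decomposition follows, reusing the ilbp_3x3 helper sketched earlier; treating each 512×3 slice with a sliding 3×3 window is one reading of the radius-1, 8-point description (it reproduces the 510×512 output size), and the function name is hypothetical.

```python
import numpy as np

def ilbp_longitudinal(residual_rgb: np.ndarray) -> np.ndarray:
    """Longitudinal ILBP over an H x W x 3 residual: each image column gives
    an H x 3 slice (rows x channels); sliding a 3x3 window down that slice
    yields (H-2) codes per column, and stacking the columns yields the
    (H-2) x W texture matrix (510 x 512 for a 512 x 512 x 3 input)."""
    h, w, _ = residual_rgb.shape
    out = np.zeros((h - 2, w), dtype=np.uint16)
    for j in range(w):
        slc = residual_rgb[:, j, :]              # H x 3 slice of one column
        for i in range(1, h - 1):
            out[i - 1, j] = ilbp_3x3(slc[i - 1:i + 2, :])
    return out
```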
Step S504, for each two-dimensional matrix, selecting a neighborhood with radius m, and analyzing the neighborhood with the improved local binary pattern (ILBP) to obtain the texture matrix, where m is a constant.

In step S504, m may be 2, but is not limited thereto; in this embodiment of the present application, m may be chosen according to the specific situation and user requirements, and is not particularly limited here.

As another alternative embodiment, the two-dimensional matrix may be analyzed with the ILBP lateral texture analysis method. Taking m as 2 and the number of different channel components as 16 as an example: for a 512×512 two-dimensional matrix, points with radius 2 and 16 surrounding components are selected, and after analyzing the matrix with the ILBP lateral texture analysis method, a 508×508 texture matrix is obtained.
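A corresponding sketch of the lateral pass on a single two-dimensional matrix follows. The circular sampling with nearest-pixel rounding and the mean-thresholded (p+1)-bit code are again assumptions, since the embodiment fixes only the radius m and the number of surrounding points.

```python
import numpy as np

def lateral_ilbp(mat, m=2, p=16):
    # Assumed lateral ILBP: p points are sampled on a circle of radius m
    # (nearest-pixel rounding, no interpolation); each sample plus the
    # center pixel is compared against their mean, and the p+1 bits are
    # packed into one code. Radius m trims m rows/columns on each side,
    # so a 512x512 input with m=2 yields a 508x508 output.
    ang = 2 * np.pi * np.arange(p) / p
    dy = np.round(m * np.sin(ang)).astype(int)
    dx = np.round(m * np.cos(ang)).astype(int)
    h, w = mat.shape
    out = np.zeros((h - 2 * m, w - 2 * m), dtype=np.int64)
    for i in range(m, h - m):
        for j in range(m, w - m):
            vals = np.append(mat[i + dy, j + dx], mat[i, j])
            bits = (vals >= vals.mean()).astype(np.int64)
            out[i - m, j - m] = int((bits << np.arange(p + 1)).sum())
    return out

texture = lateral_ilbp(np.random.randn(512, 512), m=2)   # 508x508
```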
Example 2
According to an embodiment of the present application, there is also provided an embodiment of an image recognition method as shown in fig. 6, executed in an operation environment the same as or similar to that described in embodiment 1 above. It should be noted that the method embodiment provided in embodiment 2 of the present application may be executed in the computer terminal 10 (or the mobile device 10) shown in fig. 1 or a similar computing device.
Fig. 6 is a flowchart of another image recognition method according to an embodiment of the present application, and as shown in fig. 6, the image recognition method provided by the embodiment of the present application may be implemented by the following method steps:
Step S602, obtaining image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object;
alternatively, in the step S602, the target image may be a digital image, for example, image data captured by a digital imaging device such as a digital camera, a smart phone, or the like, and may be used to record a real event in the real world.
Optionally, the image features may be texture features. Texture features describe the surface properties of an image: they capture periodic or slowly varying patterns on the surface of an object and thus characterize its surface structure.

It should be noted that the main goals of texture feature extraction are: extracted texture features of low dimensionality but good robustness and strong discriminative power, with as little computation as possible during extraction, so that the method can be applied in practice.

Texture information differs from other image features such as gray level and color in that it is characterized by a pixel together with the distribution of its surrounding spatial neighborhood. The texture analysis methods in common use at present fall mainly into four categories: statistical, structural, signal-processing, and model-based texture features.
As an alternative embodiment, the gray image of the target object may be, but is not limited to being, determined by: performing graying processing on the color image of the target object to obtain the gray image. Since the gray image is obtained by graying a color image, the three channel components of the gray image have identical values, which reduces the amount of computation.
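A graying step might look as follows; the patent does not fix the conversion weights, so the common ITU-R BT.601 luminance weighting is assumed here.

```python
import numpy as np

def to_gray(rgb):
    # Weighted sum of the R, G, B channel components; the weights are an
    # assumption (BT.601), since the patent only requires "graying".
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
```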
In an alternative embodiment, the color image is an original image obtained by the photographing device directly photographing the target object (i.e. whatever the photographing device captures, for example a real scene or real things); optionally, the color image may be a JPEG color image.

In another alternative embodiment, the gray image is obtained by converting the JPEG color image, and a paper flip image is obtained, for example, by printing the color image and then photographing the printed paper. Since most color images acquired by photographing devices are in JPEG format, extracting the texture information of the gray image requires converting the JPEG image into a gray image in advance.

It should be noted that, in the embodiment of the present application, the types of the target image may include, but are not limited to: an original image and a paper flip image. The image recognition method provided by the application can identify the type of the target image, that is, determine whether the target image is an original image or a paper flip image.
In an alternative embodiment, the first image feature is a color-image texture feature extracted from the color image of the target object, and the application may extract the first image feature from the color image of the target object by: convolving the filters in a preset filter bank, respectively and in turn, with the R channel component, the G channel component and the B channel component of the color image to obtain image residuals; obtaining, according to the image residuals, texture matrices in one-to-one correspondence with the filters in the preset filter bank; and determining the first image feature based on the texture matrices.
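The residual step can be sketched as below. The two high-pass kernels stand in for the patent's unspecified filter bank and are assumptions for illustration; each filter is convolved with each of the R, G and B channel components in turn.

```python
import numpy as np
from scipy.signal import convolve2d

# Illustrative stand-ins for the preset filter bank (not from the patent).
FILTERS = [
    np.array([[0, 0, 0], [1, -2, 1], [0, 0, 0]]),   # horizontal 2nd-order
    np.array([[0, 1, 0], [0, -2, 0], [0, 1, 0]]),   # vertical 2nd-order
]

def residuals(rgb):
    # Convolve each filter with each of the R, G, B channels in turn,
    # producing one H x W x 3 residual volume per filter.
    out = []
    for f in FILTERS:
        res = np.stack(
            [convolve2d(rgb[..., c], f, mode="same") for c in range(3)],
            axis=-1,
        )
        out.append(res)
    return out
```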
In another optional embodiment, the second image feature is a gray texture feature extracted from a gray image of the target object, where in this embodiment, the second image feature may be extracted from the gray image of the target object by: respectively and sequentially convolving filters in a preset filter bank with the gray level image of the target object to obtain an image residual error; obtaining texture matrixes corresponding to the filters in the preset filter group one by one according to the image residual errors; the second image feature is determined based on the texture matrix.
According to the above alternative embodiment, after extracting the first image feature from the color image of the above-mentioned target object and the second image feature from the gray-scale image of the above-mentioned target object, the first image feature and the second image feature are fused.
In an alternative embodiment provided herein, the first image feature and the second image feature may be fused, but not limited to, by: and combining the first image feature and the second image feature to obtain the image feature.
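Since the fusion is a plain combination of the two feature vectors, a sketch reduces to one concatenation (function and argument names are illustrative):

```python
import numpy as np

# Fuse by joining the two feature vectors end to end, as described above.
def fuse(first_feature, second_feature):
    return np.concatenate([first_feature, second_feature])
```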
Step S604, analyzing the image features by using a preset model to obtain the type of the target image, where the preset model is obtained by using multiple sets of data through machine learning training, and each set of data in the multiple sets of data includes: image features of the sample image and type of sample image; the image features of the sample image are obtained by fusing a third image feature extracted from a color image of the sample object and a fourth image feature extracted from a grayscale image of the sample object.
Optionally, the preset model includes: a classification model obtained by integrating a plurality of classifiers. The classifiers may be of any type, including but not limited to the Ensemble classifier.
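As a hedged sketch of such an integrated classification model, the snippet below bags linear discriminants over random feature subspaces, in the spirit of the Ensemble classifier common in image forensics; the base learner, ensemble size and subspace ratio are assumptions, since the patent does not fix them.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Integrate many weak classifiers into one classification model.
model = BaggingClassifier(
    LinearDiscriminantAnalysis(),
    n_estimators=51,          # odd count so majority voting cannot tie
    max_features=0.1,         # each base learner sees a feature subspace
    bootstrap_features=True,
)

# X: one fused feature vector per sample image; y: 0 = original image,
# 1 = paper flip image (data and labels here are illustrative only).
X = np.random.randn(200, 1024)
y = np.random.randint(0, 2, 200)
model.fit(X, y)
pred = model.predict(np.random.randn(1, 1024))   # type of a target image
```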
In an alternative embodiment, the image features of the sample image are image features obtained by fusing third image features and fourth image features, where the third image features are image features extracted from a color image of the sample object, and the fourth image features are image features extracted from a grayscale image of the sample object.
Wherein the third image feature is a color image texture feature extracted from a color image of the sample object, and the fourth image feature is a grayscale texture feature extracted from a grayscale image of the sample object.
In the foregoing embodiment of the present application, the color images and gray images of a portion of the sample objects are selected in advance as training images. For example, the third image feature may be extracted in advance from the color image of the sample object, and the fourth image feature from the gray image of the sample object; the image features of the sample image, obtained by fusing the third image feature and the fourth image feature, together with the type of the corresponding sample image, are then used to complete the training of the preset model.
Furthermore, the image features of the target image may be used as input of the preset model, and the preset model may be used to analyze the image features to obtain the type of the target image.
Based on the scheme defined by the above embodiment, the image features of the target image are first acquired, where the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; the image features are then analyzed by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and the type of the sample image.

Through the scheme provided by the embodiment of the application, the accuracy and practicality of identifying paper flip images are improved, thereby achieving the technical effect of enhancing the credibility of digital images and solving the technical problem that existing image recognition methods identify paper flip images with low accuracy and practicality.
It should be noted that, for the preferred implementation of this embodiment, reference may be made to the related description in embodiment 1, which is not repeated here.
Example 3
According to an embodiment of the present application, there is also provided an embodiment of an image recognition method as shown in fig. 7, executed in an operation environment the same as or similar to that described in embodiment 1 above. It should be noted that the method embodiment provided in embodiment 3 of the present application may be executed in the computer terminal 10 (or the mobile device 10) shown in fig. 1 or a similar computing device.
Fig. 7 is a flowchart of another image recognition method according to an embodiment of the present application, and as shown in fig. 7, the image recognition method provided by the embodiment of the present application may be implemented by the following method steps:
step S702, obtaining image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object;
alternatively, in the step S702, the target image may be a digital image, for example, image data captured by a digital imaging device such as a digital camera, a smart phone, or the like, and may be used to record a real event in the real world.
Optionally, the image features may be texture features. Texture features describe the surface properties of an image: they capture periodic or slowly varying patterns on the surface of an object and thus characterize its surface structure.

It should be noted that the main goals of texture feature extraction are: extracted texture features of low dimensionality but good robustness and strong discriminative power, with as little computation as possible during extraction, so that the method can be applied in practice.

Texture information differs from other image features such as gray level and color in that it is characterized by a pixel together with the distribution of its surrounding spatial neighborhood. The texture analysis methods in common use at present fall mainly into four categories: statistical, structural, signal-processing, and model-based texture features.
As an alternative embodiment, the gray image of the target object may be, but is not limited to being, determined by: performing graying processing on the color image of the target object to obtain the gray image. Since the gray image is obtained by graying a color image, the three channel components of the gray image have identical values, which reduces the amount of computation.
In an alternative embodiment, the color image is an original image obtained by the photographing device directly photographing the target object (i.e. whatever the photographing device captures, for example a real scene or real things); optionally, the color image may be a JPEG color image.

In another alternative embodiment, the gray image is obtained by converting the JPEG color image, and a paper flip image is obtained, for example, by printing the color image and then photographing the printed paper. Since most color images acquired by photographing devices are in JPEG format, extracting the texture information of the gray image requires converting the JPEG image into a gray image in advance.

It should be noted that, in the embodiment of the present application, the types of the target image may include, but are not limited to: an original image and a paper flip image. The image recognition method provided by the application can identify the type of the target image, that is, determine whether the target image is an original image or a paper flip image.
In an alternative embodiment, the first image feature is a color-image texture feature extracted from the color image of the target object, and the application may extract the first image feature from the color image of the target object by: convolving the filters in a preset filter bank, respectively and in turn, with the R channel component, the G channel component and the B channel component of the color image to obtain image residuals; obtaining, according to the image residuals, texture matrices in one-to-one correspondence with the filters in the preset filter bank; and determining the first image feature based on the texture matrices.
In another optional embodiment, the second image feature is a gray texture feature extracted from a gray image of the target object, where in this embodiment, the second image feature may be extracted from the gray image by: respectively and sequentially convolving filters in a preset filter bank with the gray level images to obtain image residual errors; obtaining texture matrixes corresponding to the filters in the preset filter group one by one according to the image residual errors; the second image feature is determined based on the texture matrix.
According to the above alternative embodiment, after extracting the first image feature from the color image of the above-mentioned target object and the second image feature from the gray-scale image of the above-mentioned target object, the first image feature and the second image feature are fused.
In an alternative embodiment provided herein, the first image feature and the second image feature may be fused, but not limited to, by: and combining the first image feature and the second image feature to obtain the image feature.
Step S704, determining the type of the target image according to the image characteristics.
In the above step S704, as an alternative embodiment, the present application may determine the type of the above target image, but not limited to, by: analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and type of sample image.
Optionally, the preset model includes: a classification model obtained by integrating a plurality of classifiers. The classifiers may be of any type, including but not limited to the Ensemble classifier.
In an alternative embodiment, the image features of the sample image are image features obtained by fusing third image features and fourth image features, where the third image features are image features extracted from a color image of the sample object, and the fourth image features are image features extracted from a grayscale image of the sample object.
Wherein the third image feature is a color image texture feature extracted from a color image of the sample object, and the fourth image feature is a grayscale texture feature extracted from a grayscale image of the sample object.
In the above embodiment of the present application, the color images and gray images of a portion of the sample objects are selected in advance as training images. For example, the third image feature may be extracted in advance from the color image of the sample object, and the fourth image feature from the gray image of the sample object; the image features of the sample image, obtained by fusing the third image feature and the fourth image feature, together with the type of the corresponding sample image, are then used to complete the training of the preset model.
Furthermore, the image features of the target image may be used as input of the preset model, and the preset model may be used to analyze the image features to obtain the type of the target image.
Based on the scheme defined by the above embodiment, the image features of the target image are first acquired, where the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; the image features are then analyzed by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and the type of the sample image.

Through the scheme provided by the embodiment of the application, the accuracy and practicality of identifying paper flip images are improved, thereby achieving the technical effect of enhancing the credibility of digital images and solving the technical problem that existing image recognition methods identify paper flip images with low accuracy and practicality.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods of the embodiments of the present application.
It should be noted that, for the preferred implementation of this embodiment, reference may be made to the related descriptions in embodiments 1 and 2, which are not repeated here.
Example 4
According to an embodiment of the present application, there is further provided an embodiment of an image recognition apparatus for implementing the above image recognition method, and fig. 8 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present application, as shown in fig. 8, including: a first acquisition module 80, a first determination module 82, wherein:
a first obtaining module 80, configured to obtain an image feature of a target image, where the image feature is an image feature obtained by fusing a first image feature and a second image feature, the first image feature is an image feature extracted from a color image of the target object, and the second image feature is an image feature extracted from a gray-scale image of the target object; a first determining module 82, configured to analyze the image features by using a preset model to obtain a type of the target image, where the preset model is obtained by using multiple sets of data through machine learning training, and each set of data in the multiple sets of data includes: image features of the sample image and type of sample image.
Here, the first obtaining module 80 and the first determining module 82 correspond to steps S202 to S204 in embodiment 1; the examples and application scenarios implemented by the two modules are the same as those of the corresponding steps, but are not limited to what is disclosed in embodiment 1. It should be noted that the above modules may run in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should still be noted that, for the preferred implementation of this embodiment, reference may be made to the related descriptions in embodiments 1, 2, and 3, which are not repeated here.
Example 5
According to an embodiment of the present application, there is further provided another embodiment of an image recognition apparatus for implementing the above image recognition method, and fig. 9 is a schematic structural diagram of another image recognition apparatus according to an embodiment of the present application, as shown in fig. 9, including: a second acquisition module 90, a second determination module 92, wherein:
a second obtaining module 90, configured to obtain image features of a target image, where the image features are obtained by fusing a first image feature and a second image feature, the first image feature is extracted from a color image of the target object, and the second image feature is extracted from a gray image of the target object; a second determining module 92, configured to analyze the image features by using a preset model to obtain the type of the target image, where the preset model is obtained by using multiple sets of data through machine learning training, and each set of data in the multiple sets of data includes: image features of the sample image and the type of the sample image; the image features of the sample image are obtained by fusing a third image feature extracted from a color image of the sample object and a fourth image feature extracted from a gray image of the sample object.
Here, the second obtaining module 90 and the second determining module 92 correspond to steps S602 to S604 in embodiment 2; the examples and application scenarios implemented by the two modules are the same as those of the corresponding steps, but are not limited to what is disclosed in embodiment 2. It should be noted that the above modules may run in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should still be noted that, for the preferred implementation of this embodiment, reference may be made to the related descriptions in embodiments 1, 2, and 3, which are not repeated here.
Example 6
There is further provided, according to an embodiment of the present application, an embodiment of an image recognition apparatus for implementing the above image recognition method, and fig. 10 is a schematic structural diagram of the image recognition apparatus according to the embodiment of the present application, as shown in fig. 10, including: a third acquisition module 101, a third determination module 103, wherein:
a third obtaining module 101, configured to obtain image features of a target image, where the image features are obtained by fusing a first image feature and a second image feature, the first image feature is extracted from a color image of the target object, and the second image feature is extracted from a gray image of the target object; and a third determining module 103, configured to determine the type of the target image according to the image features.
Here, the third obtaining module 101 and the third determining module 103 correspond to steps S702 to S704 in embodiment 3; the examples and application scenarios implemented by the two modules are the same as those of the corresponding steps, but are not limited to what is disclosed in embodiment 3. It should be noted that the above modules may run in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should still be noted that, for the preferred implementation of this embodiment, reference may be made to the related descriptions in embodiments 1, 2, and 3, which are not repeated here.
Example 7
According to an embodiment of the present application, there is also provided an embodiment of a computer terminal, which may be any one of a group of computer terminals. Alternatively, in the present embodiment, the above-described computer terminal may be replaced with a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the above-mentioned computer terminal may be located in at least one network device among a plurality of network devices of the computer network, for example, may be the computer terminal 10 as shown in fig. 1.
It should be noted herein that in some alternative embodiments, the computer terminal 10 shown in fig. 1 described above may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should be noted that fig. 1 is only one example of a specific example, and is intended to illustrate the types of components that may be present in the computer terminal 10 described above.
In this embodiment, the computer terminal may execute the program code of the following steps in the image recognition method of the application program: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and type of sample image.
Alternatively, as also shown in FIG. 1, the computer terminal 10 may include: one or more processors, memory, display devices, and the like.
Optionally, the above processor may further execute program code for: respectively and sequentially convolving a filter in a preset filter bank with an R channel component, a G channel component and a B channel component of the color image to obtain an image residual error; obtaining texture matrixes corresponding to the filters in the preset filter group one by one according to the image residual errors; the first image feature is determined based on the texture matrix.
Optionally, the above processor may further execute program code for: performing co-occurrence analysis on the texture matrix to obtain a co-occurrence matrix; and performing dimension-reduction processing on the co-occurrence matrix to obtain the first image feature.
Optionally, the above processor may further execute program code for: judging whether the element values in the texture matrix belong to a preset value interval [a, b]; and processing the element values of the texture matrix according to the judgment result: retaining element values that belong to the preset value interval [a, b], modifying element values less than a to a, and modifying element values greater than b to b.
Optionally, the above processor may further execute program code for: taking the points of the same position and different channel components in the three-dimensional matrix as a column, and taking a column of the same channel component as a row to obtain a plurality of two-dimensional matrices of the same channel component; for each two-dimensional matrix, selecting a neighborhood with a radius of m, and analyzing and processing the neighborhood by using an improved local binary pattern ILBP to obtain the texture matrix, wherein m is a constant.
Optionally, the above processor may further execute program code for: respectively and sequentially convolving filters in a preset filter bank with the gray level image of the target object to obtain an image residual error; obtaining texture matrixes corresponding to the filters in the preset filter group one by one according to the image residual errors; the second image feature is determined based on the texture matrix.
Optionally, the above processor may further execute program code for: and acquiring correlation information among the R channel component, the G channel component and the B channel component of the color image, wherein the correlation information is used for indicating characteristic information among the R channel component, the G channel component and the B channel component.
Optionally, the above processor may further execute program code for: and carrying out graying treatment on the color image of the target object to obtain the gray image.
Optionally, the above processor may further execute program code for: and combining the first image feature and the second image feature to obtain the image feature.
In this embodiment, the computer terminal may execute the program code of the following steps in the image recognition method of the application program: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and type of sample image; the image features of the sample image are obtained by fusing a third image feature extracted from a color image of the sample object and a fourth image feature extracted from a grayscale image of the sample object.
In this embodiment, the computer terminal may execute the program code of the following steps in the image recognition method of the application program: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; and determining the type of the target image according to the image characteristics.
By adopting the embodiment of the application, a scheme for recognizing images is provided: obtaining image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; and analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data comprises the image features of a sample image and the type of the sample image. This achieves the aim of improving the accuracy and practicality of identifying paper flip images, and further solves the technical problem that existing image recognition methods identify paper flip images with low accuracy and practicality.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is only illustrative, and the computer terminal may be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. Fig. 1 does not limit the structure of the above electronic device. For example, the computer terminal 10 may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 1, or have a different configuration from that shown in fig. 1.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing the relevant hardware of a terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
Example 8
According to an embodiment of the present application, there is also provided an embodiment of a storage medium. Alternatively, in the present embodiment, the above-described storage medium may be used to hold program code executed by the image recognition method provided in the above-described embodiments 1 to 3.
Alternatively, in this embodiment, the storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and type of sample image.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: respectively and sequentially convolving a filter in a preset filter bank with an R channel component, a G channel component and a B channel component of the color image to obtain an image residual error; obtaining texture matrixes corresponding to the filters in the preset filter group one by one according to the image residual errors; the first image feature is determined based on the texture matrix.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the following steps: performing co-occurrence analysis on the texture matrix to obtain a co-occurrence matrix; and performing dimension-reduction processing on the co-occurrence matrix to obtain the first image feature.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the following steps: judging whether the element values in the texture matrix belong to a preset value interval [a, b]; and processing the element values of the texture matrix according to the judgment result: retaining element values that belong to the preset value interval [a, b], modifying element values less than a to a, and modifying element values greater than b to b.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: taking the points of the same position and different channel components in the three-dimensional matrix as a column, and taking a column of the same channel component as a row to obtain a plurality of two-dimensional matrices of the same channel component; for each two-dimensional matrix, selecting a neighborhood with a radius of m, and analyzing and processing the neighborhood by using an improved local binary pattern ILBP to obtain the texture matrix, wherein m is a constant.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: respectively and sequentially convolving filters in a preset filter bank with the gray level image of the target object to obtain an image residual error; obtaining texture matrixes corresponding to the filters in the preset filter group one by one according to the image residual errors; the second image feature is determined based on the texture matrix.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and type of sample image; the image features of the sample image are obtained by fusing a third image feature extracted from a color image of the sample object and a fourth image feature extracted from a grayscale image of the sample object.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; and determining the type of the target image according to the image characteristics.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: and acquiring correlation information among the R channel component, the G channel component and the B channel component of the color image, wherein the correlation information is used for indicating characteristic information among the R channel component, the G channel component and the B channel component.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: and carrying out graying treatment on the color image of the target object to obtain the gray image.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: and combining the first image feature and the second image feature to obtain the image feature.
Example 9
According to an embodiment of the present application, there is provided an embodiment of an image recognition system including: a processor; and a memory, coupled to the processor, for providing instructions to the processor to process the steps of: acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object; analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and type of sample image.
Wherein the processor may be, but is not limited to, the processor 102 in the computer terminal 10 as shown in FIG. 1; the memory may be, but is not limited to, the memory 104 in the computer terminal 10 as shown in fig. 1.
It should still be noted that, for the preferred implementation of this embodiment, reference may be made to the related descriptions in embodiments 7 and 8, which are not repeated here.
The foregoing embodiment numbers of the present application are merely for description and do not represent the superiority or inferiority of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application and are intended to be comprehended within the scope of the present application.

Claims (15)

1. A method of recognizing an image, comprising:
acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object;
the first image features are obtained by performing dimension reduction on a co-occurrence matrix, and the co-occurrence matrix is obtained by performing co-occurrence analysis on a texture matrix;
analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and type of sample image.
2. The method of claim 1, wherein the texture matrix is determined by:
respectively and sequentially convolving a filter in a preset filter bank with an R channel component, a G channel component and a B channel component of the color image to obtain an image residual error;
and obtaining texture matrixes corresponding to the filters in the preset filter group one by one according to the image residual errors.
3. The method of claim 1, wherein co-occurrence analysis is performed on the texture matrix, and wherein, prior to obtaining the co-occurrence matrix, the method further comprises:

judging whether the element values in the texture matrix belong to a preset value interval [a, b];

and processing the element values of the texture matrix according to the judgment result: retaining element values that belong to the preset value interval [a, b]; and modifying element values less than a to a and element values greater than b to b.
4. The method of claim 2, wherein the texture matrix comprises: a three-dimensional matrix, wherein each dimension in the three-dimensional matrix corresponds to one of the R-channel component, G-channel component, and B-channel component; and obtaining, according to the image residual, the texture matrices in one-to-one correspondence with the filters in the preset filter bank comprises:
Taking the points of the same position and different channel components in the three-dimensional matrix as a column, and taking a column of the same channel component as a row to obtain a plurality of two-dimensional matrices of the same channel component;
for each two-dimensional matrix, selecting a neighborhood with a radius of m, and analyzing and processing the neighborhood by using an improved local binary pattern ILBP to obtain the texture matrix, wherein m is a constant.
5. The method of claim 1, wherein the second image feature is determined by:
respectively and sequentially convolving filters in a preset filter bank with the gray level image of the target object to obtain an image residual error;
obtaining texture matrixes corresponding to the filters in the preset filter bank one by one according to the image residual errors;
the second image feature is determined based on the texture matrix.
6. The method of claim 1, wherein the first image feature is determined by:
and acquiring correlation information among an R channel component, a G channel component and a B channel component of the color image, wherein the correlation information is used for indicating characteristic information among the R channel component, the G channel component and the B channel component.
7. The method of claim 1, wherein the gray scale image of the target object is determined by:
and carrying out graying treatment on the color image of the target object to obtain the gray image.
8. The method of claim 1, wherein the first image feature and the second image feature are fused by:
and combining the first image feature and the second image feature to obtain the image feature.
9. The method according to any one of claims 1 to 8, wherein the preset model comprises: a classification model obtained by integrating a plurality of classifiers.
10. The method according to any one of claims 1 to 8, wherein the image features of the sample image are image features obtained by fusing third image features extracted from a color image of a sample object and fourth image features extracted from a grayscale image of the sample object.
11. A method of recognizing an image, comprising:
acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object;
The first image features are obtained by performing dimension reduction on a co-occurrence matrix, and the co-occurrence matrix is obtained by performing co-occurrence analysis on a texture matrix;
analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and type of sample image; the image features of the sample image are obtained by fusing third image features and fourth image features, the third image features are extracted from the color image of the sample object, and the fourth image features are extracted from the gray image of the sample object.
12. A method of recognizing an image, comprising:
acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object;
The first image features are obtained by performing dimension reduction on a co-occurrence matrix, and the co-occurrence matrix is obtained by performing co-occurrence analysis on a texture matrix;
and determining the type of the target image according to the image characteristics.
13. A storage medium comprising a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the method of recognizing an image according to any one of claims 1 to 12.
14. A processor for executing a program, wherein the program when executed performs the method of recognizing an image according to any one of claims 1 to 12.
15. An image recognition system, comprising:
a processor; and
a memory, coupled to the processor, for providing instructions to the processor to process the following processing steps:
acquiring image features of a target image, wherein the image features are obtained by fusing first image features and second image features, the first image features are extracted from a color image of the target object, and the second image features are extracted from a gray image of the target object;
The first image features are obtained by performing dimension reduction on a co-occurrence matrix, and the co-occurrence matrix is obtained by performing co-occurrence analysis on a texture matrix;
analyzing the image features by using a preset model to obtain the type of the target image, wherein the preset model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: image features of the sample image and type of sample image.
CN201810457675.7A 2018-05-14 2018-05-14 Image recognition method and system, storage medium and processor Active CN110490214B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810457675.7A CN110490214B (en) 2018-05-14 2018-05-14 Image recognition method and system, storage medium and processor

Publications (2)

Publication Number Publication Date
CN110490214A CN110490214A (en) 2019-11-22
CN110490214B true CN110490214B (en) 2023-05-02

Family

ID=68544887

Country Status (1)

Country Link
CN (1) CN110490214B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145233A (en) * 2019-12-28 2020-05-12 镇江新一代信息技术产业研究院有限公司 Image resolution management system
CN111160376B (en) * 2019-12-31 2023-11-24 联想(北京)有限公司 Information processing method, device, electronic equipment and storage medium
CN111476729B (en) * 2020-03-31 2023-06-09 北京三快在线科技有限公司 Target identification method and device
CN111724376B (en) * 2020-06-22 2024-02-13 陕西科技大学 Paper disease detection method based on texture feature analysis
CN113068037B (en) * 2021-03-17 2022-12-06 上海哔哩哔哩科技有限公司 Method, apparatus, device, and medium for sample adaptive compensation
CN113435515B (en) * 2021-06-29 2023-12-19 青岛海尔科技有限公司 Picture identification method and device, storage medium and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914839A (en) * 2014-03-27 2014-07-09 中山大学 Image stitching and tampering detection method and device based on steganalysis
CN104504368A (en) * 2014-12-10 2015-04-08 成都品果科技有限公司 Image scene recognition method and image scene recognition system
CN104598933A (en) * 2014-11-13 2015-05-06 上海交通大学 Multi-feature fusion based image copying detection method
CN106446754A (en) * 2015-08-11 2017-02-22 阿里巴巴集团控股有限公司 Image identification method, metric learning method, image source identification method and devices
CN106683031A (en) * 2016-12-30 2017-05-17 深圳大学 Feature extraction method and extraction system for digital image steganalysis
CN106991451A (en) * 2017-04-14 2017-07-28 武汉神目信息技术有限公司 A kind of identifying system and method for certificate picture
CN108171689A (en) * 2017-12-21 2018-06-15 深圳大学 A kind of identification method, device and the storage medium of the reproduction of indicator screen image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9230192B2 (en) * 2013-11-15 2016-01-05 Adobe Systems Incorporated Image classification using images with separate grayscale and color channels


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fabric scan pattern recognition based on Adaboost multi-feature fusion; Zhang Cheng et al.; Modern Textile Technology; 2016-09-10 (No. 05); full text *

Also Published As

Publication number Publication date
CN110490214A (en) 2019-11-22


GR01 Patent grant