CN114723636A - Model generation method, device, equipment and storage medium based on multi-feature fusion - Google Patents

Model generation method, device, equipment and storage medium based on multi-feature fusion

Info

Publication number
CN114723636A
CN114723636A (application CN202210438538.5A)
Authority
CN
China
Prior art keywords
image
target
feature
feature extraction
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210438538.5A
Other languages
Chinese (zh)
Inventor
韩金城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Puhui Enterprise Management Co Ltd
Original Assignee
Ping An Puhui Enterprise Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Puhui Enterprise Management Co Ltd filed Critical Ping An Puhui Enterprise Management Co Ltd
Priority to CN202210438538.5A
Publication of CN114723636A
Legal status: Pending

Classifications

    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/60 Rotation of a whole image or part thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention relates to artificial intelligence technology and discloses a model generation method based on multi-feature fusion, which comprises the following steps: performing image enhancement on an image to be processed to obtain a training image; extracting target feature extraction methods one by one from a feature extraction method set, and extracting target image features of the training image with each target feature extraction method; selecting a target kernel function from a kernel function library of a classification model according to the feature extraction method, and performing high-dimensional mapping on the target image features with the target kernel function to obtain high-dimensional target features; performing feature fusion on the high-dimensional target features corresponding to each feature extraction method to obtain fusion features; and calculating a prediction classification result from the fusion features with the classification model, then adjusting parameters of the classification model according to the real classification result corresponding to the training image and the prediction classification result to obtain a standard classification model. The invention also provides a model generation device, equipment and medium based on multi-feature fusion. The invention can improve image classification precision.

Description

Model generation method, device, equipment and storage medium based on multi-feature fusion
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a model generation method and device based on multi-feature fusion, electronic equipment and a computer readable storage medium.
Background
When a model such as a support vector machine classifies training data, a single feature can be mapped with only one kernel function at a time, so obtaining a good result may require multiple rounds of training with different kernel functions. Multiple features can compensate for each other's shortcomings, improving precision and robustness and allowing complex scenes to be handled. However, in the usual multi-feature fusion process, several short features are concatenated into one long feature and then input into the support vector machine for training and classification. The shortcoming of this approach is that the classifier does not know the types of the individual features during training: some features are linearly separable and some are not, so directly fusing the short features and mapping them with a single kernel function yields fusion features that are not accurate enough, and in turn image classification precision that is not high enough.
Disclosure of Invention
The invention provides a model generation method and device based on multi-feature fusion and a computer readable storage medium, and mainly aims to solve the problem that the image classification precision is not high enough.
In order to achieve the above object, the present invention provides a model generation method based on multi-feature fusion, including:
acquiring an image to be processed, performing image enhancement on the image to be processed to obtain a training image, and acquiring a real classification result corresponding to the training image;
one feature extraction method is extracted from a preset feature extraction method set one by one to serve as a target feature extraction method, and target image features of the training image are extracted by the target feature extraction method;
selecting a target kernel function from a kernel function library of a preset classification model according to the feature extraction method, and performing high-dimensional mapping on the target image feature by using the target kernel function to obtain a high-dimensional target feature;
performing feature fusion on the high-dimensional target features corresponding to each feature extraction method in the feature extraction method set to obtain fusion features;
and calculating the fusion characteristics by using the classification model to obtain a prediction classification result, and adjusting parameters of the classification model according to the real classification result and the prediction classification result to obtain a standard classification model.
Optionally, the image enhancement on the image to be processed to obtain a training image includes:
carrying out basic shape processing on the image to be processed to obtain a first enhanced image;
carrying out color processing on the image to be processed to obtain a second enhanced image;
and denoising the first enhanced image and the second enhanced image to obtain a training image.
Optionally, the performing color processing on the image to be processed to obtain a second enhanced image includes:
converting a red gray value, a blue gray value and a green gray value of each pixel of the image to be processed, which are expressed by using an RGB color mode, into a hue value, a color saturation value and a brightness value of each pixel of the image to be processed, which are expressed by using an HSI color space;
and performing enhancement processing on the color saturation value of each pixel, and calculating a red gray value, a blue gray value and a green gray value of each pixel according to the brightness value, the hue value and a preset brightness threshold value of each pixel to obtain a second enhanced image.
Optionally, the denoising the first enhanced image and the second enhanced image to obtain a training image includes:
carrying out mean value removing and whitening preprocessing on the image to be processed to obtain a first processed image;
determining a separation matrix for separating each independent component in the first processed image according to an MMI algorithm and the first processed image;
separating non-Gaussian noise in the first processed image according to the separation matrix to obtain a second processed image only containing Gaussian noise;
and removing noise from the second processed image based on a preset algorithm for removing Gaussian noise to obtain a training image.
Optionally, the extracting, by using the target feature extraction method, the target image feature of the training image includes:
graying the training image, and standardizing the grayed image by adopting a Gamma correction method to obtain a standard image;
dividing the standard image into preset units, and performing weighted projection on each pixel in each unit in a histogram by using a gradient direction to obtain a gradient direction histogram of each unit;
connecting a preset number of units into blocks, and normalizing the gradient histogram of each unit in the blocks to obtain block characteristics;
and connecting all the block features in the training image in series to obtain the target image features.
Optionally, the extracting, by using the target feature extraction method, the target image feature of the training image includes:
dividing the training image into a plurality of regions with preset sizes, and comparing gray values according to each pixel in the regions and a preset number of pixels adjacent to the pixels;
binary marking is carried out on each pixel value according to the gray value comparison result, and a characteristic value of the region is generated according to the marking result;
calculating a corresponding histogram according to the characteristic value of each region, and performing normalization processing on the histograms;
and connecting the histograms subjected to the normalization processing in each area into a feature vector to obtain the target image features of the training image.
Optionally, the adjusting parameters of the classification model according to the real classification result and the predicted classification result to obtain a standard classification model includes:
when the real classification result is different from the prediction classification result, extracting all adjustable parameters in the classification model, and generating a parameter list according to the adjustable parameters;
performing combination and grid search on the parameters in the parameter list, and determining, according to the grid search results, a target parameter combination and a target hyperplane coefficient whose fitting scores meet preset conditions;
and updating parameters of the classification model according to the target parameter combination and the target hyperplane coefficient to obtain a standard classification model.
In order to solve the above problem, the present invention further provides a model generation apparatus based on multi-feature fusion, including:
the training image generation module is used for acquiring an image to be processed, performing image enhancement on the image to be processed to obtain a training image and acquiring a real classification result corresponding to the training image;
the target image feature extraction module is used for extracting one feature extraction method from a preset feature extraction method set one by one to serve as a target feature extraction method, and extracting the target image features of the training image by using the target feature extraction method;
the high-dimensional target feature generation module is used for selecting a target kernel function from a kernel function library of a preset classification model according to the feature extraction method, and performing high-dimensional mapping on the target image feature by using the target kernel function to obtain a high-dimensional target feature;
the fusion feature generation module is used for carrying out feature fusion on the high-dimensional target features corresponding to each feature extraction method in the feature extraction method set to obtain fusion features;
and the standard classification model generation module is used for calculating the fusion characteristics by using the classification model to obtain a prediction classification result, and adjusting parameters of the classification model according to the real classification result and the prediction classification result to obtain a standard classification model.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the multi-feature fusion based model generation method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the multi-feature fusion based model generation method described above.
According to the embodiment of the invention, image enhancement of the image to be processed expands the set of images to be processed, increasing their number and ensuring that the extracted features and the final fusion features fit the images to be processed sufficiently well; features of the training image are extracted one by one with a plurality of feature extraction methods, yielding image features of different types; and by selecting the target kernel function from the preset kernel function library according to the feature extraction method, a different kernel function can be selected for mapping each type of image feature, which improves the accuracy of the generated high-dimensional features and, in turn, the accuracy of the model for image classification. Therefore, the model generation method and device, the electronic equipment and the computer-readable storage medium based on multi-feature fusion can solve the problem that the image classification accuracy is not high enough.
Drawings
Fig. 1 is a schematic flowchart of a model generation method based on multi-feature fusion according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of image enhancement on an image to be processed according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a process of extracting target image features of a training image by using a target feature extraction method according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of a model generation apparatus based on multi-feature fusion according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device implementing the model generation method based on multi-feature fusion according to an embodiment of the present invention.
The objects, features and advantages of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a model generation method based on multi-feature fusion. The executing body of the model generating method based on multi-feature fusion includes, but is not limited to, at least one of electronic devices such as a server and a terminal, which can be configured to execute the method provided by the embodiments of the present application. In other words, the model generation method based on multi-feature fusion may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
Fig. 1 is a schematic flow chart of a model generation method based on multi-feature fusion according to an embodiment of the present invention. In this embodiment, the multi-feature fusion-based model generation method includes the following steps S1-S5:
s1, obtaining an image to be processed, carrying out image enhancement on the image to be processed to obtain a training image, and obtaining a real classification result corresponding to the training image.
In an embodiment of the present invention, the image to be processed may be an image drawn from multiple categories, for example from the Brodatz image set, which includes 112 categories, each of which may include one or more 640×640 pictures in TIFF format.
In the embodiment of the invention, the real classification result corresponding to the training image is the real category marked according to the training image and is used for training the model.
In the embodiment of the invention, obtaining more image data from the image to be processed helps avoid overfitting when the model is generated. When training images are limited, data augmentation can transform the original image data to generate new image data, thereby expanding the training set; when a large amount of image data is available, data augmentation can prevent the model from learning irrelevant patterns during training, fundamentally improving overall performance.
Referring to fig. 2, in the embodiment of the present invention, the image enhancement on the image to be processed to obtain the training image includes the following steps S21 to S23:
s21, performing basic shape processing on the image to be processed to obtain a first enhanced image;
s22, performing color processing on the image to be processed to obtain a second enhanced image;
s23, denoising the first enhanced image and the second enhanced image to obtain a training image.
Specifically, in the embodiment of the present invention, the basic shape processing includes, but is not limited to, image flipping (flip), image rotation (rotate), image scaling (re-scale), image cropping (crop), and image panning (pad).
Image flipping turns the image horizontally or vertically. Image rotation does not necessarily preserve the image dimensions: a square image keeps its size after a 90-degree rotation, and a rectangular image keeps its size after a 180-degree rotation, but rotating by a smaller angle changes the size of the final image. Image scaling enlarges or shrinks the image; when enlarging, most image processing frameworks crop the enlarged image back to the original size, and when shrinking, the resulting image is smaller than the original. Image cropping randomly samples a part of the image to be processed and then resizes that part to the original image size. Image translation moves the image along the X axis, the Y axis, or both directions at once.
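As a concrete illustration, the following minimal Python sketch (using OpenCV and NumPy, which the patent does not prescribe) applies each of these basic shape operations to one image; the angle, scale factor and translation offsets are assumed values:

```python
import cv2
import numpy as np

def basic_shape_augment(image: np.ndarray) -> list:
    """Apply the basic shape operations described above to one image."""
    h, w = image.shape[:2]
    out = []
    out.append(cv2.flip(image, 1))   # horizontal flip
    out.append(cv2.flip(image, 0))   # vertical flip
    # Small-angle rotation about the center; keeping the original canvas size
    # means content is cut off, as noted above for non-90/180-degree angles.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0)
    out.append(cv2.warpAffine(image, M, (w, h)))
    # Enlarge, then crop the center back to the original size.
    big = cv2.resize(image, None, fx=1.2, fy=1.2)
    y0, x0 = (big.shape[0] - h) // 2, (big.shape[1] - w) // 2
    out.append(big[y0:y0 + h, x0:x0 + w])
    # Random crop, resized back to the original size.
    y = np.random.randint(0, max(1, h // 4))
    x = np.random.randint(0, max(1, w // 4))
    out.append(cv2.resize(image[y:y + 3 * h // 4, x:x + 3 * w // 4], (w, h)))
    # Translate along the X and Y axes.
    T = np.float32([[1, 0, 20], [0, 1, 10]])
    out.append(cv2.warpAffine(image, T, (w, h)))
    return out
```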
Further, the color processing the image to be processed to obtain a second enhanced image includes:
converting a red gray value, a blue gray value and a green gray value of each pixel of the image to be processed, which are expressed by using an RGB color mode, into a hue value, a color saturation value and a brightness value of each pixel of the image to be processed, which are expressed by using an HSI color space;
and performing enhancement processing on the color saturation value of each pixel, and calculating a red gray value, a blue gray value and a green gray value of each pixel according to the brightness value, the hue value and a preset brightness threshold value of each pixel to obtain a second enhanced image.
In the embodiment of the present invention, the RGB color mode means that each pixel of an image is formed by superimposing red, green and blue light, so the color of each pixel in the image to be processed can be represented by a red gray value, a blue gray value and a green gray value; the RGB color mode facilitates quantization for image display. The HSI color space, modeled on the human visual system, represents the color of each pixel in the image to be processed by a hue value (Hue), a saturation value (Saturation) and a brightness value (Intensity).
In the embodiment of the invention, the color cast phenomenon is avoided by performing enhancement processing and correction processing on the color saturation value of the pixel.
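A minimal sketch of this color processing step, assuming OpenCV's HSV space as a stand-in for the HSI space named in the patent, and an assumed saturation gain of 1.3:

```python
import cv2
import numpy as np

def enhance_saturation(image_bgr: np.ndarray, gain: float = 1.3) -> np.ndarray:
    """Boost per-pixel saturation, then convert back to RGB gray values.
    OpenCV's HSV space is used here as a stand-in for the patent's HSI space."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * gain, 0, 255)   # enhance saturation channel
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```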
In an embodiment of the present invention, the denoising the first enhanced image and the second enhanced image to obtain a training image includes:
carrying out mean value removing and whitening preprocessing on the image to be processed to obtain a first processed image;
determining a separation matrix for separating each independent component in the first processed image according to an MMI algorithm and the first processed image;
separating non-Gaussian noise in the first processed image according to the separation matrix to obtain a second processed image only containing Gaussian noise;
and removing noise from the second processed image based on a preset algorithm for removing Gaussian noise to obtain a training image.
In this way, the MMI algorithm is used to separate the non-Gaussian noise from the image to be processed, yielding an image that contains only Gaussian noise; an ideal denoising effect can then be achieved simply by removing the Gaussian noise from that image.
According to the embodiment of the invention, the second processed image can be denoised by using a Volterra image filtering (VLMS) algorithm based on a least mean square algorithm, so that a training image without noise is obtained.
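A rough Python sketch of this two-stage denoising, with scikit-learn's FastICA standing in for the MMI algorithm and a plain Gaussian filter standing in for the VLMS filter; the component count and the kurtosis heuristic for picking the non-Gaussian component are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import FastICA

def denoise(image: np.ndarray) -> np.ndarray:
    """Two-stage denoising: separate a non-Gaussian noise component,
    then remove the remaining (approximately Gaussian) noise."""
    h, w = image.shape
    # De-meaning and whitening are performed inside FastICA, which stands in
    # here for the MMI algorithm that estimates the separation matrix.
    ica = FastICA(n_components=min(h, w, 16), whiten="unit-variance", random_state=0)
    sources = ica.fit_transform(image.T).T            # rows = independent components
    # Treat the most non-Gaussian component (largest excess kurtosis) as the
    # non-Gaussian noise and zero it out before reconstructing.
    centered = sources - sources.mean(axis=1, keepdims=True)
    kurt = (centered ** 4).mean(axis=1) / sources.var(axis=1) ** 2 - 3
    sources[np.argmax(np.abs(kurt))] = 0
    mostly_gaussian = ica.mixing_ @ sources + ica.mean_[:, None]
    # The patent removes the remaining Gaussian noise with a VLMS filter;
    # a plain Gaussian filter is used here for brevity.
    return gaussian_filter(mostly_gaussian, sigma=1.0)
```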
And S2, extracting one of the feature extraction methods from a preset feature extraction method set one by one to serve as a target feature extraction method, and extracting the target image features of the training image by using the target feature extraction method.
In the embodiment of the present invention, the preset feature extraction method collectively includes a plurality of feature extraction methods, where the feature extraction methods include, but are not limited to, an HOG method and an LBP method.
The HOG method extracts gradient features from all image blocks in an image and combines them into a feature description of the image. For example, features can be extracted with the existing MATLAB toolbox using the following command: H = extractHOGFeatures(image). Here H is the feature vector, extractHOGFeatures() is a built-in MATLAB function, and image is the input image.
The LBP method extracts local texture features of the image and uses them as the image features. These can likewise be extracted with the existing MATLAB toolbox using the following command: G = extractLBPFeatures(image). Here G is the resulting feature vector, extractLBPFeatures() is a built-in MATLAB function, and image is the input image.
Referring to fig. 3, in the embodiment of the present invention, the extracting the target image feature of the training image by using the target feature extracting method includes the following steps S31-S34:
s31, graying the training image, and standardizing the grayed image by a Gamma correction method to obtain a standard image;
s32, dividing the standard image into preset units, and performing weighted projection on each pixel in each unit in a histogram by using a gradient direction to obtain a gradient direction histogram of each unit;
s33, connecting a preset number of units into blocks, and normalizing the gradient histogram of each unit in the blocks to obtain block characteristics;
and S34, connecting all the block features in the training image in series to obtain the target image feature.
In the embodiment of the present invention, graying the training image allows it to be treated as a three-dimensional image over x, y and z (the gray level); standardizing the color space of the grayed image adjusts the contrast of the image, reduces the influence of local shadows and illumination changes, and suppresses noise interference; computing the gradient (magnitude and direction) of each pixel in the standard image captures contour information while further weakening the interference of illumination; and normalizing the gradient histograms of the cells within each block further compresses lighting, shadows and edges.
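A minimal sketch of steps S31-S34 using scikit-image's hog function; the gamma value, cell size, block size and normalization scheme are typical assumed values, not taken from the patent:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def hog_features(image_rgb: np.ndarray) -> np.ndarray:
    """Sketch of steps S31-S34: gray + Gamma, per-cell gradient-direction
    histograms, per-block normalization, and concatenation."""
    gray = rgb2gray(image_rgb)
    gray = gray ** 0.5                   # Gamma correction (gamma assumed 0.5)
    return hog(
        gray,
        orientations=9,                  # gradient-direction histogram bins
        pixels_per_cell=(8, 8),          # the preset cell size
        cells_per_block=(2, 2),          # cells connected into one block
        block_norm="L2-Hys",             # normalization within each block
        feature_vector=True,             # series-connect all block features
    )
```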
In another optional embodiment of the present invention, the extracting the target image feature of the training image by using the target feature extraction method includes:
dividing the training image into a plurality of regions with preset sizes, and comparing gray values according to each pixel in the regions and a preset number of pixels adjacent to the pixels;
binary marking is carried out on each pixel value according to the gray value comparison result, and a characteristic value of the region is generated according to the marking result;
calculating a corresponding histogram according to the characteristic value of each region, and performing normalization processing on the histogram;
and connecting the histograms after normalization processing of each region into a feature vector to obtain the target image features of the training image.
Specifically, for each pixel in a region, its gray value is compared with those of its 8 adjacent pixels: a surrounding pixel whose value is greater than that of the central pixel is marked 1, otherwise 0. Comparing the 8 points of the 3×3 neighborhood thus generates an 8-bit binary number, from which the characteristic value of the region is obtained.
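A minimal sketch of this LBP pipeline using scikit-image; the 4×4 region grid and 256-bin histograms are assumed "preset" values:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_features(gray: np.ndarray, grid: int = 4) -> np.ndarray:
    """Sketch of the LBP pipeline: per-pixel codes from 8-neighbor gray-value
    comparisons, per-region normalized histograms, concatenated into one vector."""
    codes = local_binary_pattern(gray, P=8, R=1.0)    # 8 neighbors at radius 1
    h, w = codes.shape
    feats = []
    for i in range(grid):                             # grid x grid regions
        for j in range(grid):
            region = codes[i * h // grid:(i + 1) * h // grid,
                           j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(region, bins=256, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))   # normalize each histogram
    return np.concatenate(feats)                      # the target image feature
```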
In the embodiment of the present invention, target image features of multiple training images may be extracted by different target feature extraction methods. For example, the HOG method may yield a feature vector H = [h1, h2, …, hm], that is, a feature vector with m dimensions, and the LBP method may yield a feature vector G = [g1, g2, …, gn], that is, a feature vector with n dimensions. Here m and n are integers greater than 1, and m and n are independent.
S3, selecting a target kernel function from a kernel function library of a preset classification model according to the feature extraction method, and performing high-dimensional mapping on the target image feature by using the target kernel function to obtain a high-dimensional target feature.
In this embodiment of the present invention, the preset classification model may be a Support Vector Machine (SVM), a classification model whose basic form is a maximum-margin linear classifier defined in a feature space.
In the embodiment of the present invention, a kernel function library of the preset classification model may have kernel functions that perform high-dimensional mapping for different image features, where the kernel functions include, but are not limited to, linear kernel functions, polynomial kernel functions, and gaussian kernel functions.
In the embodiment of the invention, the kernel functions suitable for image features extracted by different feature extraction methods may differ, so the appropriate target kernel function can be selected according to the method used to extract the image features.
For example, a Gaussian kernel function works better for the feature vector H = [h1, h2, …, hm] obtained by the HOG method, while a polynomial kernel function works better for the feature vector G = [g1, g2, …, gn] obtained by the LBP method; accordingly, a Gaussian kernel can be chosen for the feature vector H and a polynomial kernel for the feature vector G.
In this embodiment of the present invention, the performing high-dimensional mapping on the target image feature by using the target kernel function to obtain a high-dimensional target feature includes:
adding a preset polynomial to the target image characteristics to obtain sample data;
and calculating the sample data by using the target kernel function to obtain high-dimensional target characteristics.
For example, if the target image feature is a one-dimensional feature x, a feature x² may be added to it; the resulting sample data are then distributed in a two-dimensional plane, while their positions along the X axis remain unchanged.
Specifically, the sample data may be calculated using the target kernel function K(G1, G2), where G1 and G2 are the sample data (as low-dimensional feature vectors); the specific form of the kernel function is given as a formula image in the original publication.
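A minimal sketch of step S3 with scikit-learn's pairwise kernels, following the example pairing above (Gaussian kernel for HOG features, polynomial kernel for LBP features); the gamma, degree and coef0 values are assumptions:

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def mapped_kernels(H: np.ndarray, G: np.ndarray):
    """Step S3: apply the kernel chosen per feature extraction method.
    H holds HOG feature rows, G holds LBP feature rows, one row per image."""
    K_hog = rbf_kernel(H, gamma=0.1)                  # Gaussian kernel for HOG
    K_lbp = polynomial_kernel(G, degree=2, coef0=1)   # polynomial kernel for LBP
    return K_hog, K_lbp
```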
And S4, performing feature fusion on the high-dimensional target features corresponding to each feature extraction method in the feature extraction method set to obtain fusion features.
The embodiment of the invention can adopt a series connection method, a zipper method, a user-defined weighting method and the like to perform feature fusion on the high-dimensional target features to obtain the fusion features. For example, in feature fusion by the series method, a plurality of high-dimensional target features are connected end to end in a preset order, turning several short high-dimensional target features into one long high-dimensional target feature, which is the fusion feature.
In the embodiment of the invention, selecting a different kernel function for each type of target image feature makes the vector representation in the final high-dimensional space more accurate; at the same time, performing the high-dimensional mapping per feature first and fusing the resulting high-dimensional target features afterwards preserves more kernel combinations than fusing first.
For example, suppose there are three optional kernel functions. If a kernel function is selected separately for the feature vector H and for the feature vector G, and the results of the two kernel calculations are then fused, there are 9 possible combinations; if the feature vectors H and G are fused first and a kernel function is then selected for the high-dimensional mapping, only 3 combinations are available.
And S5, calculating the fusion characteristics by using the classification model to obtain a prediction classification result, and adjusting parameters of the classification model according to the real classification result and the prediction classification result to obtain a standard classification model.
The embodiment of the invention can adopt an SVM to calculate the optimal hyperplane of the fusion features in the feature space, and then classify according to the optimal hyperplane to obtain the prediction classification result.
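Putting S4 and S5 together, a minimal sketch that fuses the per-feature kernels by summing the kernel matrices (the kernel-space counterpart of series-concatenating the mapped features) and trains an SVM on the fused result; the precomputed-kernel formulation is an illustrative choice, not the patent's prescribed procedure:

```python
from sklearn.svm import SVC

def train_fused_svm(H, G, y):
    """Steps S4-S5: fuse the per-feature kernels and fit the classifier."""
    K_hog, K_lbp = mapped_kernels(H, G)   # from the sketch above
    # Summing kernel matrices corresponds to concatenating the two sets of
    # high-dimensionally mapped features (series-method fusion).
    K_fused = K_hog + K_lbp
    clf = SVC(kernel="precomputed")
    clf.fit(K_fused, y)                   # optimal hyperplane in the fused space
    return clf
```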
In an embodiment of the present invention, the adjusting parameters of the classification model according to the real classification result and the predicted classification result to obtain a standard classification model includes:
when the real classification result is different from the prediction classification result, extracting all adjustable parameters in the classification model, and generating a parameter list according to the adjustable parameters;
performing combination and grid search on the parameters in the parameter list, and determining, according to the grid search results, a target parameter combination and a target hyperplane coefficient whose fitting scores meet preset conditions;
and updating parameters of the classification model according to the target parameter combination and the target hyperplane coefficient to obtain a standard classification model.
In the embodiment of the invention, the adjustable parameters are hyper-parameters in kernel functions in the classification model.
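A minimal grid-search sketch with scikit-learn's GridSearchCV; the parameter names and value grids are assumed for illustration, and a plain SVC is used here instead of the fused precomputed kernel above for brevity:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def tune_svm(X_train, y_train):
    """Parameter adjustment by grid search over an (assumed) parameter list."""
    param_grid = {                        # the tunable kernel hyperparameters
        "C": [0.1, 1, 10],
        "kernel": ["rbf", "poly"],
        "gamma": [0.01, 0.1, 1],
    }
    search = GridSearchCV(SVC(), param_grid, scoring="accuracy", cv=5)
    search.fit(X_train, y_train)
    # Target parameter combination and the fitting score it achieved.
    return search.best_params_, search.best_score_
```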
According to the embodiment of the invention, image enhancement of the image to be processed expands the set of images to be processed, increasing their number and ensuring that the extracted features and the final fusion features fit the images to be processed sufficiently well; features of the training image are extracted one by one with a plurality of feature extraction methods, yielding image features of different types; and by selecting the target kernel function from the preset kernel function library according to the feature extraction method, a different kernel function can be selected for mapping each type of image feature, which improves the accuracy of the generated high-dimensional features and, in turn, the accuracy of the model for image classification. Therefore, the model generation method based on multi-feature fusion can solve the problem that the image classification precision is not high enough.
Fig. 4 is a functional block diagram of a model generation apparatus based on multi-feature fusion according to an embodiment of the present invention.
The multi-feature fusion-based model generation apparatus 100 according to the present invention may be installed in an electronic device. According to the realized functions, the multi-feature fusion-based model generation apparatus 100 may include a training image generation module 101, a target image feature extraction module 102, a high-dimensional target feature generation module 103, a fusion feature generation module 104, and a standard classification model generation module 105. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the training image generation module 101 is configured to acquire an image to be processed, perform image enhancement on the image to be processed to obtain a training image, and acquire a real classification result corresponding to the training image;
the target image feature extraction module 102 is configured to extract one of feature extraction methods from a preset set of feature extraction methods one by one as a target feature extraction method, and extract a target image feature of the training image by using the target feature extraction method;
the high-dimensional target feature generation module 103 is configured to select a target kernel function from a kernel function library of a preset classification model according to the feature extraction method, and perform high-dimensional mapping on the target image features by using the target kernel function to obtain high-dimensional target features;
the fusion feature generation module 104 is configured to perform feature fusion on the high-dimensional target features corresponding to each feature extraction method in the feature extraction method set to obtain fusion features;
the standard classification model generation module 105 is configured to calculate the fusion features by using the classification model to obtain a predicted classification result, and adjust parameters of the classification model according to the real classification result and the predicted classification result to obtain a standard classification model.
In detail, when the modules in the multi-feature fusion-based model generation apparatus 100 according to the embodiment of the present invention are used, the same technical means as the multi-feature fusion-based model generation method described in fig. 1 to 3 are adopted, and the same technical effects can be produced, which is not described herein again.
Fig. 5 is a schematic structural diagram of an electronic device for implementing a model generation method based on multi-feature fusion according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a multi-feature fusion based model generation program, stored in the memory 11 and executable on the processor 10.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, and includes one or more Central Processing Units (CPUs), a microprocessor, a digital Processing chip, a graphics processor, a combination of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (for example, executing a model generation program based on multi-feature fusion, etc.) stored in the memory 11 and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used to store not only application software installed in the electronic device and various types of data, such as codes of a model generation program based on multi-feature fusion, but also temporarily store data that has been output or is to be output.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
Fig. 5 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the embodiments described are illustrative only and are not to be construed as limiting the scope of the claims.
The multi-feature fusion based model generation program stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions, and when running in the processor 10, can realize:
acquiring an image to be processed, performing image enhancement on the image to be processed to obtain a training image, and acquiring a real classification result corresponding to the training image;
one feature extraction method is extracted from a preset feature extraction method set one by one to serve as a target feature extraction method, and target image features of the training image are extracted by the target feature extraction method;
selecting a target kernel function from a kernel function library of a preset classification model according to the feature extraction method, and performing high-dimensional mapping on the target image feature by using the target kernel function to obtain a high-dimensional target feature;
performing feature fusion on the high-dimensional target features corresponding to each feature extraction method in the feature extraction method set to obtain fusion features;
and calculating the fusion characteristics by using the classification model to obtain a prediction classification result, and adjusting parameters of the classification model according to the real classification result and the prediction classification result to obtain a standard classification model.
Specifically, the specific implementation method of the instruction by the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to the drawings, which is not described herein again.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
acquiring an image to be processed, performing image enhancement on the image to be processed to obtain a training image, and acquiring a real classification result corresponding to the training image;
one feature extraction method is extracted from a preset feature extraction method set one by one to serve as a target feature extraction method, and target image features of the training image are extracted by the target feature extraction method;
selecting a target kernel function from a kernel function library of a preset classification model according to the feature extraction method, and performing high-dimensional mapping on the target image feature by using the target kernel function to obtain a high-dimensional target feature;
performing feature fusion on the high-dimensional target features corresponding to each feature extraction method in the feature extraction method set to obtain fusion features;
and calculating the fusion characteristics by using the classification model to obtain a prediction classification result, and adjusting parameters of the classification model according to the real classification result and the prediction classification result to obtain a standard classification model.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiment of the application can acquire and process related data based on an artificial intelligence technology. Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A model generation method based on multi-feature fusion is characterized by comprising the following steps:
acquiring an image to be processed, performing image enhancement on the image to be processed to obtain a training image, and acquiring a real classification result corresponding to the training image;
one feature extraction method is extracted from a preset feature extraction method set one by one to serve as a target feature extraction method, and target image features of the training image are extracted by the target feature extraction method;
selecting a target kernel function from a kernel function library of a preset classification model according to the feature extraction method, and performing high-dimensional mapping on the target image feature by using the target kernel function to obtain a high-dimensional target feature;
performing feature fusion on the high-dimensional target features corresponding to each feature extraction method in the feature extraction method set to obtain fusion features;
and calculating the fusion characteristics by using the classification model to obtain a prediction classification result, and adjusting parameters of the classification model according to the real classification result and the prediction classification result to obtain a standard classification model.
2. The method for generating a model based on multi-feature fusion according to claim 1, wherein the image enhancement of the image to be processed to obtain a training image comprises:
carrying out basic shape processing on the image to be processed to obtain a first enhanced image;
carrying out color processing on the image to be processed to obtain a second enhanced image;
and denoising the first enhanced image and the second enhanced image to obtain a training image.
3. The method for generating a model based on multi-feature fusion as claimed in claim 2, wherein the color processing the image to be processed to obtain a second enhanced image comprises:
converting the red gray value, the blue gray value and the green gray value of each pixel of the image to be processed, which are expressed by using an RGB color mode, into a hue value, a color saturation value and a brightness value of each pixel of the image to be processed, which are expressed by using an HSI color space;
and performing enhancement processing on the color saturation value of each pixel, and calculating a red gray value, a blue gray value and a green gray value of each pixel according to the brightness value, the hue value and a preset brightness threshold value of each pixel to obtain a second enhanced image.
4. The method for generating a model based on multi-feature fusion as claimed in claim 2, wherein said denoising said first enhanced image and said second enhanced image to obtain a training image comprises:
carrying out mean value removing and whitening preprocessing on the image to be processed to obtain a first processed image;
determining a separation matrix for separating each independent component in the first processed image according to an MMI algorithm and the first processed image;
separating non-Gaussian noise in the first processed image according to the separation matrix to obtain a second processed image only containing Gaussian noise;
and removing noise from the second processed image based on a preset algorithm for removing Gaussian noise to obtain a training image.
5. The method for generating a model based on multi-feature fusion according to claim 1, wherein the extracting the target image feature of the training image by using the target feature extraction method comprises:
graying the training image, and standardizing the grayed image by adopting a Gamma correction method to obtain a standard image;
dividing the standard image into preset units, and performing weighted projection on each pixel in each unit in a histogram by using a gradient direction to obtain a gradient direction histogram of each unit;
connecting a preset number of units into blocks, and normalizing the gradient histogram of each unit in the blocks to obtain block characteristics;
and connecting all the block features in the training image in series to obtain the target image features.
6. The method for generating a model based on multi-feature fusion according to claim 1, wherein the extracting the target image feature of the training image by using the target feature extraction method comprises:
dividing the training image into a plurality of regions with preset sizes, and comparing gray values according to each pixel in the regions and a preset number of pixels adjacent to the pixels;
binary marking is carried out on each pixel value according to the gray value comparison result, and the characteristic value of the area is generated according to the marking result;
calculating a corresponding histogram according to the characteristic value of each region, and performing normalization processing on the histogram;
and connecting the histograms after normalization processing of each region into a feature vector to obtain the target image features of the training image.
7. The multi-feature fusion-based model generation method of any one of claims 1 to 6, wherein the adjusting parameters of the classification model according to the real classification result and the predicted classification result to obtain a standard classification model comprises:
when the real classification result is different from the prediction classification result, extracting all adjustable parameters in the classification model, and generating a parameter list according to the adjustable parameters;
performing combination and grid search on the parameters in the parameter list, and determining, according to the grid search results, a target parameter combination and a target hyperplane coefficient whose fitting scores meet preset conditions;
and updating parameters of the classification model according to the target parameter combination and the target hyperplane coefficient to obtain a standard classification model.
8. An apparatus for generating a model based on multi-feature fusion, the apparatus comprising:
the training image generation module is used for acquiring an image to be processed, performing image enhancement on the image to be processed to obtain a training image and acquiring a real classification result corresponding to the training image;
the target image feature extraction module is used for extracting one feature extraction method from a preset feature extraction method set one by one to serve as a target feature extraction method, and extracting the target image features of the training image by using the target feature extraction method;
the high-dimensional target feature generation module is used for selecting a target kernel function from a kernel function library of a preset classification model according to the feature extraction method, and performing high-dimensional mapping on the target image features by using the target kernel function to obtain high-dimensional target features;
the fusion feature generation module is used for carrying out feature fusion on the high-dimensional target features corresponding to each feature extraction method in the feature extraction method set to obtain fusion features;
and the standard classification model generation module is used for calculating the fusion characteristics by using the classification model to obtain a prediction classification result, and adjusting parameters of the classification model according to the real classification result and the prediction classification result to obtain a standard classification model.
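One possible (assumed) realisation of the kernel-mapping and fusion modules of claim 8 maps each feature family through its own approximate kernel feature map and concatenates the results; the Nystroem approximation and the per-extractor kernel choices below are illustrative, not mandated by the claim.

import numpy as np
from sklearn.kernel_approximation import Nystroem

def fuse_high_dimensional_features(hog_feats, lbp_feats, n_components=100):
    # One target kernel per feature extraction method (assumed pairing);
    # n_components must not exceed the number of training samples.
    rbf_map = Nystroem(kernel='rbf', n_components=n_components, random_state=0)
    poly_map = Nystroem(kernel='poly', degree=2, n_components=n_components,
                        random_state=0)
    high_dim_hog = rbf_map.fit_transform(hog_feats)  # high-dimensional mapping
    high_dim_lbp = poly_map.fit_transform(lbp_feats)
    # Feature fusion: concatenate along the feature axis.
    return np.hstack([high_dim_hog, high_dim_lbp])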
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the multi-feature fusion based model generation method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the multi-feature fusion based model generation method according to any one of claims 1 to 7.
CN202210438538.5A 2022-04-25 2022-04-25 Model generation method, device, equipment and storage medium based on multi-feature fusion Pending CN114723636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210438538.5A CN114723636A (en) 2022-04-25 2022-04-25 Model generation method, device, equipment and storage medium based on multi-feature fusion

Publications (1)

Publication Number Publication Date
CN114723636A true CN114723636A (en) 2022-07-08

Family

ID=82245885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210438538.5A Pending CN114723636A (en) 2022-04-25 2022-04-25 Model generation method, device, equipment and storage medium based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN114723636A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046723A (en) * 2015-08-31 2015-11-11 中国科学院自动化研究所 Multi-core fusion based in-video target tracking method
CN109816596A (en) * 2017-11-21 2019-05-28 中移(杭州)信息技术有限公司 A kind of image de-noising method and device
CN110490194A (en) * 2019-07-24 2019-11-22 广东工业大学 A kind of recognition methods of the multiple features segment fusion traffic sign of adaptive weight

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIU, Guohua (ed.): "Machine Vision Technology", vol. 1, 30 November 2021, Huazhong University of Science and Technology Press, pages: 148 - 149 *
ZHANG, Xunhua: "Marine Geological Survey Technology", vol. 1, 31 December 2017, China Ocean Press, pages: 56 *
LI, Ping et al.: "Anti-Jamming Theory of Radio Fuzes", vol. 1, 30 June 2019, Beijing Institute of Technology Press, pages: 237 - 239 *
BAI, Xin et al.: "New Edition of University Computer Fundamentals", vol. 1, 31 January 2022, China Railway Publishing House, pages: 264 - 265 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761239A (en) * 2023-01-09 2023-03-07 深圳思谋信息科技有限公司 Semantic segmentation method and related device
CN115761239B (en) * 2023-01-09 2023-04-28 深圳思谋信息科技有限公司 Semantic segmentation method and related device
CN116958503A (en) * 2023-09-19 2023-10-27 广东新泰隆环保集团有限公司 Image processing-based sludge drying grade identification method and system
CN116958503B (en) * 2023-09-19 2024-03-12 广东新泰隆环保集团有限公司 Image processing-based sludge drying grade identification method and system

Similar Documents

Publication Publication Date Title
JP6774137B2 (en) Systems and methods for verifying the authenticity of ID photos
CN111652845A (en) Abnormal cell automatic labeling method and device, electronic equipment and storage medium
CN112528863A (en) Identification method and device of table structure, electronic equipment and storage medium
WO2021189901A1 (en) Image segmentation method and apparatus, and electronic device and computer-readable storage medium
CN114723636A (en) Model generation method, device, equipment and storage medium based on multi-feature fusion
CN113283446B (en) Method and device for identifying object in image, electronic equipment and storage medium
CN113705462B (en) Face recognition method, device, electronic equipment and computer readable storage medium
CN111639704A (en) Target identification method, device and computer readable storage medium
CN114758249B (en) Target object monitoring method, device, equipment and medium based on field night environment
CN112507934A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN111695609A (en) Target damage degree determination method, target damage degree determination device, electronic device, and storage medium
CN112651953B (en) Picture similarity calculation method and device, computer equipment and storage medium
CN115294483A (en) Small target identification method and system for complex scene of power transmission line
CN114708461A (en) Multi-modal learning model-based classification method, device, equipment and storage medium
CN112132812A (en) Certificate checking method and device, electronic equipment and medium
CN112507923A (en) Certificate copying detection method and device, electronic equipment and medium
CN114842240A Crop leaf image classification method fusing a ghost module and an attention mechanism into MobileNet V2
CN112200189B (en) Vehicle type recognition method and device based on SPP-YOLOv and computer readable storage medium
CN114444565A (en) Image tampering detection method, terminal device and storage medium
CN112541902A (en) Similar area searching method, similar area searching device, electronic equipment and medium
CN113627394B (en) Face extraction method and device, electronic equipment and readable storage medium
CN114783042A (en) Face recognition method, device, equipment and storage medium based on multiple moving targets
CN114463685A (en) Behavior recognition method and device, electronic equipment and storage medium
CN113792671A (en) Method and device for detecting face synthetic image, electronic equipment and medium
CN113850208A (en) Picture information structuring method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination