CN112652004B - Image processing method, device, equipment and medium - Google Patents
- Publication number: CN112652004B (application CN202011618995.XA)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/40: Image analysis; analysis of texture
- G06N 3/044: Neural networks; recurrent networks, e.g. Hopfield networks
- G06N 3/08: Neural networks; learning methods
- G06T 5/70: Image enhancement or restoration; denoising, smoothing
Abstract
The invention provides an image processing method, apparatus, device, and medium, which are used to solve the prior-art problem that a target area cannot be accurately acquired during image processing when the internal shape of the target area is similar to its overall shape. In the embodiment of the invention, a first image is subjected to graying processing; after graying, the low-frequency component of the gray image is extracted to obtain a low-frequency component image; texture features of the low-frequency component image are acquired to obtain a second image containing those texture features; and after the second image is obtained, the region of interest is extracted from it, so that the target region can be located simply and accurately.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, device, and medium.
Background
With the development of artificial intelligence and 5G technology, real-time information processing has become a major challenge in the field of image processing, in which quickly locating a target area in an image, for example a two-dimensional code area, is becoming more and more important. Extracting information from the image and then processing that information cannot meet real-time requirements, so quickly locating a region of interest in the image is one way to improve real-time performance.
Traditional positioning methods involve complex model construction and have poor robustness. If the target area in an image is located with a convolutional neural network, a large number of images are required for training and the algorithm places high demands on hardware; moreover, when the internal shape of the target area is similar to its overall shape, the target area is difficult to distinguish and therefore cannot be accurately acquired.
Disclosure of Invention
The invention provides an image processing method, an image processing apparatus, image processing equipment, and an image processing medium, which are used to solve the prior-art problem that a target area cannot be accurately acquired during image processing when the internal shape of the target area is similar to its overall shape.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
graying treatment is carried out on the first image to obtain a gray image of the first image;
extracting a gray level image of the first image through wavelet transformation to obtain a low-frequency component image, and obtaining texture features of the low-frequency component image to obtain a second image containing the texture features;
and extracting the region of interest from the second image and positioning a target region.
Further, before the first image is subjected to graying processing to obtain a gray image of the first image, the method includes:
and carrying out noise reduction processing on the first image.
Further, the acquiring the texture feature of the low frequency component image includes:
and obtaining the texture characteristics of the low-frequency classified image through local field enhancement mode texture characteristic extraction.
Further, the extracting the region of interest from the second image, and locating the target region includes:
and carrying out straight line detection and corner detection on the second image, determining each rectangular frame in the second image, and positioning a target area according to the area of each rectangular frame.
Further, after the straight line detection and the corner detection are performed on the second image, before determining each rectangular frame in the second image, the method further includes:
and performing expansion treatment and corrosion treatment on the second image.
Further, the determining each rectangular box in the second image includes:
and carrying out contour searching on the images subjected to the expansion treatment and the corrosion treatment, and obtaining each rectangular frame in the second image.
Further, the positioning the target area according to the area of each rectangular frame includes:
according to the area of each rectangular frame, determining the regions corresponding to a set number of rectangular frames with the largest areas among the rectangular frames as the target area; or
And determining the area corresponding to the rectangular frame with the area larger than the preset area threshold as a target area according to the area of each rectangular frame.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
the processing module is used for carrying out graying processing on the first image to obtain a gray image of the first image;
the acquisition module is used for extracting the gray level image of the first image through wavelet transformation, acquiring a low-frequency component image, and acquiring texture features of the low-frequency component image to obtain a second image containing the texture features;
the processing module is further used for extracting the region of interest from the second image and locating the target region.
Further, the processing module is specifically configured to perform noise reduction processing on the first image.
Further, the processing module is specifically configured to obtain texture features of the low-frequency component image through local neighborhood intensity pattern texture feature extraction.
Further, the processing module is specifically configured to perform line detection and corner detection on the second image, determine each rectangular frame in the second image, and position a target area according to the area of each rectangular frame.
Further, the processing module is specifically configured to perform expansion processing and corrosion processing on the second image.
Further, the processing module is specifically configured to perform contour search on the image after the expansion processing and the corrosion processing, and obtain each rectangular frame in the second image.
Further, the processing module is specifically configured to determine, according to an area of each rectangular frame, a region corresponding to a set number of rectangular frames with a largest area in the rectangular frames as a target region; or determining the area corresponding to the rectangular frame with the area larger than the preset area threshold as the target area according to the area of each rectangular frame.
In a third aspect, an embodiment of the present invention provides an electronic device, including at least a processor and a memory, where the processor is configured to execute any of the steps of image processing described above when executing a computer program stored in the memory.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing a computer program which, when executed by a processor, performs any of the above-described steps of image processing.
In the embodiment of the invention, the first image is subjected to graying processing; after graying, the low-frequency component of the gray image is extracted to obtain a low-frequency component image; texture features of the low-frequency component image are acquired to obtain a second image containing those texture features; and after the second image is obtained, the region of interest is extracted from it, so that the target region can be located simply and accurately.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an image processing procedure according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an implementation process of an LNIP texture feature extraction method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a process for determining an image obtained by transforming an input image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a detailed implementation of the image processing according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
In order to accurately locate a target area, the embodiment of the invention provides an image processing method, an image processing device, image processing equipment and a medium.
Fig. 1 is a schematic diagram of an image processing process according to an embodiment of the present invention, where the process includes the following steps:
s101: and carrying out graying treatment on the first image to obtain a gray image of the first image.
The image processing method provided by the embodiment of the invention is applied to an electronic device, and the electronic device may be an intelligent device such as an image acquisition device, a PC, or a server.
Because the gray image can better highlight the target area, in the embodiment of the invention, the gray image of the first image is obtained by performing the graying treatment on the first image.
When the first image is subjected to graying processing, a power-law transform (also called gamma transform) may be used, with the formula S = c·x^γ, where S is the gray value obtained after gray processing, c is a fixed value that adjusts the exponential transform (its specific value is not limited here), x is the luminance value of any pixel of the first image, and γ is the exponent of the power-law transform. When γ < 1, the transform renders darker areas of the image well; when γ > 1, it renders brighter areas well. Since the acquired label picture may be affected by bright light, γ is selected as 2 in the embodiment of the present invention; of course, other values may be selected, and the specific value can be chosen flexibly as needed.
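As a concrete illustration of the power-law transform S = c·x^γ described above (a minimal sketch, not the patent's code; the helper name and the assumption that luminance is normalised to [0, 1] are ours):

```python
def gamma_gray(pixels, c=1.0, gamma=2.0):
    # power-law (gamma) transform S = c * x**gamma,
    # applied here to luminance values normalised to [0, 1]
    return [c * (x ** gamma) for x in pixels]
```

With gamma = 2, as chosen in the embodiment, mid-tones such as 0.5 map to 0.25, which suppresses the influence of brighter regions.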
S102: and extracting the gray level image of the first image through wavelet transformation, obtaining a low-frequency component image, and obtaining the texture characteristics of the low-frequency component image to obtain a second image containing the texture characteristics.
In order to accurately acquire texture information and target contour information in an image, in the embodiment of the invention, a low-frequency component in a gray level image is acquired by wavelet transformation, so that a low-frequency component image is obtained, and after the low-frequency component image is acquired, texture features in the low-frequency component image are extracted, so that a second image containing the texture features is generated.
In the embodiment of the invention, in order to accurately extract the low-frequency component of the gray image, a Haar wavelet transform may be used. The wavelet transform operates on the frequency domain of the image and yields four frequency components of the gray image. The specific step for obtaining the frequency components is: cA, (cH, cV, cD) = dwt2(img, 'haar'), where img is the gray image, cA, cH, cV, and cD are the four frequency components, cA is the low-frequency component, and dwt2 is the selected wavelet-transform function. This formula performs a Haar wavelet transform on the gray image img to obtain the low-frequency component cA among the four frequency components; since the low-frequency component carries the contour features of the image, the low-frequency component is extracted in the embodiment of the invention.
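For illustration, the approximation (low-frequency) band of one level of the 2-D Haar transform can be sketched without any wavelet library. This is a minimal sketch assuming an even-sized image stored as nested lists; with orthonormal Haar filters, each cA coefficient equals the corresponding 2x2 block sum divided by 2, matching the cA returned by dwt2(img, 'haar'):

```python
def haar_low_freq(img):
    # one level of the 2-D Haar DWT, keeping only the approximation band cA;
    # with orthonormal Haar filters cA[i][j] is the 2x2 block sum divided by 2
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1]
              + img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 2.0
             for j in range(w)] for i in range(h)]
```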
In order to extract the texture features of the image, local neighborhood intensity pattern (Local Neighborhood Intensity Pattern, LNIP) texture feature extraction is used in embodiments of the present invention to extract the texture features of the acquired image.
S103: and extracting the region of interest from the second image and positioning a target region.
In order to accurately acquire the target region, in the embodiment of the present invention the region of interest is extracted from the acquired second image, so that the target region can be acquired. Specifically, during region-of-interest extraction, the target region is circled in a set color, so that the target area can be accurately located.
In the embodiment of the invention, the first image is subjected to graying processing; after graying, the low-frequency component of the gray image is extracted to obtain a low-frequency component image; texture features of the low-frequency component image are acquired to obtain a second image containing those texture features; and after the second image is obtained, the region of interest is extracted from it, so that the target region can be located simply and accurately.
Example 2:
in order to accurately realize positioning of a target area, in the embodiment of the present invention, before the first image is subjected to graying processing to obtain a gray image of the first image, the method includes:
and carrying out noise reduction processing on the first image.
Because the image acquisition device is affected by illumination and other environmental factors when acquiring images, the acquired image contains a large amount of noise. To improve the accuracy of target-area positioning, noise reduction may be performed on the acquired image; the noise reduction may use a bilateral filtering method.
The noise-reduction process of the bilateral filtering method is as follows: for each pixel, a weighted average of the luminance values of the surrounding pixels represents the intensity of that pixel, where intensity refers to the luminance value obtained by the weighted average. The specific bilateral filtering step is to invoke the bilateral filtering function cv2.bilateralFilter(img, d, sigmaColor, sigmaSpace) in the opencv vision library, where img is the first image, d is the radius of the filter window, sigmaColor is the standard deviation of the gray-value-similarity Gaussian function, and sigmaSpace is the standard deviation of the spatial Gaussian function. The value of d determines the filtering effect: a larger d makes the filtering coarse and keeps some noise points, while a smaller d makes the final result lose some boundary information, so in the embodiment of the invention d may be selected as 4; when d is 1, the window is a 3*3 pixel block, and the luminance values of the surrounding pixels determine the intensity of the central pixel. When sigmaColor is larger, pixels with more dissimilar gray values are still averaged together, and in the embodiment of the invention sigmaColor may be selected as 75; when sigmaSpace is larger, more distant pixels influence each other, and in the embodiment of the invention sigmaSpace may also be selected as 75. The specific d, sigmaColor, and sigmaSpace values can be set flexibly as needed.
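The weighted average described above can be sketched directly. This is a pure-Python illustration of the principle, not cv2.bilateralFilter itself; the window radius d and the two standard deviations mirror the parameters discussed, and borders are handled by clamping:

```python
import math

def bilateral_filter(img, d=1, sigma_color=75.0, sigma_space=75.0):
    # each output pixel is a weighted average of its neighbours; weights fall off
    # with both spatial distance and gray-value difference, preserving edges
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-d, d + 1):
                for dx in range(-d, d + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_space ** 2))
                    wc = math.exp(-((img[yy][xx] - img[y][x]) ** 2) / (2 * sigma_color ** 2))
                    num += ws * wc * img[yy][xx]
                    den += ws * wc
            out[y][x] = num / den
    return out
```

A uniform region passes through unchanged, since all gray-value weights are equal there.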
Example 3:
in order to accurately achieve the positioning of the target area, in the embodiment of the present invention, the extracting the region of interest from the second image includes:
and carrying out straight line detection and corner detection on the second image, determining each rectangular frame in the second image, and positioning a target area according to the area of each rectangular frame.
In the embodiment of the invention, the rectangular frames in the second image need to be detected first; to determine them accurately, the second image is first subjected to detection, and each rectangular frame in the second image is determined after the detection.
In addition, in the embodiment of the invention, in order to effectively determine each rectangular frame in the second image, thereby more accurately realizing the positioning of the target area, the acquired second image is firstly subjected to straight line detection and corner detection in the embodiment of the invention.
In the image detection, the specific line-detection steps are as follows: the line-detection function cv2.HoughLines(image, rho, theta, threshold) in the opencv library is called, where image is the input second image; rho is the distance resolution in pixels, generally 1; theta is the angular resolution, generally pi/180, which indicates that all possible angles are searched; and threshold is a preset vote threshold related to the length of the detected line segments: the greater the value of threshold, the longer the detected segments. Specifically, threshold is set to a small value and is not limited here.
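A minimal vote-accumulator sketch shows what the Hough transform does internally (the function name and bin layout here are illustrative assumptions, not the OpenCV implementation):

```python
import math

def hough_votes(points, width, height, n_theta=180):
    # each edge point (x, y) votes for every line rho = x*cos(theta) + y*sin(theta)
    # passing through it; rho is shifted by the image diagonal so bins are non-negative
    diag = int(math.hypot(width, height)) + 1
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = t * math.pi / n_theta  # angular resolution pi / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta))) + diag
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc
```

The bin with the most votes corresponds to the strongest line; the threshold parameter of cv2.HoughLines simply discards bins with fewer votes.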
In the embodiment of the invention, after the straight lines are detected, corner detection is performed on the image. The specific corner-detection steps are as follows: the corner-detection function cv2.cornerHarris(src, blockSize, ksize, k) in the opencv library is called, where src is the input image; blockSize is the neighborhood size used in corner detection; ksize is the aperture of the Sobel derivative used to compute gradients; and k is the Harris detector parameter, typically between 0.04 and 0.06, where the smaller k is, the more corners are detected. In the embodiment of the present invention, k may be selected as 0.048, and the specific value of k is not limited here.
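The Harris response R = det(M) − k·trace(M)² that cv2.cornerHarris computes can be sketched with central-difference gradients and a fixed 3*3 window. This is an illustrative pure-Python sketch; OpenCV uses Sobel derivatives with aperture ksize and a blockSize-sized window instead:

```python
def harris_response(img, k=0.05):
    # Harris corner response R = det(M) - k * trace(M)^2, where M is the
    # structure tensor of image gradients summed over a 3x3 window
    h, w = len(img), len(img[0])
    gx = [[(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
           for x in range(w)] for y in range(h)]
    gy = [[(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
           for x in range(w)] for y in range(h)]
    resp = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sxx = sxy = syy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    sxx += gx[yy][xx] ** 2
                    syy += gy[yy][xx] ** 2
                    sxy += gx[yy][xx] * gy[yy][xx]
            resp[y][x] = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
    return resp
```

Flat regions have zero gradients and therefore zero response; edges give a negative response and corners a positive one.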
Example 4:
in order to quickly implement positioning of a target area, in the foregoing embodiments, after the performing of straight line detection and corner detection on the second image, before determining each rectangular frame in the second image, the method further includes:
and performing expansion treatment and corrosion treatment on the second image.
In the embodiment of the invention, after the straight lines and corner points in the second image are detected, the second image is subjected to expansion (dilation) processing and corrosion (erosion) processing. Because the two-dimensional code contains many small rectangular frames, this step avoids subsequent errors in determining the target area and reduces the workload of determining the rectangular frames.
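Dilation followed by erosion (a morphological closing) merges the many small squares into one solid region. A binary sketch with a 3*3 structuring element, clamped at the borders, might look like this (an illustration of the operation, not the patent's code):

```python
def dilate(img):
    # a pixel becomes 1 if any pixel in its 3x3 neighbourhood is 1
    h, w = len(img), len(img[0])
    return [[1 if any(img[yy][xx]
                      for yy in range(max(y - 1, 0), min(y + 2, h))
                      for xx in range(max(x - 1, 0), min(x + 2, w))) else 0
             for x in range(w)] for y in range(h)]

def erode(img):
    # a pixel stays 1 only if every pixel in its 3x3 neighbourhood is 1
    h, w = len(img), len(img[0])
    return [[1 if all(img[yy][xx]
                      for yy in range(max(y - 1, 0), min(y + 2, h))
                      for xx in range(max(x - 1, 0), min(x + 2, w))) else 0
             for x in range(w)] for y in range(h)]

def close_binary(img):
    # closing = dilation then erosion: fills small holes and gaps
    return erode(dilate(img))
```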
In order to achieve the target area positioning, on the basis of the foregoing embodiments, in an embodiment of the present invention, the determining each rectangular box in the second image includes:
and carrying out contour searching on the images subjected to the expansion treatment and the corrosion treatment, and obtaining each rectangular frame in the second image.
In order to accurately determine the rectangular frames found by line detection and corner detection, in the embodiment of the invention a contour-search function may be selected to determine the rectangular frames in the image. The specific implementation steps are as follows: the function cv2.findContours(src, mode, method) in the opencv library is called, where src is the input image; mode is the contour-retrieval mode, and RETR_EXTERNAL means that only the outer contours are detected; method is the contour-approximation method: one option stores all contour points, with the pixel-position difference between two adjacent points not exceeding 1, while another compresses elements in the horizontal, vertical, and diagonal directions and keeps only the endpoint coordinates of each direction. The specific approximation method chosen is not limited here.
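As an illustration of what the contour search yields in the end, namely the bounding rectangle of each outer blob, a connected-component sketch can stand in for cv2.findContours plus cv2.boundingRect (the helper below is an assumption; OpenCV traces boundaries rather than flood-filling):

```python
def bounding_boxes(img):
    # label 4-connected foreground components and return their bounding
    # rectangles as (x, y, w, h), one per outer blob
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                stack = [(y, x)]
                seen[y][x] = True
                ys, xs = [], []
                while stack:
                    cy, cx = stack.pop()
                    ys.append(cy)
                    xs.append(cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((min(xs), min(ys),
                              max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return boxes
```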
In order to locate the target area, in the embodiment of the present invention, based on the area of each rectangular frame, locating the target area includes:
according to the area of each rectangular frame, determining the area corresponding to the rectangular frames with the largest area and the set number of rectangular frames in the rectangular frames as a target area; or (b)
And determining the area corresponding to the rectangular frame with the area larger than the preset area threshold as a target area according to the area of each rectangular frame.
In order to accurately realize the positioning of the target area, in the embodiment of the present invention the area of each rectangular frame is determined. Because the target area is relatively large, the several rectangular frames with the largest areas can be selected: fewer frames may be selected to ensure positioning efficiency, or more frames to improve positioning accuracy. For example, the two rectangular frames with the largest areas may be selected, and the regions corresponding to the selected frames determined as the target area. The specific number of largest rectangular frames whose regions are selected as target regions is not limited here.
In the embodiment of the invention, an area threshold may also be stored in advance, and the region corresponding to any rectangular frame whose area is greater than the preset area threshold is determined as a target area. If the efficiency of determining the target area is to be improved, the area threshold can be set larger; if the accuracy is to be improved, the area threshold can be set smaller.
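Both selection strategies reduce to a few lines over (x, y, w, h) rectangles (the helper name and signature are illustrative assumptions):

```python
def pick_targets(boxes, top_k=None, min_area=None):
    # boxes are (x, y, w, h) rectangles: either keep the top_k largest by
    # area, or keep every box whose area exceeds the preset threshold
    if top_k is not None:
        return sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)[:top_k]
    return [b for b in boxes if b[2] * b[3] > min_area]
```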
Example 5:
in order to accurately achieve the positioning of the target area, in the embodiments of the present invention, the obtaining texture features of the low-frequency component image includes:
and obtaining the texture characteristics of the low-frequency classified image through local field enhancement mode texture characteristic extraction.
Local neighborhood intensity pattern (LNIP) texture feature extraction improves on local binary pattern (Local Binary Patterns, LBP) texture feature extraction. When the pixel value of a pixel is represented through LNIP texture feature extraction, for each pixel, not only the sign of the intensity difference between that pixel's value and the values of its adjacent pixels (that is, the magnitude relation between the pixel values) is considered, but also the signs of the differences between the central pixel's value and the values of those same adjacent pixels. As a result, LNIP texture feature extraction is more resistant to illumination changes, has stronger robustness, and extracts texture features better.
In LNIP texture feature extraction, for each pixel, the pixel value of that pixel and the pixel values of the remaining pixels in the 3*3 block centered on it are used, and the pixel value of the pixel is updated through sign-value extraction and magnitude-variation extraction. When updating the pixel value, the surrounding pixels are generally determined by the 3*3 block.
The 3*3 pixel blocks are determined as follows: starting from the upper-left corner of the second image, a 3*3 pixel block (9 pixels in total) is taken; the block is then translated left to right and top to bottom with a step of 1, traversing the whole image once.
When the sign value is extracted to determine the pixel value representing the central pixel, the central pixel of the 3*3 block is denoted I_c, and the remaining pixels are denoted I_1, I_2, ..., I_8. Pixels with odd subscripts are compared with the pixel values of their four adjacent pixels, and pixels with even subscripts are compared with the pixel values of their two adjacent pixels. Specifically, the adjacent pixels used in the comparison are determined as follows: taking i as 1, for I_{1+mod(i+5,7)} we have mod(i+5,7) = mod(6,7) = 6, so I_{1+mod(i+5,7)} is I_7; similarly, I_{1+mod(i+6,9)} is I_8, I_{i+1} is I_2, and I_{1+mod(i+2,8)} is I_4. The comparisons between the other pixels and their adjacent pixels are not repeated here.
In the comparison, if the pixel value of the pixel is greater than that of the adjacent pixel, the bit is determined as 1; if it is smaller, the bit is determined as 0. While the pixel is compared with its adjacent pixels, the central pixel is compared with the same adjacent pixels, and each comparison result yields a four-bit or two-bit binary number. After the comparison results are obtained, the two binary numbers are XORed; the specific implementation step is as follows:
D_i = XOR(B_{1,i}, B_{2,i})
where D_i is the binary number obtained after the XOR processing, B_{1,i} is the binary number obtained by comparing the i-th pixel with its determined adjacent pixels, and B_{2,i} is the binary number obtained by comparing the central pixel with the same adjacent pixels. After D_i is obtained, the number of 1s in D_i is compared with (1/2)M, where M is 4 if i is odd and 2 if i is even: if the number of 1s in D_i is greater than (1/2)M, the bit determined by the i-th pixel is 1; if it is smaller, the bit is 0. By this method, the sign-value part of the central pixel's value is determined. Taking the first pixel as an example: comparing its pixel value with the pixel values of its adjacent pixels (the second, fourth, seventh, and eighth pixels) gives the four-bit binary number 1100; comparing the central pixel with the same adjacent pixels of the first pixel gives 1011; the XOR of the two is 0111, which contains three 1s, and since 3 is greater than (1/2)M = 2, the bit determined by the first pixel is 1.
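The XOR-and-majority rule for a single bit can be sketched directly (the bit lists stand for the B_{1,i} and B_{2,i} comparison results; the helper name is an assumption):

```python
def lnip_sign_bit(b1, b2):
    # b1: comparison bits of pixel I_i against its M adjacent pixels
    # b2: comparison bits of the centre I_c against the same adjacent pixels
    d = [x ^ y for x, y in zip(b1, b2)]   # D_i = XOR(B_1i, B_2i)
    return 1 if sum(d) > len(d) / 2 else 0  # majority vote over the M bits
```

With the worked example above, B_{1,1} = 1100 and B_{2,1} = 1011 give XOR 0111, three ones out of four, so the bit is 1.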
When the amplitude variation value is extracted to determine the pixel value representing the central pixel point, the average deviation M_i between the pixel value I_i of each neighborhood pixel point (i = 1, 2, ..., 8) and the pixel values of its corresponding adjacent pixel points S_i is calculated (here the adjacent pixel points are the same adjacent pixel points used in sign value extraction, denoted by the symbol S_i), together with the average deviation T_c between the pixel values I_i of the neighborhood pixel points and the pixel value I_c of the central pixel point. The average deviation M_i is then compared with the threshold T_c (T_c is set as the threshold) to determine the value represented by the central pixel point I_c. The formula is as follows:
wherein M_i is the average deviation determined for the i-th pixel point, with M equal to 4 when i is odd and 2 when i is even; S_i(k) is the value of the k-th adjacent pixel point of the i-th pixel point; I_i is the value of the i-th pixel point; T_c is the average deviation between the central pixel point and its surrounding neighborhood pixel points; I_c is the pixel value of the central pixel point; and LNIP_M is the value determined by amplitude variation value extraction. The function sign(M_i, T_c) is determined as follows:
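The formulas themselves do not survive in this text. Based on the term definitions above (S_i(k), I_i, I_c, T_c, LNIP_M and the sign function), a plausible reconstruction, offered as an assumption rather than the patent's exact equations, is:

```latex
M_i = \frac{1}{M}\sum_{k=1}^{M}\bigl|S_i(k) - I_i\bigr|, \qquad
T_c = \frac{1}{8}\sum_{i=1}^{8}\bigl|I_i - I_c\bigr|

\operatorname{sign}(M_i, T_c) =
\begin{cases}
1, & M_i \ge T_c \\
0, & M_i < T_c
\end{cases}, \qquad
\mathrm{LNIP}_M = \sum_{i=1}^{8} \operatorname{sign}(M_i, T_c)\, 2^{\,i-1}
```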
in the embodiment of the invention, binary numbers representing pixel values, which are determined by sign value extraction and amplitude variation value extraction, are connected in series, and a texture feature histogram is represented by the result of the series connection, so that the extraction of texture features is realized.
Fig. 2 is a schematic diagram of the implementation process of LNIP texture feature extraction provided in an embodiment of the present invention, and is described by taking Fig. 2 as an example. First, 3×3 pixel blocks are traversed in sequence over the low-frequency component image; for each 3×3 pixel block, sign value extraction and amplitude variation value extraction are respectively performed to determine the binary numbers representing the pixel values; the binary numbers representing the pixel values are connected in series, and the result of the series connection represents a combined feature histogram; and the obtained combined feature histograms are connected into one feature vector, that is, the texture feature vector of the whole image, thereby implementing texture feature extraction.
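The series connection of the two code images into a combined feature vector can be sketched as follows. The assumption that each extraction produces an 8-bit code image (hence 256 bins) is illustrative; the patent's exact binning is not specified in this excerpt.

```python
import numpy as np

def lnip_histograms(lnip_s, lnip_m, bins=256):
    """Join the sign-value and amplitude-variation code images into one
    feature vector by histogramming each code image and connecting the
    histograms in series."""
    hs, _ = np.histogram(lnip_s, bins=bins, range=(0, bins))
    hm, _ = np.histogram(lnip_m, bins=bins, range=(0, bins))
    return np.concatenate([hs, hm])  # series connection of the two histograms
```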
Referring to fig. 3 for illustration, fig. 3 is a schematic diagram illustrating a process from determining an input image to an image obtained by wavelet transforming the image according to an embodiment of the present invention.
First, the leftmost image in fig. 3 (where left and right refer to left and right in the figure) is the acquired first image, that is, the image to be processed; the second image in fig. 3 is the image obtained after preprocessing, where the preprocessing includes bilateral filtering and gray level transformation; the last image in fig. 3 is the low-frequency component image obtained after wavelet transformation.
Example 6:
the image processing procedure provided by the embodiment of the present invention is described in detail below with reference to a specific embodiment.
Fig. 4 is a schematic diagram of a detailed implementation procedure of the image processing according to an embodiment of the present invention, where the procedure includes:
S401: a first image is acquired.
S402: and performing image preprocessing on the first image.
Wherein the image preprocessing includes: gamma conversion and bilateral filtering.
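The gamma conversion part of S402 can be sketched in a few lines of NumPy. An 8-bit grayscale input is assumed, and the exponent value 0.5 is illustrative rather than taken from the patent.

```python
import numpy as np

def gamma_transform(img, gamma=0.5):
    """Gamma (power-law) transformation: normalise to [0, 1], raise to
    the power gamma, rescale to [0, 255]. gamma < 1 brightens dark
    regions, gamma > 1 darkens them."""
    x = np.asarray(img, dtype=float) / 255.0
    return np.clip(255.0 * x ** gamma, 0, 255).astype(np.uint8)
```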
S403: and performing haar wavelet transformation on the image subjected to image preprocessing, extracting low-frequency components of the image subjected to image preprocessing, and obtaining a low-frequency component image.
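The low-frequency (LL) component of the single-level haar wavelet transform in S403 can be sketched with plain NumPy as the scaled sum of each non-overlapping 2×2 block. This is a minimal sketch assuming a grayscale image cropped to even dimensions; a production implementation would typically use a wavelet library instead.

```python
import numpy as np

def haar_ll(img):
    """Single-level haar wavelet low-frequency (LL) component: the sum of
    each non-overlapping 2x2 block divided by 2 (orthonormal scaling).
    Odd-sized images are cropped to even dimensions first."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 2.0
```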
S404: and obtaining the texture characteristics of the low-frequency classified images through LNIP texture characteristic extraction.
In an embodiment of the present invention, the LNIP texture feature extraction includes: and carrying out linear series connection on the results of sign value extraction and amplitude variation value extraction.
S405: and extracting the region of interest from the second image, and locating the target region.
Wherein the region of interest extraction comprises: straight line detection, corner detection and contour searching. In the embodiment of the invention, after straight line detection and corner detection, expansion (dilation) processing and corrosion (erosion) processing are performed on the image.
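The expansion and corrosion steps above can be sketched with plain NumPy binary morphology. The 3×3 square structuring element is an illustrative choice; in practice OpenCV's cv2.dilate and cv2.erode would typically be used.

```python
import numpy as np

def dilate(mask, k=3):
    """Binary expansion (dilation) with a k x k square structuring element:
    a pixel is set when any pixel in its k x k neighbourhood is set."""
    mask = np.asarray(mask, dtype=bool)
    pad = k // 2
    p = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary corrosion (erosion), expressed by duality as the complement
    of the dilation of the complement."""
    return ~dilate(~np.asarray(mask, dtype=bool), k)
```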
Example 7:
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention, where the apparatus includes:
a processing module 501, configured to perform graying processing on a first image to obtain a gray image of the first image;
an obtaining module 502, configured to extract a gray level image of the first image through wavelet transformation, obtain a low frequency component image, obtain texture features of the low frequency component image, and obtain a second image containing the texture features;
the processing module 501 is further configured to perform region of interest extraction on the second image, and locate a target region.
In a possible implementation manner, the processing module 501 is specifically configured to perform noise reduction processing on the first image.
In a possible implementation manner, the processing module 501 is specifically configured to obtain texture features of the low-frequency component image through local area enhancement mode texture feature extraction.
In a possible implementation manner, the processing module 501 is specifically configured to perform straight line detection and corner detection on the second image, determine each rectangular frame in the second image, and locate the target area according to the area of each rectangular frame.
In a possible implementation, the processing module 501 is specifically configured to perform an expansion process and an erosion process on the second image.
In a possible implementation manner, the processing module 501 is specifically configured to perform contour searching on the image after the expansion process and the corrosion process, and obtain each rectangular frame in the second image.
In a possible implementation manner, the processing module 501 is specifically configured to determine, according to an area of each rectangular frame, a region corresponding to a set number of rectangular frames with a largest area among the rectangular frames as a target region; or determining the area corresponding to the rectangular frame with the area larger than the preset area threshold as the target area according to the area of each rectangular frame.
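The two target-region selection alternatives described above can be sketched as follows. Boxes are assumed to be (x, y, w, h) tuples, and the function and parameter names (`top_k`, `min_area`) are illustrative, not from the patent.

```python
def locate_targets(boxes, top_k=None, min_area=None):
    """Select target regions from candidate rectangular frames: either keep
    the set number (top_k) of frames with the largest area, or keep frames
    whose area exceeds the preset threshold min_area."""
    by_area = sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)
    if top_k is not None:
        return by_area[:top_k]
    if min_area is not None:
        return [b for b in by_area if b[2] * b[3] > min_area]
    return by_area
```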
Example 8:
on the basis of the above embodiments, the embodiment of the present invention further provides an electronic device, as shown in fig. 6, including: a processor 601, a communication interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communication interface 602 and the memory 603 communicate with each other through the communication bus 604.
The memory 603 has stored therein a computer program which, when executed by the processor 601, causes the processor 601 to perform the steps of:
graying treatment is carried out on the first image to obtain a gray image of the first image;
extracting a gray level image of the first image through wavelet transformation to obtain a low-frequency component image, and obtaining texture features of the low-frequency component image to obtain a second image containing the texture features;
and extracting the region of interest from the second image and positioning a target region.
In one possible implementation manner, before the graying processing is performed on the first image to obtain a gray image of the first image, the method includes:
and carrying out noise reduction processing on the first image.
In a possible implementation manner, the acquiring the texture feature of the low-frequency component image includes:
and obtaining the texture characteristics of the low-frequency classified image through local field enhancement mode texture characteristic extraction.
In a possible implementation manner, the extracting the region of interest from the second image, and locating the target region includes:
and carrying out straight line detection and corner detection on the second image, determining each rectangular frame in the second image, and positioning a target area according to the area of each rectangular frame.
In a possible implementation manner, after the performing straight line detection and corner detection on the second image, before determining each rectangular box in the second image, the method further includes:
and performing expansion treatment and corrosion treatment on the second image.
In one possible implementation, the determining each rectangular box in the second image includes:
and carrying out contour searching on the images subjected to the expansion treatment and the corrosion treatment, and obtaining each rectangular frame in the second image.
In one possible implementation manner, the positioning the target area according to the area of each rectangular frame includes:
according to the area of each rectangular frame, determining the area corresponding to the rectangular frames with the largest area and the set number of rectangular frames in the rectangular frames as a target area; or (b)
And determining the area corresponding to the rectangular frame with the area larger than the preset area threshold as a target area according to the area of each rectangular frame.
Since the principle by which the electronic device solves the problem is similar to that of the image processing method, the implementation of the electronic device can refer to the implementation of the method, and repeated description is omitted.
The communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface 602 is used for communication between the electronic device and other devices described above.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit, a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
Example 9:
on the basis of the above embodiments, the embodiments of the present invention further provide a computer readable storage medium having stored therein a computer program executable by a processor, which when run on the processor, causes the processor to perform the steps of:
graying treatment is carried out on the first image to obtain a gray image of the first image;
extracting a gray level image of the first image through wavelet transformation to obtain a low-frequency component image, and obtaining texture features of the low-frequency component image to obtain a second image containing the texture features;
and extracting the region of interest from the second image and positioning a target region.
In one possible implementation manner, before the graying processing is performed on the first image to obtain a gray image of the first image, the method includes:
and carrying out noise reduction processing on the first image.
In a possible implementation manner, the acquiring the texture feature of the low-frequency component image includes:
and obtaining the texture characteristics of the low-frequency classified image through local field enhancement mode texture characteristic extraction.
In a possible implementation manner, the extracting the region of interest from the second image, and locating the target region includes:
and carrying out straight line detection and corner detection on the second image, determining each rectangular frame in the second image, and positioning a target area according to the area of each rectangular frame.
In a possible implementation manner, after the performing straight line detection and corner detection on the second image, before determining each rectangular box in the second image, the method further includes:
and performing expansion treatment and corrosion treatment on the second image.
In one possible implementation, the determining each rectangular box in the second image includes:
and carrying out contour searching on the images subjected to the expansion treatment and the corrosion treatment, and obtaining each rectangular frame in the second image.
In one possible implementation manner, the positioning the target area according to the area of each rectangular frame includes:
according to the area of each rectangular frame, determining the area corresponding to the rectangular frames with the largest area and the set number of rectangular frames in the rectangular frames as a target area; or (b)
And determining the area corresponding to the rectangular frame with the area larger than the preset area threshold as a target area according to the area of each rectangular frame.
Since the principle by which the computer-readable medium provided above solves the problem is similar to that of the image processing method, the steps implemented when the processor executes the computer program in the computer-readable medium can refer to the other embodiments, and repeated description is omitted.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (12)
1. An image processing method, the method comprising:
graying treatment is carried out on the first image to obtain a gray image of the first image;
extracting a gray level image of the first image through wavelet transformation to obtain a low-frequency component image, and obtaining texture features of the low-frequency component image to obtain a second image containing the texture features;
extracting a region of interest from the second image, and positioning a target region;
wherein the extracting the region of interest from the second image, and locating the target region includes:
performing straight line detection and corner detection on the second image, determining each rectangular frame in the second image, and positioning a target area according to the area of each rectangular frame;
after the straight line detection and the corner detection are performed on the second image, before each rectangular frame in the second image is determined, the method further includes:
and performing expansion treatment and corrosion treatment on the second image.
2. The method of claim 1, wherein before graying the first image to obtain a gray image of the first image, the method comprises:
and carrying out noise reduction processing on the first image.
3. The method of claim 1, wherein the acquiring texture features of the low frequency component image comprises:
and extracting texture features through a local field enhancement mode, and acquiring the texture features of the low-frequency component image.
4. The method of claim 1, wherein the determining each rectangular box in the second image comprises:
and carrying out contour searching on the images subjected to the expansion treatment and the corrosion treatment, and obtaining each rectangular frame in the second image.
5. The method of claim 1, wherein locating the target area based on the area of each rectangular box comprises:
according to the area of each rectangular frame, determining the area corresponding to the rectangular frames with the largest area and the set number of rectangular frames in the rectangular frames as a target area; or (b)
And determining the area corresponding to the rectangular frame with the area larger than the preset area threshold as a target area according to the area of each rectangular frame.
6. An image processing apparatus, characterized in that the apparatus comprises:
the processing module is used for carrying out graying processing on the first image to obtain a gray image of the first image;
the acquisition module is used for extracting the gray level image of the first image through wavelet transformation, acquiring a low-frequency component image, and acquiring texture features of the low-frequency component image to obtain a second image containing the texture features;
the processing module is further used for extracting the region of interest from the second image and positioning a target region;
the processing module is specifically configured to perform line detection and corner detection on the second image, determine each rectangular frame in the second image, and position a target area according to the area of each rectangular frame;
the processing module is specifically used for performing expansion processing and corrosion processing on the second image.
7. The apparatus according to claim 6, wherein the processing module is configured to perform noise reduction processing on the first image.
8. The apparatus according to claim 6, wherein the processing module is configured to obtain texture features of the low frequency component image by performing texture feature extraction in a local area enhancement mode.
9. The apparatus according to claim 6, wherein the processing module is specifically configured to perform contour search on the image after the expansion process and the corrosion process, and obtain each rectangular frame in the second image.
10. The apparatus according to claim 6, wherein the processing module is specifically configured to determine, according to an area of each rectangular frame, a region corresponding to a set number of rectangular frames with a largest area among the rectangular frames as the target region; or determining the area corresponding to the rectangular frame with the area larger than the preset area threshold as the target area according to the area of each rectangular frame.
11. An electronic device comprising at least a processor and a memory, the processor being adapted to perform the method of image processing according to any of claims 1-5 when executing a computer program stored in the memory.
12. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, performs the method of image processing according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011618995.XA CN112652004B (en) | 2020-12-31 | 2020-12-31 | Image processing method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112652004A CN112652004A (en) | 2021-04-13 |
CN112652004B true CN112652004B (en) | 2024-04-05 |
Family
ID=75366645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011618995.XA Active CN112652004B (en) | 2020-12-31 | 2020-12-31 | Image processing method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112652004B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101515366A (en) * | 2009-03-30 | 2009-08-26 | 西安电子科技大学 | Watershed SAR image segmentation method based on complex wavelet extraction mark |
CN103679173A (en) * | 2013-12-04 | 2014-03-26 | 清华大学深圳研究生院 | Method for detecting image salient region |
CN106529550A (en) * | 2016-10-25 | 2017-03-22 | 凌云光技术集团有限责任公司 | Multidimensional characteristic extraction method and device based on connected domain analysis |
CN108805018A (en) * | 2018-04-27 | 2018-11-13 | 淘然视界(杭州)科技有限公司 | Road signs detection recognition method, electronic equipment, storage medium and system |
CN108960012A (en) * | 2017-05-22 | 2018-12-07 | 中科创达软件股份有限公司 | Feature point detecting method, device and electronic equipment |
CN109711419A (en) * | 2018-12-14 | 2019-05-03 | 深圳壹账通智能科技有限公司 | Image processing method, device, computer equipment and storage medium |
CN110097046A (en) * | 2019-03-11 | 2019-08-06 | 上海肇观电子科技有限公司 | A kind of character detecting method and device, equipment and computer readable storage medium |
CN111079955A (en) * | 2019-12-05 | 2020-04-28 | 贵州电网有限责任公司 | GIS (geographic information System) equipment defect detection method based on X-ray imaging |
CN111445410A (en) * | 2020-03-26 | 2020-07-24 | 腾讯科技(深圳)有限公司 | Texture enhancement method, device and equipment based on texture image and storage medium |
CN111899292A (en) * | 2020-06-15 | 2020-11-06 | 北京三快在线科技有限公司 | Character recognition method and device, electronic equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5980555B2 (en) * | 2012-04-23 | 2016-08-31 | オリンパス株式会社 | Image processing apparatus, operation method of image processing apparatus, and image processing program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108805023B (en) | Image detection method, device, computer equipment and storage medium | |
CN110334762B (en) | Feature matching method based on quad tree combined with ORB and SIFT | |
CN108629343B (en) | License plate positioning method and system based on edge detection and improved Harris corner detection | |
CN112949767B (en) | Sample image increment, image detection model training and image detection method | |
CN107808161A (en) | A kind of Underwater targets recognition based on light vision | |
WO2021118463A1 (en) | Defect detection in image space | |
CN112085709A (en) | Image contrast method and equipment | |
CN111695373A (en) | Zebra crossing positioning method, system, medium and device | |
CN108229583B (en) | Method and device for fast template matching based on main direction difference characteristics | |
CN111696072A (en) | Color image line detection method, color image line detection device, electronic device, and storage medium | |
CN113255537A (en) | Image enhancement denoising method for identifying sailing ship | |
CN112649793A (en) | Sea surface target radar trace condensation method and device, electronic equipment and storage medium | |
CN117253150A (en) | Ship contour extraction method and system based on high-resolution remote sensing image | |
CN107710229B (en) | Method, device and equipment for recognizing shape in image and computer storage medium | |
CN117557565B (en) | Detection method and device for lithium battery pole piece | |
CN113191281B (en) | ORB (object oriented binary) feature extraction method based on region of interest and self-adaptive radius | |
Li et al. | A study of crack detection algorithm | |
CN116071625B (en) | Training method of deep learning model, target detection method and device | |
CN114550173A (en) | Image preprocessing method and device, electronic equipment and readable storage medium | |
CN112652004B (en) | Image processing method, device, equipment and medium | |
CN111178111A (en) | Two-dimensional code detection method, electronic device, storage medium and system | |
CN110633705A (en) | Low-illumination imaging license plate recognition method and device | |
CN115984712A (en) | Multi-scale feature-based remote sensing image small target detection method and system | |
CN111753573B (en) | Two-dimensional code image recognition method and device, electronic equipment and readable storage medium | |
CN114219831A (en) | Target tracking method and device, terminal equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||