CN106846353B - Density-based image processing method, image processing apparatus and device - Google Patents


Info

Publication number
CN106846353B
Authority
CN
China
Prior art keywords
image
edge
density
target
point space
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710004197.XA
Other languages
Chinese (zh)
Other versions
CN106846353A (en)
Inventor
刁梁
俞大海
Current Assignee
Midea Group Co Ltd
Guangdong Midea White Goods Technology Innovation Center Co Ltd
Original Assignee
Midea Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Midea Group Co Ltd
Priority to CN201710004197.XA
Publication of CN106846353A
Application granted
Publication of CN106846353B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/20: Image enhancement or restoration by the use of local operators
    • G06T 5/30: Erosion or dilatation, e.g. thinning

Abstract

The invention provides a density-based image processing method, an image processing apparatus, and a device with an image processing function. The density-based image processing method comprises the following steps: performing edge extraction processing on an image to be processed; convolving the image subjected to the edge extraction processing with a target convolution kernel according to a convolution step to obtain an edge density point space of the image subjected to the edge extraction processing; screening points in the edge density point space using a density threshold; acquiring a target connected domain in the screened edge density point space; and determining the boundary of the target object in the image according to the target connected domain. Through this technical scheme, non-target objects in the image can be accurately removed and the accuracy and speed of image recognition are improved, thereby ensuring both the accuracy and the real-time performance of image recognition.

Description

Density-based image processing method, image processing apparatus and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a density-based image processing method, a density-based image processing apparatus, and a device having an image processing function.
Background
In recent years, with the rapid development of artificial intelligence and big data technology, more and more products have begun to develop towards intelligence. Compared with non-intelligent products, intelligent products offer more powerful functions and a more comfortable user experience. Among the many intelligent directions, image recognition is a very important field: a complete image recognition system takes an image as its input, recognizes objects in the image by different methods, and finally outputs a recognition result. However, because images often contain complicated backgrounds and non-target objects that can greatly influence the recognition result, the image needs to be segmented before recognition to extract a clean image and remove the complicated background and the non-target objects.
At present, mainstream image segmentation methods mostly adopt threshold segmentation, color gradient segmentation, and edge segmentation, and a large number of general-purpose segmentation methods have been derived from these principles. However, for images with complex, changeable textures and large color variations, the segmentation effect of the prior art is poor.
Current image segmentation methods are mainly based on two approaches: edge-based segmentation and region-based segmentation. If complex and irregular cross textures exist in an image, edge-based segmentation cannot accurately remove the non-target objects in the image; region-based segmentation algorithms, in turn, often produce segmentation errors due to the diversity of image contents, and their segmentation time is too long for real-time use, so non-target objects cannot be removed automatically.
Therefore, how to accurately remove non-target objects in an image and improve the accuracy and speed of image recognition, thereby ensuring the accuracy and real-time performance of image recognition, has become a technical problem to be solved urgently.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art or the related art.
To this end, it is an object of the invention to propose a density-based image processing method.
Another object of the present invention is to provide an image processing apparatus based on density.
It is a further object of the invention to propose a device with image processing functionality.
To achieve at least one of the above objects, according to an embodiment of a first aspect of the present invention, there is provided a density-based image processing method including: carrying out edge extraction processing on an image to be processed; performing convolution on the image subjected to the edge extraction processing by using a target convolution kernel according to convolution step length to obtain an edge density point space; screening points in the edge density point space using a density threshold; acquiring a target connected domain in the screened edge density point space; and determining the boundary of the target object in the image according to the target connected domain.
According to the density-based image processing method, the edge density point space of the image is obtained through the convolution principle, and the points in the edge density point space are screened with the density threshold as the main judgment basis to determine the boundary of the target object in the image. Even when complex and irregular cross textures or diverse image contents exist in the image, for example in a food image, non-target objects can be accurately removed and the image can be accurately segmented, improving the accuracy of image recognition. In addition, the algorithm is highly flexible: the speed and effect of image processing can be adjusted by tuning the target convolution kernel, the convolution step, and the density threshold, so that both the accuracy and the real-time performance of image recognition are ensured.
The density-based image processing method according to the above-described embodiment of the present invention may further have the following technical features:
according to an embodiment of the present invention, before the convolving the image subjected to the edge extraction processing, the method further includes: performing boundary extension processing on the image subjected to the edge extraction processing to convolve the image subjected to the boundary extension processing, wherein when the space of the image subjected to the edge extraction processing is p (x, y), the space of the image subjected to the boundary extension processing is H (x, y),
H(x, y) = p(x, y), when 0 < x ≤ p.rows and 0 < y ≤ p.cols; H(x, y) = 0 otherwise (the extended border),
wherein p.rows is the width of the image subjected to the edge extraction processing, and p.cols is the height of the image subjected to the edge extraction processing.
According to the density-based image processing method provided by the embodiment of the invention, by carrying out the boundary extension processing, the point beyond the boundary can be avoided from being convoluted when the image is convoluted, so that the accuracy and reliability of convolution are ensured.
According to one embodiment of the present invention, when the target convolution kernel is h (x, y), (0 < x < h.rows, 0 < y < h.cols), h.rows is the width of the target convolution kernel, and h.cols is the height of the target convolution kernel, the edge density point space is Con (x, y),
Con(x, y) = Σ(i = 0 to h.rows - 1) Σ(j = 0 to h.cols - 1) H(x + i, y + j) · h(i, j), when x and y are both multiples of the step s (x = k·s, y = k·s); Con(x, y) = 0 otherwise,
wherein s is the convolution step length, k is a positive integer, x belongs to (0, H.rows), and y belongs to (0, H.cols).
According to the density-based image processing method provided by the embodiment of the invention, since the edge density point space conforms to the above formula, the width and height of the edge density point space obtained after convolution are consistent with those of the original image, so that the boundary of the target object in the image can be determined accurately. In addition, the speed of calculating the edge density point space can be controlled by setting a proper convolution step: a larger convolution step takes less time but yields a sparser density point space.
According to an embodiment of the present invention, the edge density point space after screening is Cone (x, y):
Cone(x, y) = Con(x, y), when Con(x, y) ≥ e; Cone(x, y) = 0, when Con(x, y) < e,
where Con (x, y) is the edge density point space and e is the density threshold.
According to the density-based image processing method, the screened edge density point space is obtained by screening the points in the edge density point space, so that non-target objects in the image are removed, the identified image is ensured to be purer, and the accuracy of image identification is further ensured.
According to an embodiment of the present invention, the acquiring a target connected domain in a filtered edge density point space specifically includes: performing expansion processing on the screened edge density point space by using an expansion window so as to connect adjacent points in the screened edge density point space into a connected domain; and selecting the connected domain with the largest area as the target connected domain.
According to the density-based image processing method of the embodiment of the invention, since the area where the target object is located has the largest number of adjacent points, the boundary of the target object can be determined by connecting the adjacent points. In addition, the expansion processing is carried out on the screened edge density point space, so that non-target objects in the image are filtered more comprehensively, and the finally determined target objects are ensured to be purer.
According to one embodiment of the invention, the side length of the dilation window is greater than half the convolution step size.
According to the density-based image processing method provided by the embodiment of the invention, because the side length of the expansion window is greater than half of the convolution step length, adjacent points in the expanded density point space are connected into a connected domain, and at the moment, the connected domain with the largest area corresponds to the target object.
According to an embodiment of a second aspect of the present invention, there is provided a density-based image processing apparatus including: the edge extraction unit is used for carrying out edge extraction processing on the image to be processed; the convolution unit is used for performing convolution on the image subjected to the edge extraction processing by using a target convolution kernel according to the convolution step length to obtain an edge density point space; a screening unit for screening points in the edge density point space using a density threshold; the acquisition unit is used for acquiring a target connected domain in the edge density point space after screening; and the determining unit is used for determining the boundary of the target object in the image according to the target connected domain.
According to the density-based image processing device of the embodiment of the invention, the edge density point space of the image is obtained through the convolution principle, and the points in the edge density point space are screened with the density threshold as the main judgment basis to determine the boundary of the target object in the image. Even when complex and irregular cross textures or diverse image contents exist in the image, for example in a food image, non-target objects can be accurately removed and the image can be accurately segmented, further improving the accuracy of image recognition. In addition, the algorithm is highly flexible: the speed and effect of image processing can be adjusted by tuning the target convolution kernel, the convolution step, and the density threshold, so that both the accuracy and the real-time performance of image recognition are ensured.
According to an embodiment of the present invention, further comprising: a boundary extension unit configured to perform boundary extension processing on the image subjected to the edge extraction processing to convolve the image subjected to the boundary extension processing, wherein when a space of the image subjected to the edge extraction processing is p (x, y), the space of the image subjected to the boundary extension processing is H (x, y),
H(x, y) = p(x, y), when 0 < x ≤ p.rows and 0 < y ≤ p.cols; H(x, y) = 0 otherwise (the extended border),
wherein p.rows is the width of the image subjected to the edge extraction processing, and p.cols is the height of the image subjected to the edge extraction processing.
According to the density-based image processing device of the embodiment of the invention, by carrying out the boundary extension processing, the point beyond the boundary can be avoided from being convoluted when the image is convoluted, so that the accuracy and the reliability of convolution are ensured.
According to one embodiment of the present invention, when the target convolution kernel is h (x, y), (0 < x < h.rows, 0 < y < h.cols), h.rows is the width of the target convolution kernel, and h.cols is the height of the target convolution kernel, the edge density point space is Con (x, y),
Con(x, y) = Σ(i = 0 to h.rows - 1) Σ(j = 0 to h.cols - 1) H(x + i, y + j) · h(i, j), when x and y are both multiples of the step s (x = k·s, y = k·s); Con(x, y) = 0 otherwise,
wherein s is the convolution step length, k is a positive integer, x belongs to (0, H.rows), and y belongs to (0, H.cols).
According to the density-based image processing device provided by the embodiment of the invention, since the edge density point space conforms to the above formula, the width and height of the edge density point space obtained after convolution are consistent with those of the original image, so that the boundary of the target object in the image can be determined accurately. In addition, the speed of calculating the edge density point space can be controlled by setting a proper convolution step: a larger convolution step takes less time but yields a sparser density point space.
According to an embodiment of the present invention, the edge density point space after screening is Cone (x, y):
Cone(x, y) = Con(x, y), when Con(x, y) ≥ e; Cone(x, y) = 0, when Con(x, y) < e,
where Con (x, y) is the edge density point space and e is the density threshold.
According to the density-based image processing device provided by the embodiment of the invention, the screened edge density point space is obtained by screening the points in the edge density point space, so that non-target objects in the image are removed, the identified image is ensured to be purer, and the accuracy of image identification is further ensured.
According to one embodiment of the present invention, the acquisition unit includes: the connecting subunit is used for performing expansion processing on the screened edge density point space by using an expansion window so as to connect adjacent points in the screened edge density point space into a connected domain; and the selecting subunit is used for selecting the connected domain with the largest area as the target connected domain.
According to the density-based image processing apparatus of the embodiment of the present invention, since the area where the target object is located has the largest number of adjacent points, the boundary of the target object can be determined by connecting the adjacent points. In addition, the expansion processing is carried out on the screened edge density point space, so that non-target objects in the image are filtered more comprehensively, and the finally determined target objects are ensured to be purer.
According to one embodiment of the invention, the side length of the dilation window is greater than half the convolution step size.
According to the density-based image processing device of the embodiment of the invention, since the side length of the expansion window is greater than half of the convolution step length, adjacent points in the density point space after expansion processing are connected into a connected domain, and at this time, the connected domain with the largest area corresponds to the target object.
According to an embodiment of the third aspect of the present invention, there is provided an apparatus having an image processing function, including the density-based image processing apparatus according to any one of the above technical solutions. The apparatus therefore has the same technical effects as that image processing apparatus, and details are not repeated here.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 shows a schematic flow diagram of a density-based image processing method according to an embodiment of the invention;
FIG. 2 shows a schematic flow diagram of a density-based image processing method according to another embodiment of the invention;
FIG. 3 shows a schematic block diagram of a density-based image processing apparatus according to an embodiment of the present invention;
fig. 4 shows a schematic block diagram of an apparatus having an image processing function according to an embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Example one
As shown in fig. 1, a density-based image processing method according to an embodiment of the present invention includes:
and 102, performing edge extraction processing on the image to be processed.
And 104, performing convolution on the image subjected to the edge extraction processing by using a target convolution kernel according to a convolution step size to obtain an edge density point space.
Wherein, when the target convolution kernel is h (x, y), (0 < x < h.rows, 0 < y < h.cols), h.rows is the width of the target convolution kernel and h.cols is the height of the target convolution kernel, the edge density point space is Con (x, y),
Con(x, y) = Σ(i = 0 to h.rows - 1) Σ(j = 0 to h.cols - 1) p(x + i, y + j) · h(i, j), when x and y are both multiples of the step s (x = k·s, y = k·s); Con(x, y) = 0 otherwise,
p (x, y) represents the space of the image subjected to the edge extraction process, s is the convolution step, k is a positive integer, x belongs to (0, H.rows), and y belongs to (0, H.cols).
In order to ensure the speed of calculating the edge density point space, a proper convolution step can be set: a larger convolution step takes less time but yields a sparser density point space.
For example, the target convolution kernel is a convolution kernel of all 1 s. Or the target convolution kernel may be a convolution kernel of another function, such as a gaussian function.
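With an all-ones kernel, the convolution at each sampled point is simply the count of edge points inside the window. The sketch below illustrates this strided window sum over a binary edge map (the function name, array sizes, and parameter values are illustrative choices, not taken from the patent):

```python
import numpy as np

# Strided all-ones convolution over a binary edge map: each sampled
# point of the density space holds the number of edge points inside a
# kernel_size x kernel_size window anchored at that point.
def edge_density_space(edge_map, kernel_size, step):
    rows, cols = edge_map.shape
    density = np.zeros_like(edge_map, dtype=np.int64)
    # Evaluate the window sum only at multiples of the step, so a larger
    # step yields a sparser (and faster to compute) density point space.
    for x in range(0, rows - kernel_size + 1, step):
        for y in range(0, cols - kernel_size + 1, step):
            density[x, y] = edge_map[x:x + kernel_size, y:y + kernel_size].sum()
    return density

edges = np.zeros((8, 8), dtype=np.int64)
edges[2:5, 2:5] = 1            # a dense 3x3 cluster of edge points
d = edge_density_space(edges, kernel_size=3, step=2)
print(d[2, 2])                 # window fully inside the cluster -> 9
```

The density space keeps the input's width and height, matching the text's remark that the convolved space stays consistent with the original image while becoming sparser as the step grows.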
Step 106, screening the points in the edge density point space using a density threshold.
If the density threshold is e, the edge density point space Cone (x, y) after screening is:
Cone(x, y) = Con(x, y), when Con(x, y) ≥ e; Cone(x, y) = 0, when Con(x, y) < e,
the screened edge density point space is obtained by screening points in the edge density point space, so that non-target objects in the image are removed, the identified image is ensured to be purer, and the accuracy of image identification is ensured.
Step 108, acquiring a target connected domain in the screened edge density point space.
Preferably, step 108 specifically comprises: performing expansion processing on the screened edge density point space by using an expansion window so as to connect adjacent points in the screened edge density point space into a connected domain; and selecting the connected domain with the largest area as the target connected domain.
Since the area where the target object is located has the largest number of adjacent points, the boundary of the target object can be determined by connecting the adjacent points. In addition, the expansion processing is carried out on the screened edge density point space, so that non-target objects in the image are filtered more comprehensively, and the finally determined target objects are ensured to be purer.
Preferably, the side length of the dilation window is greater than half the convolution step.
Because the side length of the expansion window is larger than half of the convolution step length, adjacent points in the density point space after expansion processing are connected into a connected domain, and at the moment, the connected domain with the largest area corresponds to the target object.
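A pure-numpy sketch of this step: dilate the screened density points with a square window, then take the largest 4-connected region as the target connected domain. The helpers stand in for standard dilation and connected-component labeling; all names and the toy point layout are illustrative, not from the patent.

```python
import numpy as np
from collections import deque

# Square dilation: grow each nonzero point into a (2*half+1)-sided window.
def dilate(mask, half):
    out = np.zeros_like(mask)
    for x, y in zip(*np.nonzero(mask)):
        out[max(0, x - half):x + half + 1, max(0, y - half):y + half + 1] = 1
    return out

# Breadth-first search over 4-neighbours; return the largest region's points.
def largest_component(mask):
    rows, cols = mask.shape
    seen = np.zeros(mask.shape, dtype=bool)
    best = []
    for sx, sy in zip(*np.nonzero(mask)):
        if seen[sx, sy]:
            continue
        comp, queue = [], deque([(sx, sy)])
        seen[sx, sy] = True
        while queue:
            x, y = queue.popleft()
            comp.append((x, y))
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                if 0 <= nx < rows and 0 <= ny < cols and mask[nx, ny] and not seen[nx, ny]:
                    seen[nx, ny] = True
                    queue.append((nx, ny))
        if len(comp) > len(best):
            best = comp
    return best

pts = np.zeros((10, 10), dtype=np.int64)
pts[1, 1] = 1                           # isolated non-target point
pts[5, 5] = pts[5, 7] = pts[7, 5] = 1   # sparse cluster, spaced 2 apart
grown = dilate(pts, half=1)             # window side 3 > half the spacing of 2
comp = largest_component(grown)
print(len(comp))                        # -> 21, the merged cluster
```

Because the window side (3) exceeds half the point spacing (2), the dilated cluster points merge into one connected domain while the isolated point stays a small separate region, mirroring the rule that the window side must exceed half the convolution step.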
Step 110, determining the boundary of the target object in the image according to the target connected domain.
In the technical scheme, an edge density point space of the image is obtained through a convolution principle, and points in the edge density point space are screened by taking a density threshold value as a main judgment basis to determine the boundary of a target object in the image. In addition, the algorithm of the scheme has high flexibility, and the speed and the effect of image processing can be adjusted by adjusting the target convolution kernel, the convolution step length and the density threshold, so that the accuracy and the real-time performance of image identification are ensured.
Example two
As shown in fig. 2, a density-based image processing method according to another embodiment of the present invention includes:
step 202, performing edge extraction processing on the image to be processed.
Step 202 specifically includes: and converting the image to be processed into a gray image, and performing edge extraction processing on the gray image by using an edge extraction algorithm.
For example, after loading an image to be processed (such as a food image), the image is converted into an 8-bit grayscale image, and the edge of the grayscale image is extracted using the canny algorithm.
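A toy sketch of this step, using a simple gradient-magnitude threshold as a stand-in for the Canny algorithm named in the text (the luminance weights, the 0.5 threshold, and the function names are illustrative assumptions, and the image is a float array rather than an 8-bit one):

```python
import numpy as np

# Convert RGB to grayscale with standard luminance weights (assumption).
def to_gray(rgb):
    return rgb @ np.array([0.299, 0.587, 0.114])

# Mark edge points where the gradient magnitude exceeds a threshold;
# a simple stand-in for Canny, which adds smoothing and hysteresis.
def extract_edges(gray, thresh):
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:] = np.abs(np.diff(gray, axis=1))   # horizontal gradient
    gy[1:, :] = np.abs(np.diff(gray, axis=0))   # vertical gradient
    return (np.hypot(gx, gy) > thresh).astype(np.int64)

img = np.zeros((6, 6, 3))
img[:, 3:] = 1.0               # bright right half -> vertical edge at x = 3
edges = extract_edges(to_gray(img), thresh=0.5)
print(edges[:, 3].sum())       # the whole edge column fires -> 6
```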
Step 204, performing boundary extension processing on the image subjected to the edge extraction processing.
When the space of the image subjected to the edge extraction processing is p (x, y), the space of the image subjected to the boundary extension processing is H (x, y),
H(x, y) = p(x, y), when 0 < x ≤ p.rows and 0 < y ≤ p.cols; H(x, y) = 0 otherwise (the extended border),
wherein p.rows is the width of the image subjected to the edge extraction processing, and p.cols is the height of the image subjected to the edge extraction processing.
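A minimal sketch of the boundary extension, assuming a zero border wide enough that every convolution window stays inside the extended image (zero padding and the border width are my reading of the extension rule; names are illustrative):

```python
import numpy as np

# Surround the edge image with a zero border of kernel_size - 1 pixels
# so no convolution window reaches past the original boundary.
def extend_boundary(p, kernel_size):
    pad = kernel_size - 1
    h = np.zeros((p.shape[0] + 2 * pad, p.shape[1] + 2 * pad), dtype=p.dtype)
    h[pad:pad + p.shape[0], pad:pad + p.shape[1]] = p
    return h

p = np.ones((4, 4), dtype=np.int64)
h = extend_boundary(p, kernel_size=3)
print(h.shape)                 # (8, 8): a 2-pixel zero border on each side
```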
Step 206, convolving the image subjected to the boundary extension processing with the target convolution kernel according to the convolution step to obtain an edge density point space Con (x, y).
Preferably, the edge density point space Con (x, y) is:
Con(x, y) = Σ(i = 0 to h.rows - 1) Σ(j = 0 to h.cols - 1) H(x + i, y + j) · h(i, j), when x and y are both multiples of the step s (x = k·s, y = k·s); Con(x, y) = 0 otherwise,
wherein s is convolution step length, k is a positive integer, x belongs to (0, H.rows), and y belongs to (0, H.cols).
In addition, in order to ensure the speed of calculating the edge density point space, a proper convolution step can be set: a larger convolution step takes less time but yields a sparser density point space.
Step 208, screening points in the edge density point space using a density threshold (e.g., 40000).
If the density threshold is e, the edge density point space Cone (x, y) after screening is:
Cone(x, y) = Con(x, y), when Con(x, y) ≥ e; Cone(x, y) = 0, when Con(x, y) < e,
and step 210, acquiring a target connected domain in the screened edge density point space.
Preferably, step 210 specifically includes: performing expansion processing on the screened edge density point space using an expansion window; connecting adjacent points in the expanded edge density point space into a connected domain; and selecting the connected domain with the largest area as the target connected domain.
Preferably, the side length of the dilation window is greater than half the convolution step size. For example, the convolution step size is 15 and the dilation window side length is 8.
Step 212, determining the boundary of the target object in the image according to the target connected domain.
The rectangular boundary of the target connected domain is the boundary of the target object.
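The rectangular boundary can be read off as the minimum and maximum row and column of the target connected domain's points, as in this sketch (the helper name and sample points are illustrative):

```python
# Axis-aligned bounding rectangle of a connected domain, given as
# (min_row, min_col, max_row, max_col) of its point coordinates.
def bounding_rect(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

domain = [(4, 4), (4, 6), (6, 4), (6, 6), (5, 5)]
print(bounding_rect(domain))   # -> (4, 4, 6, 6)
```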
In the technical scheme, an edge density point space of the image is obtained through a convolution principle, and points in the edge density point space are screened by taking a density threshold value as a main judgment basis to determine the boundary of a target object in the image. In addition, the algorithm of the scheme has high flexibility, and the speed and the effect of image processing can be adjusted by adjusting the target convolution kernel, the convolution step length and the density threshold, so that the accuracy and the real-time performance of image identification are ensured.
Fig. 3 shows a schematic block diagram of a density-based image processing apparatus according to an embodiment of the present invention.
As shown in fig. 3, the density-based image processing apparatus 300 according to an embodiment of the present invention includes: an edge extraction unit 302, a convolution unit 304, a screening unit 306, an acquisition unit 308, and a determination unit 310.
An edge extraction unit 302, configured to perform edge extraction processing on an image to be processed; a convolution unit 304, configured to convolve the image subjected to the edge extraction processing by using a target convolution kernel with a convolution step size to obtain an edge density point space; a screening unit 306 for screening points in the edge density point space using a density threshold; an obtaining unit 308, configured to obtain a target connected domain in the screened edge density point space; a determining unit 310, configured to determine a boundary of the target object in the image according to the target connected component.
According to the density-based image processing device 300 of the embodiment of the invention, the edge density point space of the image is obtained through the convolution principle, and the points in the edge density point space are screened with the density threshold as the main judgment basis to determine the boundary of the target object in the image. Even when complex and irregular cross textures or diverse image contents exist in the image, for example in a food image, non-target objects can be accurately removed and the image can be accurately segmented, further improving the accuracy of image recognition. In addition, the algorithm is highly flexible: the speed and effect of image processing can be adjusted by tuning the target convolution kernel, the convolution step, and the density threshold, so that both the accuracy and the real-time performance of image recognition are ensured.
According to an embodiment of the present invention, further comprising: a boundary extension unit 312, configured to perform boundary extension processing on the image subjected to the edge extraction processing to perform convolution on the image subjected to the boundary extension processing, where when a space of the image subjected to the edge extraction processing is p (x, y), the space of the image subjected to the boundary extension processing is H (x, y),
H(x, y) = p(x, y), when 0 < x ≤ p.rows and 0 < y ≤ p.cols; H(x, y) = 0 otherwise (the extended border),
wherein p.rows is the width of the image subjected to the edge extraction processing, and p.cols is the height of the image subjected to the edge extraction processing.
According to the density-based image processing apparatus 300 of the embodiment of the present invention, by performing the boundary extension processing, it is possible to avoid the convolution to a point outside the boundary when the image is convolved, thereby ensuring the accuracy and reliability of the convolution.
According to one embodiment of the present invention, when the target convolution kernel is h (x, y), (0 < x < h.rows, 0 < y < h.cols), h.rows is the width of the target convolution kernel, and h.cols is the height of the target convolution kernel, the edge density point space is Con (x, y),
Con(x, y) = Σ(i = 0 to h.rows - 1) Σ(j = 0 to h.cols - 1) H(x + i, y + j) · h(i, j), when x and y are both multiples of the step s (x = k·s, y = k·s); Con(x, y) = 0 otherwise,
wherein s is the convolution step length, k is a positive integer, x belongs to (0, H.rows), and y belongs to (0, H.cols).
According to the density-based image processing apparatus 300 of the embodiment of the present invention, since the edge density point space conforms to the above formula, the width and height of the edge density point space obtained after convolution are consistent with those of the original image, so that the boundary of the target object in the image can be determined accurately. In addition, the speed of calculating the edge density point space can be controlled by setting a proper convolution step: a larger convolution step takes less time but yields a sparser density point space.
According to an embodiment of the present invention, the edge density point space after screening is Cone (x, y):
Cone(x, y) = Con(x, y), when Con(x, y) ≥ e; Cone(x, y) = 0, when Con(x, y) < e,
where Con (x, y) is the edge density point space and e is the density threshold.
According to the density-based image processing device 300 of the embodiment of the invention, the screened edge density point space is obtained by screening the points in the edge density point space, so that non-target objects in the image are removed, the identified image is ensured to be purer, and the accuracy of image identification is ensured.
According to an embodiment of the present invention, the obtaining unit 308 includes: a connection subunit 3082, configured to perform expansion processing on the screened edge density point space by using an expansion window, so as to connect adjacent points in the screened edge density point space into a connected domain; and the selecting subunit 3084 is used for selecting the connected domain with the largest area as the target connected domain.
According to the density-based image processing apparatus 300 of the embodiment of the present invention, since the area where the target object is located contains the largest number of adjacent points, the boundary of the target object can be determined by connecting those adjacent points. In addition, the expansion processing performed on the screened edge density point space filters out non-target objects more thoroughly, ensuring that the finally determined target object is cleaner.
According to one embodiment of the invention, the side length of the dilation window is greater than half the convolution step size.
According to the density-based image processing apparatus 300 of the embodiment of the present invention, since the side length of the expansion window is greater than half of the convolution step length, adjacent points in the expanded density point space are connected into a connected domain; the connected domain with the largest area then corresponds to the target object.
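The expansion and largest-connected-domain steps above can be sketched as square-window dilation followed by 4-connected component labelling. This is a minimal illustration, not the patented implementation: the window shape, the 4-connectivity, and the function name `largest_component` are assumptions; the patent only requires that the window side length exceed half the convolution step so that neighbouring density points merge.

```python
import numpy as np
from collections import deque

def largest_component(mask, win):
    """Dilate a binary density mask with a win x win square window, then
    return the mask of the largest 4-connected component (the assumed
    target connected domain)."""
    h, w = mask.shape
    r = win // 2
    dil = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            # dilation: a pixel turns on if any neighbour in the window is on
            if mask[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].any():
                dil[i, j] = 1
    labels = np.zeros((h, w), dtype=int)
    best, best_size, cur = 0, 0, 0
    for i in range(h):
        for j in range(w):
            if dil[i, j] and not labels[i, j]:
                cur += 1
                size = 0
                q = deque([(i, j)])
                labels[i, j] = cur
                while q:                      # BFS over one component
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and dil[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = cur
                            q.append((ny, nx))
                if size > best_size:
                    best, best_size = cur, size
    return (labels == best).astype(int) if best else np.zeros_like(mask)

# An isolated point loses to a 2x2 block: only the block survives.
m = np.zeros((5, 5), dtype=int)
m[0, 0] = 1
m[3:5, 3:5] = 1
out = largest_component(m, 1)
```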
Fig. 4 shows a schematic block diagram of an apparatus having an image processing function according to an embodiment of the present invention.
As shown in fig. 4, the apparatus 400 having an image processing function according to an embodiment of the present invention includes the density-based image processing apparatus 300 according to any one of the above technical solutions. The apparatus 400 therefore has the same technical effects as that image processing apparatus 300, which are not described again here.
The apparatus 400 having the image processing function includes, but is not limited to, a household appliance, a server, and a terminal.
The technical solution of the present invention has been described in detail above with reference to the accompanying drawings. With this solution, non-target objects in an image can be accurately removed, and the accuracy and speed of image recognition are improved, thereby ensuring both the accuracy and the real-time performance of image recognition.
In the present invention, the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance; the term "plurality" means two or more. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method of density-based image processing, comprising:
carrying out edge extraction processing on an image to be processed;
performing convolution on the image subjected to the edge extraction processing by using a target convolution kernel according to convolution step length to obtain an edge density point space;
screening points in the edge density point space using a density threshold;
acquiring a target connected domain in the screened edge density point space;
determining the boundary of a target object in the image according to the target connected domain; before the convolution is performed on the image subjected to the edge extraction processing, the method further includes:
performing boundary extension processing on the image subjected to the edge extraction processing to convolve the image subjected to the boundary extension processing,
wherein, when the space of the image subjected to the edge extraction processing is p(x, y), the space of the image subjected to the boundary extension processing is H(x, y),
Figure FDA0002154664340000011
H.rows is the width of the image subjected to the edge extraction processing, and H.cols is the height of the image subjected to the edge extraction processing;
when the target convolution kernel is h(x, y) (0 < x < h.rows, 0 < y < h.cols), h.rows is the width of the target convolution kernel, and h.cols is the height of the target convolution kernel, the edge density point space is Con(x, y),
Figure FDA0002154664340000012
wherein s is the convolution step length, k is a positive integer, x belongs to (0, H.rows), and y belongs to (0, H.cols).
2. The density-based image processing method according to claim 1,
the edge density point space after screening is Cone (x, y):
Figure FDA0002154664340000021
where Con (x, y) is the edge density point space and e is the density threshold.
3. The method according to claim 1 or 2, wherein the obtaining of the target connected component in the filtered edge density point space specifically comprises:
performing expansion processing on the screened edge density point space by using an expansion window so as to connect adjacent points in the screened edge density point space into a connected domain;
and selecting the connected domain with the largest area as the target connected domain.
4. The density-based image processing method according to claim 3,
the side length of the expansion window is greater than half of the convolution step.
5. A density-based image processing apparatus, comprising:
the edge extraction unit is used for carrying out edge extraction processing on the image to be processed;
the convolution unit is used for performing convolution on the image subjected to the edge extraction processing by using a target convolution kernel according to the convolution step length to obtain an edge density point space;
a screening unit for screening points in the edge density point space using a density threshold;
the acquisition unit is used for acquiring a target connected domain in the edge density point space after screening;
the determining unit is used for determining the boundary of the target object in the image according to the target connected domain;
a boundary extension unit for performing boundary extension processing on the image subjected to the edge extraction processing to convolve the image subjected to the boundary extension processing,
wherein, when the space of the image subjected to the edge extraction processing is p(x, y), the space of the image subjected to the boundary extension processing is H(x, y),
Figure FDA0002154664340000022
H.rows is the width of the image subjected to the edge extraction processing, and H.cols is the height of the image subjected to the edge extraction processing;
when the target convolution kernel is h(x, y) (0 < x < h.rows, 0 < y < h.cols), h.rows is the width of the target convolution kernel, and h.cols is the height of the target convolution kernel, the edge density point space is Con(x, y),
Figure FDA0002154664340000031
wherein s is the convolution step length, k is a positive integer, x belongs to (0, H.rows), and y belongs to (0, H.cols).
6. The density-based image processing apparatus according to claim 5,
the edge density point space after screening is Cone (x, y):
where Con (x, y) is the edge density point space and e is the density threshold.
7. The density-based image processing apparatus according to claim 5 or 6, wherein the acquisition unit includes:
the connecting subunit is used for performing expansion processing on the screened edge density point space by using an expansion window so as to connect adjacent points in the screened edge density point space into a connected domain;
and the selecting subunit is used for selecting the connected domain with the largest area as the target connected domain.
8. The density-based image processing apparatus according to claim 7,
the side length of the expansion window is greater than half of the convolution step.
9. An apparatus having an image processing function, characterized by comprising: the density-based image processing apparatus according to any one of claims 5 to 8.
CN201710004197.XA 2017-01-04 2017-01-04 Density-based image processing method, image processing apparatus and device Expired - Fee Related CN106846353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710004197.XA CN106846353B (en) 2017-01-04 2017-01-04 Density-based image processing method, image processing apparatus and device

Publications (2)

Publication Number Publication Date
CN106846353A CN106846353A (en) 2017-06-13
CN106846353B true CN106846353B (en) 2020-02-21

Family

ID=59116919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710004197.XA Expired - Fee Related CN106846353B (en) 2017-01-04 2017-01-04 Density-based image processing method, image processing apparatus and device

Country Status (1)

Country Link
CN (1) CN106846353B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503530B (en) * 2023-04-19 2023-12-05 钛玛科(北京)工业科技有限公司 Intermittent region extraction method based on image convolution

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745197A (en) * 2013-12-27 2014-04-23 深圳市捷顺科技实业股份有限公司 Detection method of license plate and device thereof
CA2932942A1 (en) * 2013-12-13 2015-06-18 Safran Non-intrusive measurement of the volume density of a phase in a part
CN105791824A (en) * 2016-03-09 2016-07-20 西安电子科技大学 Screen content coding predicting mode quick selection method based on edge point density
CN106204588A (en) * 2016-07-08 2016-12-07 腾讯科技(深圳)有限公司 A kind of image processing method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Graph-density-based visual word vocabulary for image retrieval; Lingyang Chun, et al.; Multimedia and Expo (ICME), 2014 IEEE International Conference; 2014-12-31; pp. 1-6 *
License plate location algorithm based on gray-scale variance and edge density; Zhang Haopeng, et al.; Chinese Journal of Scientific Instrument; 2011-05-31; Vol. 32, No. 5; pp. 1095-1102 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201221

Address after: 528311 Global Innovation Center, 4th Building and 2nd Floor, Meimei Industrial Avenue, Beijiao Town, Shunde District, Foshan City, Guangdong Province

Patentee after: GUANGDONG MEIDI WHITE HOUSEHOLD ELECTRICAL APPLIANCE TECHNOLOGY INNOVATION CENTER Co.,Ltd.

Patentee after: MIDEA GROUP Co.,Ltd.

Address before: 528311, 26-28, B District, Mei headquarters building, 6 Mei Road, Beijiao Town, Shunde District, Foshan, Guangdong.

Patentee before: MIDEA GROUP Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200221