CN112149674A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN112149674A
CN112149674A
Authority
CN
China
Prior art keywords
pixel
gray
image
value
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010908464.8A
Other languages
Chinese (zh)
Inventor
万成涛
陈彦宇
马雅奇
谭龙田
李春光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202010908464.8A priority Critical patent/CN112149674A/en
Publication of CN112149674A publication Critical patent/CN112149674A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

The embodiment of the invention discloses an image processing method and device, which are used for improving the image target extraction quality. The method comprises the following steps: determining a first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on an image to be processed; determining a plurality of pixel paths from a first pixel point in a second pixel set to each pixel point in the first pixel set; the first pixel point is any one pixel point in the second pixel set; the second pixel set is a set of the remaining pixel points except the first pixel set in the image to be processed; determining a gray difference value corresponding to each of the plurality of pixel paths, wherein the gray difference value is a gray difference value between a pixel point with a maximum gray value and a pixel point with a minimum gray value on each pixel path; and determining the minimum gray difference value in the gray difference values corresponding to the pixel paths as the gray value of the first pixel point.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
Image target extraction refers to segmenting an image target of interest from the background in a single image or a sequence of images. It is widely required in application scenarios such as human body detection, target tracking, action recognition, and various image/video processing tasks. Extracting the target object in an image quickly and accurately filters out unnecessary background information for subsequent image analysis processing or video analysis processing and reduces interference. Commonly used image target extraction methods include methods based on background modeling, frame difference, and deep learning model training. However, current image target extraction methods still need to be further optimized in terms of extraction quality.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, which are used for improving the image target extraction quality.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
determining a first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on an image to be processed;
determining a plurality of pixel paths from a first pixel point in a second pixel set to each pixel point in the first pixel set; the first pixel point is any one pixel point in the second pixel set; the second pixel set is a set of the remaining pixel points except the first pixel set in the image to be processed;
determining a gray difference value corresponding to each of the plurality of pixel paths, wherein the gray difference value is a gray difference value between a pixel point with a maximum gray value and a pixel point with a minimum gray value on each pixel path;
and determining the minimum gray difference value in the gray difference values corresponding to the pixel paths as the gray value of the first pixel point.
Optionally, before determining the first set of pixels, the method further includes:
converting an initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray image, and n is less than or equal to 256.
Optionally, the peripheral area is: the set of the pixel points located at the outermost layer of the image to be processed.
Optionally, the method further includes: determining a gray threshold;
setting the gray values of all pixel points with the gray values larger than the gray threshold value in the second pixel set as a first preset value;
setting the gray values of all pixel points in the second pixel set, of which the gray values are smaller than or equal to the gray threshold value, as a second preset value; the first preset value is greater than the second preset value.
Optionally, the first preset value is a gray value of 255; the second preset value is a gray value of 0.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
the determining unit is used for determining the first pixel set, wherein the first pixel set is a set of all pixel points in the peripheral area on the image to be processed;
the determining unit is further configured to determine a plurality of pixel paths between a first pixel point in the second pixel set and each pixel point in the first pixel set; the first pixel point is any one pixel point in the second pixel set; the second pixel set is a set of the remaining pixel points except the first pixel set in the image to be processed;
the processing unit is used for determining a gray difference value corresponding to each of the plurality of pixel paths, wherein the gray difference value is a gray difference value between a pixel point with the maximum gray value and a pixel point with the minimum gray value on each pixel path;
the processing unit is further configured to determine a minimum gray difference value of the multiple gray difference values corresponding to the multiple pixel paths as the gray value of the first pixel point.
Optionally, the apparatus further includes a conversion unit, configured to:
converting an initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray image, and n is less than or equal to 256.
Optionally, the processing unit is further configured to:
determining a gray threshold;
setting the gray values of all pixel points with the gray values larger than the gray threshold value in the second pixel set as a first preset value;
setting the gray values of all pixel points in the second pixel set, of which the gray values are smaller than or equal to the gray threshold value, as a second preset value; the first preset value is greater than the second preset value.
Optionally, the first preset value is a gray value of 255; the second preset value is a gray value of 0.
In a third aspect, an embodiment of the present invention provides an image processing apparatus, including:
a memory for storing computer instructions;
a processor, coupled to the memory, for executing the computer instructions in the memory to perform the method as provided by the first aspect above when executing the computer instructions.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the method as provided in the first aspect.
In a fifth aspect, embodiments of the present invention provide a computer program product, which when run on a computer causes the computer to perform the method as provided in the first aspect above.
The invention has the beneficial effects that: the invention provides an image processing method and device. A plurality of connection paths between a certain pixel point in a gray image and all pixel points in the peripheral area are first determined; for each of the plurality of connection paths, the difference between the maximum gray value and the minimum gray value of the pixel points on the path is determined; and the minimum value among the plurality of difference values is then determined as the minimum gray difference value corresponding to the pixel point. The invention determines the connectivity between a pixel point and the pixel points in the image boundary area by determining the minimum gray difference value of the pixel point, and thereby judges whether the pixel point belongs to the target object to be extracted from the image. By this method, the image target extraction precision can be accurate to the pixel level, so that the image target extraction quality is improved.
Drawings
Fig. 1 is a scene schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 3 is a diagram illustrating an image processing method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is obvious that the described embodiments are some, not all embodiments of the solution of the invention. All other embodiments obtained by a person skilled in the art without any inventive work based on the embodiments described in the present application are within the scope of the protection of the technical solution of the present invention.
As described above, methods based on background modeling, frame difference, and deep learning model training are commonly used for image target extraction. However, the extraction quality of the currently common image target extraction methods needs to be further improved.
In view of this, an embodiment of the present invention provides an image processing method, including: determining a first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on an image to be processed; determining a plurality of pixel paths from a first pixel point in a second pixel set to each pixel point in the first pixel set; the first pixel point is any one pixel point in the second pixel set; the second pixel set is a set of the remaining pixel points except the first pixel set in the image to be processed; determining a gray difference value corresponding to each of the plurality of pixel paths, wherein the gray difference value is a gray difference value between a pixel point with a maximum gray value and a pixel point with a minimum gray value on each pixel path; and determining the minimum gray difference value in the gray difference values corresponding to the pixel paths as the gray value of the first pixel point. The image processing method determines the connectivity of the pixel point and the pixel point in the image boundary region by determining the minimum gray difference value of the pixel point, and judges whether the pixel point belongs to a target object to be extracted in the image or not according to the connectivity, so that the image target extraction precision is accurate to the pixel level, and the image target extraction quality is improved.
The technical scheme provided by the embodiment of the invention is described in the following with the accompanying drawings of the specification.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application. The application scenario takes the example that the image processing device performs gray value processing on one pixel point M in one image. The set of pixel points in the dark region around the image in fig. 1 is the set of all pixel points in the peripheral region on the image to be processed. Another possible situation is that, in addition to taking the outermost layer of image pixels as a set of all pixels in the peripheral region, the outermost layer and pixels adjacent to the outermost layer may also be taken as a set of all pixels in the peripheral region, which is not limited in the present invention.
In fig. 1, the gray value of point M in the left image is 255 (white), and the gray value of all pixel points in the dark peripheral region of the image is 50. The left image is processed by the processor in the figure. First, the connection paths between the pixel point M and all pixel points in the peripheral region are determined. Taking the two pixel points a and b in the peripheral region as an example, there are a plurality of paths connecting the pixel point M to the pixel point a; two of them, path 101 and path 102, are indicated in the figure. Taking the path 101 as an example, the maximum gray value of the pixel points on the path is 255 and the minimum gray value is 50, so the difference between the maximum and minimum gray values on the path 101 is 205. After the processor calculates the differences between the maximum and minimum gray values on all connection paths from the pixel point M to the pixel point a, the minimum difference obtained by comparison is 205. The differences between the maximum and minimum gray values on all connection paths from the pixel point M to all the pixel points in the peripheral region are obtained in the same way, and the minimum of these values is taken as the minimum gray difference value of the pixel point M. The minimum gray difference value corresponding to every other pixel point is solved in the same manner and taken as the gray value of the corresponding pixel point in the image output by the processor. In fig. 1, the gray values of the pixel points in the middle region outside the peripheral region are all 255, and the minimum gray difference value of each of these pixel points is 205, so 205 is taken as the gray value of the pixel points in the middle region outside the peripheral region to generate a second gray image.
It should be noted that the above-mentioned application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the present invention is not limited in any way in this respect. Rather, embodiments of the present invention may be applied in any scenario where applicable.
Referring to fig. 2, which is a schematic flowchart of an image processing method according to an embodiment of the present invention, the method may be applied to the scenario shown in fig. 1 and specifically includes the following steps:
step 201: determining a first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on an image to be processed.
Optionally, before determining the first set of pixels, the method further includes: and converting an initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray image, and n is less than or equal to 256.
The initial color image is converted into a gray image, that is, the R, G and B values of each pixel point in the image are made equal. The graying method may be:
R of the grayscale image = (R of the color image + G of the color image + B of the color image)/3;
G of the grayscale image = (R of the color image + G of the color image + B of the color image)/3;
B of the grayscale image = (R of the color image + G of the color image + B of the color image)/3.
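As an illustrative sketch (not part of the patent text), the channel-averaging conversion above can be written as follows; the function name `to_gray` is a hypothetical choice:

```python
import numpy as np

def to_gray(rgb):
    """Equal-weight channel averaging: the R, G and B values of each
    pixel are replaced by (R + G + B) / 3, collapsing the three
    channels into one gray value per pixel."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return (rgb.sum(axis=2) / 3.0).astype(np.uint8)

# A 1 x 2 image: one pure red pixel and one mid-gray pixel.
img = np.array([[[255, 0, 0], [100, 100, 100]]], dtype=np.uint8)
print(to_gray(img)[0, 0], to_gray(img)[0, 1])  # 85 100
```

Luminance-weighted averaging (e.g. 0.299R + 0.587G + 0.114B) is a common alternative, but the patent describes the equal-weight form.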
The most critical factor for a computer in recognizing a target object in an image is the gradient, and many feature extraction algorithms, such as Scale Invariant Feature Transform (SIFT) and Histogram of Oriented Gradients (HOG), are essentially gradient-based. A gray image is used to calculate the gradient of the image, since the color of an image is easily affected by factors such as illumination, making it difficult to extract key information from color alone. Therefore, in the embodiment of the present invention, the original color image needs to be converted into a grayscale image. After graying, the useful information in the image remains, but the image becomes easier to process, and the operation speed of the computer is greatly increased when it processes the image.
Optionally, the peripheral area is: the set of the pixel points located at the outermost layer of the image to be processed.
The pixel points in the peripheral region refer to pixel points on the image boundary, and the extraction of the pixel points on the image boundary is to extract the pixel values of the pixel points in the background region of the image except for the target object, so as to separate the background of the image from the target object in the image in the subsequent image processing process. In a possible implementation manner, the peripheral region may be a set of pixels located on the outermost layer of the image to be processed, or a set of pixels located on the outermost layer of the image to be processed and pixels located adjacent to the pixels located on the outermost layer. Taking the scene in fig. 1 as an example, the peripheral area may be a set of all pixel points in the outermost dark area in fig. 1, or may be a set of all pixel points in the outermost two layers.
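A minimal sketch of collecting such a peripheral region, assuming a NumPy image array; `border_pixels` and its `layers` parameter are illustrative names (layers=1 gives the outermost ring only, layers=2 also includes the adjacent ring, matching the two variants described above):

```python
import numpy as np

def border_pixels(img, layers=1):
    """Return the (row, col) coordinates of the peripheral region:
    the outermost `layers` rings of pixel points of the image."""
    h, w = img.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    mask[:layers, :] = True   # top rows
    mask[-layers:, :] = True  # bottom rows
    mask[:, :layers] = True   # left columns
    mask[:, -layers:] = True  # right columns
    return list(zip(*np.nonzero(mask)))

img = np.zeros((4, 5), dtype=np.uint8)
print(len(border_pixels(img)))  # 14: a 4x5 image has only 6 interior pixels
```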
Step 202: determining a plurality of pixel paths from a first pixel point in a second pixel set to each pixel point in the first pixel set; the first pixel point is any one pixel point in the second pixel set; the second pixel set is a set of the remaining pixel points except the first pixel set in the image to be processed.
As can be seen from the above, the pixel points in the first pixel set are the pixel points in the image boundary region, and then the second pixel set is a set of all the remaining pixel points in the image except the pixel points in the first pixel set, that is, a set of all the pixel points in the middle region in the image except the pixel points in the image boundary region. The first pixel point is any one pixel point in the second pixel set.
Taking fig. 1 as an example, the pixel point M is one of the remaining pixel points other than the pixel points in the peripheral region, and a and b are two pixel points in the set of all pixel points in the peripheral region. There are multiple paths connecting the pixel point M to the pixel point a, of which path 101 and path 102 are taken as examples in this embodiment; similarly, there are multiple paths connecting the pixel point M to the pixel point b, of which path 103 and path 104 are taken as examples. Here the pixel point M serves as the first pixel point: there may be multiple paths from M to the pixel point a in the image boundary region (the first pixel set), only two of which are shown in fig. 1, and likewise multiple paths from M to the pixel point b, two of which are shown.
Step 203: and determining a gray difference value corresponding to each of the plurality of pixel paths, wherein the gray difference value is a gray difference value between a pixel point with the maximum gray value and a pixel point with the minimum gray value in each pixel path.
Taking fig. 1 as an example, on the path 101 connecting the pixel point M to the pixel point a, the maximum of the gray values of all pixel points on the path is m and the minimum is n; on the path 102 connecting the pixel point M to the pixel point a, the maximum is x and the minimum is y; the gray difference value corresponding to the path 101 is (m-n) and that corresponding to the path 102 is (x-y). Similarly, on the path 103 connecting the pixel point M to the pixel point b, the maximum of the gray values of all pixel points on the path is p and the minimum is q; on the path 104 connecting the pixel point M to the pixel point b, the maximum is s and the minimum is w; the gray difference value corresponding to the path 103 is (p-q) and that corresponding to the path 104 is (s-w).
Step 204: and determining the minimum gray difference value in the gray difference values corresponding to the pixel paths as the gray value of the first pixel point.
The gray difference value is used to evaluate how difficult it is for a certain pixel point in the image to be connected to a pixel point in the image boundary region through a path: the larger the gray difference value, the poorer the connectivity between the pixel point and the pixel points in the image boundary region, that is, the poorer the similarity between them, and therefore the higher the possibility that the pixel point belongs to the target object to be extracted from the image.
In an embodiment, as shown in fig. 1, the gray value of the first pixel point may be determined as follows: on the path 101 connecting the pixel point M to the pixel point a, the maximum of the gray values of all pixel points on the path is m and the minimum is n, so (m-n) is the gray difference value corresponding to the path 101. The gray difference values corresponding to all paths connecting the pixel point M to the pixel point a are determined in the same way, and the smallest of them is selected as the minimum gray difference value for M connected to a; the minimum gray difference value for M connected to b is determined likewise, as are the minimum gray difference values for M connected to every pixel point in the first pixel set. For example, if there are k pixel points in the first pixel set, k minimum gray difference values corresponding to the pixel point M can be determined, and the smallest of these k values is selected by comparison, namely the minimum gray difference value of M. The minimum gray difference values of the other pixel points in the second pixel set are determined by the same method, and the calculated minimum gray difference value of each pixel point is taken as its gray value to obtain a second gray image.
Specifically, the gray value is represented by I, and the minimum gray difference value of each pixel point I is calculated according to the following formula (1):
D(i) = min_{p ∈ π(S,i)} F(p)    (1)
where p = <p(0), p(1), …, p(k)> is a connection path connecting the pixel point i to a pixel point in the first pixel set S, passing through the pixel points p(0) to p(k). In the embodiment of the invention, a connection path is only established between each pixel point and its adjacent pixel points on the upper, lower, left and right sides; π(S,i) is the set of all connection paths from the pixel point i to the pixel points in the first pixel set S, and F(p) is the calculation function of the minimum gray difference value corresponding to the connection path p. F(p) is defined as follows:
F(p) = max_j I(p(j)) - min_j I(p(j)), where j is an integer from 0 to k    (2)
I(p(j)) is the gray value of the j-th pixel point on the path p; that is, the difference between the maximum value and the minimum value of the gray values of the pixel points on the path connecting the pixel point i to a pixel point in the first pixel set S is taken. After this difference is calculated for each of the connection paths from the pixel point i to the pixel points in the first pixel set, the minimum value among them is taken as the minimum gray difference value of the pixel point i.
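The path function F(p) of formula (2) reduces to a one-line computation over the gray values along a path; an illustrative sketch (`path_barrier` is a hypothetical name):

```python
def path_barrier(grays):
    """F(p) from formula (2): the spread between the largest and
    smallest gray value encountered along a pixel path p, given as
    the sequence of gray values I(p(0)), ..., I(p(k))."""
    return max(grays) - min(grays)

# A path in the spirit of the fig. 1 example: it starts at M
# (gray 255), crosses intermediate pixels, and ends at a border
# pixel of gray 50 (the intermediate values are made up).
print(path_barrier([255, 255, 120, 50]))  # 205
```

Computing D(i) then means minimizing this quantity over all connection paths from i to the first pixel set, which the alternating-scan procedure below approximates efficiently.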
In a possible implementation manner, before the calculated minimum gray difference value of each pixel point is used as the gray value of each pixel point, the calculated minimum gray difference value of each pixel point may be normalized and mapped to the interval [0,255], and then the value is assigned to the corresponding pixel point in the image, so as to obtain the second gray image.
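A minimal sketch of the normalization to [0, 255] mentioned above; min-max scaling is an assumed interpretation, and `normalize_to_255` is an illustrative name:

```python
import numpy as np

def normalize_to_255(d):
    """Min-max scale an array of minimum gray difference values
    onto the interval [0, 255]."""
    d = np.asarray(d, dtype=np.float64)
    lo, hi = d.min(), d.max()
    if hi == lo:
        # A constant map carries no contrast; map everything to 0.
        return np.zeros(d.shape, dtype=np.uint8)
    return np.round((d - lo) * 255.0 / (hi - lo)).astype(np.uint8)
```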
The second gray image has the same width and height as the image to be processed, and the pixel positions correspond one to one, but the gray values differ: the gray value of each pixel point on the image to be processed is the gray value determined when the color image was converted into a gray image, while the gray value of each pixel point in the second gray image is the minimum gray difference value among the gray difference values corresponding to the multiple paths from that pixel point to each pixel point in the first pixel set.
In another embodiment, the minimum gray difference value corresponding to a pixel point may be determined as shown in fig. 3: the minimum gray difference values of the pixel points in the image can be updated by alternating forward and backward scans. The forward scan traverses the pixels in the image from left to right and from top to bottom, as shown in fig. 3(a), and the backward scan traverses the pixels from right to left and from bottom to top, as shown in fig. 3(b). The specific implementation is as follows:
First, the minimum gray difference values corresponding to all pixel points of the image boundary region (the first pixel set) are set to 0, and the minimum gray difference values corresponding to all pixel points in the second pixel set are set to infinity. The image is then traversed k times, with odd-numbered passes using forward scanning and even-numbered passes using backward scanning; each traversal updates the minimum gray difference value of every pixel point of the image once. The update process is described as follows:
During forward scanning, a three-node path is formed by the pixel point i and its two adjacent pixel points on the left and above; the difference between the maximum and minimum gray values on this path is calculated according to formula (2), and the minimum gray difference value D(i) of the pixel point i is updated with this value;
During backward scanning, a three-node path is formed by the pixel point i and its two adjacent pixel points on the right and below; the difference between the maximum and minimum gray values on this path is calculated according to formula (2), and the minimum gray difference value D(i) of the pixel point i is updated with this value.
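A sketch of the alternating-scan procedure. The patent text states only the three-node update, so the bookkeeping below — auxiliary per-pixel maps tracking the running maximum and minimum gray value along the best path found so far — is one plausible realization, and all names are hypothetical:

```python
import numpy as np

def min_barrier_map(gray, n_sweeps=4):
    """Approximate the per-pixel minimum gray difference D(i) with
    alternating forward/backward raster sweeps over the image."""
    g = np.asarray(gray, dtype=np.int64)
    h, w = g.shape
    INF = np.iinfo(np.int64).max // 4
    D = np.full((h, w), INF, dtype=np.int64)  # second pixel set: infinity
    hi = g.copy()  # max gray along the best path found so far
    lo = g.copy()  # min gray along the best path found so far
    # First pixel set (image boundary region) starts with D = 0.
    D[0, :] = D[-1, :] = D[:, 0] = D[:, -1] = 0

    def relax(y, x, ny, nx):
        # Extend the neighbor's best path by pixel (y, x): formula (2)
        # on the extended path is (new max) - (new min).
        nh = max(hi[ny, nx], g[y, x])
        nl = min(lo[ny, nx], g[y, x])
        if nh - nl < D[y, x]:
            D[y, x] = nh - nl
            hi[y, x] = nh
            lo[y, x] = nl

    for sweep in range(n_sweeps):
        if sweep % 2 == 0:  # forward: left -> right, top -> bottom
            for y in range(h):
                for x in range(w):
                    if y > 0:
                        relax(y, x, y - 1, x)  # neighbor above
                    if x > 0:
                        relax(y, x, y, x - 1)  # neighbor to the left
        else:               # backward: right -> left, bottom -> top
            for y in range(h - 1, -1, -1):
                for x in range(w - 1, -1, -1):
                    if y < h - 1:
                        relax(y, x, y + 1, x)  # neighbor below
                    if x < w - 1:
                        relax(y, x, y, x + 1)  # neighbor to the right
    return D

# Fig. 1-style image: border gray 50, interior gray 255.
img = np.full((5, 5), 50, dtype=np.uint8)
img[1:-1, 1:-1] = 255
print(min_barrier_map(img)[2, 2])  # 205
```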
Step 205: determining a gray threshold; setting the gray values of all pixel points with the gray values larger than the gray threshold value in the second pixel set as a first preset value; setting the gray values of all pixel points in the second pixel set, of which the gray values are smaller than or equal to the gray threshold value, as a second preset value; the first preset value is greater than the second preset value.
In a possible implementation manner, a maximum inter-class difference method may be used to determine a binary segmentation threshold of the image, where the binary segmentation threshold is the above grayscale threshold, and in practical applications, other algorithms may also be used to determine the binary segmentation threshold of the image, such as: an iterative threshold method, a P-quantile method, a global threshold method based on a minimum error, and the like, which are not limited in any way in the embodiments of the present invention. Step 205 is an optional step.
Optionally, the first preset value is a gray value of 255; the second preset value is a gray value of 0.
Binarization processing is carried out on the image according to the binarization segmentation threshold, that is, the gray value of each pixel point in the image is set to either 0 (black) or 255 (white). Specifically, when the gray value of a pixel point in the image is greater than the binarization segmentation threshold, R, G and B in the RGB value of the pixel point are all set to 255 and the pixel point becomes white; when the gray value of a pixel point is less than or equal to the binarization segmentation threshold, R, G and B are all set to 0 and the pixel point becomes black. After each pixel point in the second gray image is subjected to this binarization processing, the second gray image becomes a binary image. The target object in the image to be processed is then extracted using the binary image as an index image: the pixel points with a gray value of 255 in the binary image are determined, and the pixel points corresponding to the target object in the image to be processed are determined according to the positions of those pixel points in the binary image, so that the target object is extracted from the image to be processed.
For example, the maximum inter-class difference method determines that the gray threshold of an image is 150. Binarization is performed on the gray image according to this threshold: the gray value of every pixel point whose gray value is greater than 150 is set to 255, and the gray value of every pixel point whose gray value is less than or equal to 150 is set to 0. After all gray values in the gray image are processed in this way, every gray value is either 0 or 255, that is, the image contains only black and white and is a binary image. The pixel points with a gray value of 255 in the binary image are determined, and the pixel points corresponding to the target object in the image to be processed are determined according to the corresponding positions of those pixel points, so that the target object is extracted from the image to be processed.
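An illustrative sketch of the maximum inter-class difference (Otsu) threshold selection and the binarization step; the function names are hypothetical, and the Otsu computation follows the standard between-class-variance formulation rather than any implementation specified by the patent:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold that maximizes the between-class variance
    (the maximum inter-class difference method)."""
    hist = np.bincount(np.asarray(gray, dtype=np.uint8).ravel(), minlength=256)
    total = int(hist.sum())
    mean_all = float((np.arange(256) * hist).sum()) / total
    best_t, best_var = 0, -1.0
    cum, cum_mean = 0.0, 0.0
    for t in range(256):
        cum += hist[t]          # pixels at or below t
        cum_mean += t * hist[t]  # gray mass at or below t
        if cum == 0 or cum == total:
            continue
        w0 = cum / total
        m0 = cum_mean / cum
        m1 = (mean_all * total - cum_mean) / (total - cum)
        var = w0 * (1 - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(gray, t):
    """Gray value > t -> 255 (white, candidate target); <= t -> 0 (black)."""
    return np.where(np.asarray(gray) > t, 255, 0).astype(np.uint8)

# Bimodal toy image: half the pixels at gray 50, half at gray 200.
img = np.array([50] * 50 + [200] * 50, dtype=np.uint8)
t = otsu_threshold(img)
print(t, int(binarize(img, t).max()))  # 50 255
```

As the text notes, an iterative threshold method, the P-quantile method, or a minimum-error global threshold could be substituted for the threshold selection.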
Based on the same inventive concept, embodiments of the present invention provide an image processing apparatus capable of implementing the functions of the foregoing image processing method. The image processing apparatus may be a hardware structure, a software module, or a combination of the two. The apparatus may be implemented by a chip system, which may consist of a chip alone or may include a chip together with other discrete devices. Referring to fig. 4, the apparatus includes a determining unit 401 and a processing unit 402, wherein:
a determining unit 401, configured to determine a first pixel set, where the first pixel set is the set of all pixel points in an upper peripheral region of the image to be processed;
the determining unit 401 is further configured to determine multiple pixel paths between a first pixel point in the second pixel set and each pixel point in the first pixel set; the first pixel point is any one pixel point in the second pixel set; the second pixel set is a set of the remaining pixel points except the first pixel set in the image to be processed;
a processing unit 402, configured to determine a grayscale difference value corresponding to each of the multiple pixel paths, where the grayscale difference value is a grayscale difference value between a pixel point with a maximum grayscale value and a pixel point with a minimum grayscale value in each of the pixel paths;
the processing unit 402 is further configured to determine a minimum gray difference value of the multiple gray difference values corresponding to the multiple pixel paths as the gray value of the first pixel point.
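Taken together, the units above compute, for every pixel, the smallest gray span (maximum gray minus minimum gray along a path) over all paths to the upper peripheral region. As a hedged sketch, this can be approximated Dijkstra-style, expanding pixels in order of their current best span; keeping only one (hi, lo) pair per pixel is a simplification, so this is an approximation of the exhaustive path enumeration described above, not the patent's implementation:

```python
import heapq
import numpy as np

def min_gray_span(img, seeds):
    """For each pixel, approximate the minimum over all 4-connected paths
    to any seed pixel of (max gray on path - min gray on path).
    `seeds` plays the role of the first pixel set (upper peripheral region)."""
    h, w = img.shape
    cost = np.full((h, w), np.inf)       # best known span per pixel
    hi = np.zeros((h, w))                # path max at the best span
    lo = np.zeros((h, w))                # path min at the best span
    pq = []
    for r, c in seeds:                   # a seed path is one pixel: span 0
        cost[r, c] = 0.0
        hi[r, c] = lo[r, c] = img[r, c]
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > cost[r, c]:
            continue                     # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nhi = max(hi[r, c], img[nr, nc])
                nlo = min(lo[r, c], img[nr, nc])
                if nhi - nlo < cost[nr, nc]:
                    cost[nr, nc] = nhi - nlo
                    hi[nr, nc], lo[nr, nc] = nhi, nlo
                    heapq.heappush(pq, (nhi - nlo, nr, nc))
    return cost
```

On a uniform background with a bright blob, background pixels get span 0 while blob pixels get the blob-to-background contrast, which is exactly the contrast that the later thresholding step exploits.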
In a possible embodiment, the apparatus further comprises a conversion unit configured to: and converting an initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray image, and n is less than or equal to 256.
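A minimal sketch of this conversion step (the integer luminance weights 299/587/114 and the uniform quantization are common choices assumed here; the patent does not specify a conversion formula):

```python
import numpy as np

def to_gray(rgb, n=256):
    """Convert a color (RGB) image into an n-level gray image (2 <= n <= 256),
    using integer luminance weights; levels are spread back over 0..255."""
    r = rgb[..., 0].astype(np.int64)
    g = rgb[..., 1].astype(np.int64)
    b = rgb[..., 2].astype(np.int64)
    gray = (299 * r + 587 * g + 114 * b) // 1000   # 0..255 luminance
    levels = gray * n // 256                       # quantize to 0..n-1
    return (levels * 255 // (n - 1)).astype(np.uint8)
```

With n=256 this is an ordinary 8-bit gray conversion; smaller n collapses the image to fewer gray levels before the path computation.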
In one possible implementation, the processing unit 402 is further configured to: determine a gray threshold; set the gray values of all pixel points in the second pixel set whose gray values are greater than the gray threshold to a first preset value; and set the gray values of all pixel points in the second pixel set whose gray values are less than or equal to the gray threshold to a second preset value, where the first preset value is greater than the second preset value.
In a possible embodiment, the first preset value is a gray value of 255; the second preset value is a gray value of 0.
For the details of each step of the foregoing image processing method embodiment, reference may be made to the functional description of the corresponding functional module of the image processing apparatus in the embodiments of the present application; they are not repeated here.
The division of modules in the embodiments of the present invention is schematic and reflects only one logical division of functions; other divisions are possible in actual implementation. In addition, the functional modules in the embodiments of the present application may be integrated into one processor, may each exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to fig. 5, based on the same inventive concept, an embodiment of the present invention provides an image processing apparatus, which includes at least one processor 501, where the processor 501 is configured to execute a computer program stored in a memory, and implement the steps of the image processing method shown in fig. 2 provided by the embodiment of the present invention.
Alternatively, the processor 501 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the image processing method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
Optionally, the image processing apparatus may further include a memory 502 connected to the at least one processor 501, the memory 502 stores instructions executable by the at least one processor 501, and the at least one processor 501 may execute the steps included in the foregoing image processing method by executing the instructions stored in the memory 502.
In this embodiment of the present invention, the specific connection medium between the processor 501 and the memory 502 is not limited. The memory 502 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 502 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 502 in the embodiments of the present invention may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 501, the code corresponding to the image processing method described in the foregoing embodiments may be solidified into the chip, so that the chip can execute the steps of that method at run time; how to program the processor 501 is a technique well known to those skilled in the art and is not described here. The physical devices corresponding to the determining unit 401 and the processing unit 402 may both be the aforementioned processor 501. The image processing apparatus may be configured to perform the method provided by the embodiment shown in fig. 2; for the functions of each functional module of the apparatus, reference may be made to the corresponding description of that embodiment, which is not repeated here.
Based on the same inventive concept, embodiments of the present invention also provide a computer-readable storage medium storing computer instructions, which, when executed on a computer, cause the computer to perform the steps of the image processing method as described above.
In some possible embodiments, the aspects of the image processing method provided by the present application may also be implemented in the form of a program product, which includes program code for causing a detection apparatus to perform the steps in the image processing method according to various exemplary embodiments of the present application described above in this specification, when the program product is run on an electronic device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (12)

1. An image processing method, comprising:
determining a first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on an image to be processed;
determining a plurality of pixel paths from a first pixel point in a second pixel set to each pixel point in the first pixel set; the first pixel point is any one pixel point in the second pixel set; the second pixel set is a set of the remaining pixel points except the first pixel set in the image to be processed;
determining a gray difference value corresponding to each of the plurality of pixel paths, wherein the gray difference value is a gray difference value between a pixel point with a maximum gray value and a pixel point with a minimum gray value on each pixel path;
and determining the minimum gray difference value in the gray difference values corresponding to the pixel paths as the gray value of the first pixel point.
2. The method of claim 1, wherein prior to said determining the first set of pixels, the method further comprises:
converting an initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray image, and n is less than or equal to 256.
3. The method of claim 1, wherein the peripheral area is the set of the pixel points located at the outermost layer of the image to be processed.
4. The method of claim 1, wherein the method further comprises:
determining a gray threshold;
setting the gray values of all pixel points in the second pixel set whose gray values are greater than the gray threshold to a first preset value; and
setting the gray values of all pixel points in the second pixel set whose gray values are less than or equal to the gray threshold to a second preset value; wherein the first preset value is greater than the second preset value.
5. The method of claim 4, wherein the first preset value is a gray value of 255; the second preset value is a gray value of 0.
6. An image processing apparatus characterized by comprising:
a determining unit, configured to determine a first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on an image to be processed;
the determining unit is further configured to determine a plurality of pixel paths between a first pixel point in the second pixel set and each pixel point in the first pixel set; the first pixel point is any one pixel point in the second pixel set; the second pixel set is a set of the remaining pixel points except the first pixel set in the image to be processed;
a processing unit, configured to determine a gray difference value corresponding to each of the plurality of pixel paths, wherein the gray difference value is the gray difference between the pixel point with the maximum gray value and the pixel point with the minimum gray value on each pixel path;
the processing unit is further configured to determine a minimum gray difference value of the multiple gray difference values corresponding to the multiple pixel paths as the gray value of the first pixel point.
7. The apparatus of claim 6, further comprising a conversion unit to:
converting an initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray image, and n is less than or equal to 256.
8. The apparatus of claim 6, wherein the processing unit is further configured to:
determining a gray threshold;
setting the gray values of all pixel points in the second pixel set whose gray values are greater than the gray threshold to a first preset value; and
setting the gray values of all pixel points in the second pixel set whose gray values are less than or equal to the gray threshold to a second preset value; wherein the first preset value is greater than the second preset value.
9. The apparatus of claim 8, wherein the first preset value is a gray value of 255; the second preset value is a gray value of 0.
10. An image processing apparatus, characterized in that the apparatus comprises: a processor and a memory;
the memory is for storing computer-executable instructions that, when executed by the processor, cause the image processing apparatus to perform the method of any of claims 1-5.
11. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-5.
12. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 5.
CN202010908464.8A 2020-09-02 2020-09-02 Image processing method and device Pending CN112149674A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010908464.8A CN112149674A (en) 2020-09-02 2020-09-02 Image processing method and device


Publications (1)

Publication Number Publication Date
CN112149674A 2020-12-29

Family

ID=73889874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010908464.8A Pending CN112149674A (en) 2020-09-02 2020-09-02 Image processing method and device

Country Status (1)

Country Link
CN (1) CN112149674A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205497A (en) * 2021-04-30 2021-08-03 扬州能煜检测科技有限公司 Image processing method, device, equipment and medium for double-wire type image quality meter

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR19990034030A (en) * 1997-10-28 1999-05-15 윤종용 One-dimensional local adaptive image binarization apparatus and method
KR20040092561A (en) * 2003-04-24 2004-11-04 주식회사신도리코 Method for segmenting Scan Image
WO2007006114A1 (en) * 2005-07-07 2007-01-18 Vx Technologies Inc. Methods for silhouette extraction
CN104010129A (en) * 2014-04-23 2014-08-27 小米科技有限责任公司 Image processing method, device and terminal
CN106530337A (en) * 2016-10-31 2017-03-22 武汉市工程科学技术研究院 Non local stereopair dense matching method based on image gray scale guiding
CN108921869A (en) * 2018-06-29 2018-11-30 新华三信息安全技术有限公司 A kind of image binaryzation method and device
CN108961291A (en) * 2018-08-10 2018-12-07 广东工业大学 A kind of method of Image Edge-Detection, system and associated component
CN109903294A (en) * 2019-01-25 2019-06-18 北京三快在线科技有限公司 Image processing method, device, electronic equipment and readable storage medium storing program for executing
CN111179182A (en) * 2019-11-21 2020-05-19 珠海格力智能装备有限公司 Image processing method and device, storage medium and processor




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination