CN112149674B - Image processing method and device - Google Patents


Info

Publication number: CN112149674B
Authority: CN (China)
Prior art keywords: pixel, gray, image, value, pixel point
Legal status: Active
Application number: CN202010908464.8A
Other languages: Chinese (zh)
Other versions: CN112149674A (en)
Inventors: 万成涛, 陈彦宇, 马雅奇, 谭龙田, 李春光
Current Assignee: Gree Electric Appliances Inc of Zhuhai; Zhuhai Lianyun Technology Co Ltd
Original Assignee: Gree Electric Appliances Inc of Zhuhai; Zhuhai Lianyun Technology Co Ltd
Application filed by Gree Electric Appliances Inc of Zhuhai and Zhuhai Lianyun Technology Co Ltd
Priority: CN202010908464.8A
Publication of CN112149674A
Application granted
Publication of CN112149674B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds


Abstract

The embodiment of the invention discloses an image processing method and device for improving the quality of image target extraction. The method comprises the following steps: determining a first pixel set, which is the set of all pixel points in a peripheral area of an image to be processed; determining a plurality of pixel paths from a first pixel point in a second pixel set to each pixel point in the first pixel set, where the first pixel point is any pixel point in the second pixel set and the second pixel set is the set of pixel points remaining in the image to be processed outside the first pixel set; determining the gray difference value corresponding to each of the plurality of pixel paths, that is, the difference between the maximum and minimum gray values among the pixel points on each pixel path; and determining the minimum of the gray difference values corresponding to the pixel paths as the gray value of the first pixel point.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
Image target extraction refers to separating an image target of interest from the background in a single image or an image sequence. It is widely required in application scenarios such as human body detection, target tracking, action recognition, and various image/video processing tasks. Rapidly and accurately extracting the target object from an image filters out unnecessary background information for subsequent image or video analysis and reduces interference. Commonly used image target extraction methods are based on background modeling, frame differencing, deep learning model training, and the like. However, current image target extraction methods still need to be improved in terms of extraction quality.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, which are used for improving the extraction quality of an image target.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
Determining a first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on an image to be processed;
determining a plurality of pixel paths from a first pixel point in a second pixel set to each pixel point in the first pixel set; the first pixel point is any pixel point in the second pixel set; the second pixel set is the set of pixel points remaining in the image to be processed outside the first pixel set;
determining a gray difference value corresponding to each of the plurality of pixel paths, wherein the gray difference value is a gray difference value between a pixel point with a maximum gray value and a pixel point with a minimum gray value on each pixel path;
and determining the minimum gray difference value among the gray difference values corresponding to the pixel paths as the gray value of the first pixel point.
Optionally, before the determining the first set of pixels, the method further includes:
converting an initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray image, and n is less than or equal to 256.
Optionally, the peripheral area is the set of the outermost pixel points of the image to be processed.
Optionally, the method further comprises: determining a gray threshold;
setting the gray values of all pixel points with gray values larger than the gray threshold value in the second pixel set as a first preset value;
setting the gray values of all pixel points with gray values smaller than or equal to the gray threshold value in the second pixel set as a second preset value; the first preset value is greater than the second preset value.
Optionally, the first preset value is a gray value 255; the second preset value is a gray value 0.
In a second aspect, an embodiment of the present invention provides an image processing apparatus including:
the determining unit is used for determining the first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on an image to be processed;
the determining unit is further configured to determine a plurality of pixel paths between a first pixel point in the second pixel set and each pixel point in the first pixel set; the first pixel point is any pixel point in the second pixel set; the second pixel set is the set of pixel points remaining in the image to be processed outside the first pixel set;
The processing unit is used for determining a gray level difference value corresponding to each of the plurality of pixel paths, wherein the gray level difference value is a gray level difference value between a pixel point with a maximum gray level value and a pixel point with a minimum gray level value on each pixel path;
The processing unit is further configured to determine that a minimum gray difference value among the gray difference values corresponding to the pixel paths is a gray value of the first pixel point.
Optionally, the apparatus further comprises a conversion unit for:
And converting the initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray scale image, and n is less than or equal to 256.
Optionally, the processing unit is further configured to:
determining a gray threshold;
setting the gray values of all pixel points with gray values larger than the gray threshold value in the second pixel set as a first preset value;
setting the gray values of all pixel points with gray values smaller than or equal to the gray threshold value in the second pixel set as a second preset value; the first preset value is greater than the second preset value.
Optionally, the first preset value is a gray value 255; the second preset value is a gray value 0.
In a third aspect, an embodiment of the present invention provides an image processing apparatus including:
a memory for storing computer instructions;
and a processor coupled to the memory, configured to execute the computer instructions stored in the memory so as to perform the method provided in the first aspect when the computer instructions are executed.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform a method as provided in the first aspect above.
In a fifth aspect, embodiments of the present invention provide a computer program product which, when run on a computer, causes the computer to perform the method as provided in the first aspect above.
The invention has the following beneficial effects: the provided image processing method and device first determine a plurality of connection paths between a pixel point in a gray image and all pixel points in the peripheral area, determine for each of these paths the difference between the maximum and minimum gray values of the pixel points on the path, and then take the minimum of these differences as the minimum gray difference value of that pixel point. The minimum gray difference value measures the connectivity between the pixel point and the pixel points of the image boundary area, and whether the pixel point belongs to the target object to be extracted is determined according to this connectivity. In this way, the extraction precision of the image target reaches the pixel level, so the quality of image target extraction is improved.
Drawings
Fig. 1 is a schematic view of a scenario of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an embodiment of an image processing method according to the present invention;
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, embodiments of the technical solution of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments described herein without creative effort fall within the scope of protection of the present application.
As described above, currently common image target extraction methods include background modeling, frame differencing, and deep learning model training. However, the extraction quality of these methods still needs to be improved.
In view of this, an embodiment of the present invention provides an image processing method, including: determining a first pixel set, which is the set of all pixel points in a peripheral area of an image to be processed; determining a plurality of pixel paths from a first pixel point in a second pixel set to each pixel point in the first pixel set, where the first pixel point is any pixel point in the second pixel set and the second pixel set is the set of pixel points remaining in the image to be processed outside the first pixel set; determining the gray difference value corresponding to each of the plurality of pixel paths, that is, the difference between the maximum and minimum gray values among the pixel points on each path; and taking the minimum of these gray difference values as the gray value of the first pixel point. In this method, the minimum gray difference value of a pixel point measures its connectivity to the pixel points of the image boundary area, and whether the pixel point belongs to the target object to be extracted is judged according to this connectivity, so the extraction precision of the image target reaches the pixel level and the extraction quality is improved.
The following describes the technical scheme provided by the embodiment of the invention with reference to the attached drawings.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application. The scenario illustrates the processing of the gray value of one pixel M in an image by the image processing device. The set of pixels in the dark area around the image in Fig. 1 is the set of all pixel points in the peripheral area of the image to be processed. Alternatively, instead of taking only the outermost layer of image pixels as the peripheral area, the outermost layer together with the pixels adjacent to it may be taken; no limitation is imposed here.
In Fig. 1, the gray value of point M in the left image is 255 (white), and the gray value of all pixels in the dark peripheral area around the image is 50. The processor first determines the connection paths between pixel point M and all pixels in the peripheral area. Take the two peripheral pixel points a and b as an example: there are many paths between pixel point M and pixel point a, two of which, path 101 and path 102, are marked in the figure. On path 101, the maximum gray value among its pixel points is 255 and the minimum is 50, so the difference between the maximum and minimum gray values on path 101 is 205. The processor calculates this difference for every connection path from pixel point M to pixel point a and compares them; the minimum difference obtained is 205. The same method yields the differences between the maximum and minimum gray values on all connection paths from pixel point M to every pixel point in the peripheral area, and the minimum of these is taken as the minimum gray difference value of pixel point M. The minimum gray difference value of every other pixel point is obtained in the same way and is used as the gray value of the corresponding pixel point in the processor's output image. Since the gray values of the pixel points in the middle area outside the peripheral area in Fig. 1 are all 255, each of them obtains a minimum gray difference value of 205, so 205 is used as the gray value of these pixel points and a second gray image is generated.
It should be noted that the above-mentioned application scenarios are only shown for facilitating understanding of the spirit and principles of the present invention, and the present invention examples are not limited in this respect. Rather, embodiments of the present invention may be applied to any scenario where applicable.
Referring to fig. 2, a flowchart of an image processing method according to an embodiment of the present invention may be applied to the scenario shown in fig. 1, and specifically, the method includes the following steps:
Step 201: and determining a first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on the image to be processed.
Optionally, before determining the first pixel set, the method further includes: converting an initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray image, and n is less than or equal to 256.
Converting the original color image into a gray image means making the RGB values of each pixel point in the image satisfy R = G = B. One graying method is channel averaging:
R of the gray image = (R of color image + G of color image + B of color image) / 3;
G of the gray image = (R of color image + G of color image + B of color image) / 3;
B of the gray image = (R of color image + G of color image + B of color image) / 3.
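As a minimal sketch (using NumPy; the function name is illustrative, not from the patent), the channel-averaging rule above can be written as:

```python
import numpy as np

def to_gray(rgb):
    """Convert a color image to gray by averaging the R, G, B channels,
    as in the formulas above: gray = (R + G + B) / 3.

    rgb: uint8 array of shape (H, W, 3); returns a uint8 array (H, W).
    """
    return (rgb.astype(np.float32).sum(axis=2) / 3.0).astype(np.uint8)
```

Assigning this single channel back to R, G, and B reproduces the R = G = B condition of the grayed image.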
The most critical factor when a computer recognizes a target object in an image is the gradient, and many feature extraction algorithms, such as the Scale-Invariant Feature Transform (SIFT) and the Histogram of Oriented Gradients (HOG), are gradient-based in nature. The gradient of an image is computed on a gray image, because the color of an image is easily affected by factors such as illumination, and it is difficult to extract key information from color alone. Therefore, in the embodiment of the present invention, the original color image is first converted into a gray image. After graying, the useful information in the image remains, but the image becomes easier to process, and computations on it run much faster.
Optionally, the peripheral area is: and the set of the pixel points at the outermost layer on the image to be processed.
The pixel points of the peripheral area are the pixel points on the image boundary. They are extracted so that, in the subsequent processing, the pixel values of the background area outside the target object can be used to separate the background from the target object. In one possible implementation, the peripheral area may be the set of the outermost pixel points of the image to be processed, or the set of the outermost pixel points together with the pixel points adjacent to them. Taking the scenario in Fig. 1 as an example, the peripheral area may be the set of all pixels of the darkest outermost ring, or the set of all pixels of the two outermost rings; the embodiment of the present invention imposes no limitation on this.
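A sketch of how the first pixel set could be selected for either variant (one ring or two), assuming a NumPy boolean mask; the function name is illustrative:

```python
import numpy as np

def peripheral_mask(h, w, layers=1):
    """Boolean mask of the peripheral area: the outermost `layers` rings
    of an h-by-w image (layers=1: just the border; layers=2: the border
    plus the ring adjacent to it)."""
    mask = np.zeros((h, w), dtype=bool)
    mask[:layers, :] = True   # top rows
    mask[-layers:, :] = True  # bottom rows
    mask[:, :layers] = True   # left columns
    mask[:, -layers:] = True  # right columns
    return mask
```

The first pixel set is then `gray[mask]`, and the second pixel set is `gray[~mask]`.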
Step 202: determining a plurality of pixel paths from a first pixel point in a second pixel set to each pixel point in the first pixel set; the first pixel point is any pixel point in the second pixel set; the second pixel set is a set of pixel points which are remained in the image to be processed except the first pixel set.
As described above, the pixel points in the first pixel set are the pixel points of the image boundary region, and the second pixel set is the set of all remaining pixel points of the image, that is, all pixel points of the middle region outside the boundary region. The first pixel point is any pixel point in the second pixel set.
Taking Fig. 1 as an example, pixel point M is one of the remaining pixel points outside the peripheral area, and a and b are two pixel points in the set of all pixel points of the peripheral area. As a first pixel point, M may be connected to pixel point a of the image boundary area (the first pixel set) by many paths, only two of which, paths 101 and 102, are marked in Fig. 1. Likewise, there may be many paths from M to another boundary pixel point b, of which two, paths 103 and 104, are marked.
Step 203: and determining a gray level difference value corresponding to each of the plurality of pixel paths, wherein the gray level difference value is the gray level difference value between the pixel point with the maximum gray level value and the pixel point with the minimum gray level value on each pixel path.
Taking Fig. 1 as an example: on path 101 connecting pixel point M to pixel point a, the maximum of the gray values of all pixel points on the path is m and the minimum is n; on path 102 connecting M to a, the maximum is x and the minimum is y. The gray difference value corresponding to path 101 is therefore (m-n) and that corresponding to path 102 is (x-y). Similarly, on path 103 connecting M to pixel point b, the maximum gray value is p and the minimum is q; on path 104 connecting M to b, the maximum is s and the minimum is w. The gray difference value corresponding to path 103 is (p-q) and that corresponding to path 104 is (s-w).
Step 204: and determining the minimum gray level difference value in the gray level differences corresponding to the pixel paths as the gray level value of the first pixel point.
The gray difference value evaluates how difficult it is for a pixel point at a given position to connect to a pixel point of the image boundary region through some path. The larger the gray difference value, the poorer the connectivity, i.e. the similarity, between that pixel point and the boundary pixel points, and the more likely the pixel point belongs to the target object to be extracted from the image.
In one embodiment, as shown in Fig. 1, the gray value of the first pixel point may be determined as follows, taking pixel point M as the first pixel point. On path 101 connecting M to pixel point a, the maximum gray value among the pixel points on the path is m and the minimum is n, so (m-n) is the gray difference value corresponding to path 101. The gray difference values corresponding to all paths from M to a are determined in the same way, and the smallest of them is selected as the minimum gray difference value for M connecting to a. The minimum gray difference value for M connecting to b is determined likewise, and so on for M connecting to every pixel point in the first pixel set. For example, if there are k pixel points in the first pixel set, k minimum gray difference values are obtained for M, and the smallest of these k values is the minimum gray difference value of M. The minimum gray difference value of every other pixel point in the second pixel set is determined by the same method, and the calculated minimum gray difference value of each pixel point is used as its gray value to obtain a second gray image.
Specifically, the gray value is denoted by I, and the minimum gray difference value D(i) of each pixel point i is calculated according to the following formula (1):

D(i) = min_{p ∈ Π(S,i)} F(p)   (1)

where p = ⟨p(0), p(1), …, p(k)⟩ is a connection path from pixel point i to a pixel point in the first pixel set S, with k pixel points on the path. In the embodiment of the invention, each pixel point establishes connection paths only with its upper, lower, left, and right adjacent pixel points. Π(S,i) is the set of all connection paths from pixel point i to pixel points in the first pixel set S, and F(p) is the function computing the gray difference value on connection path p, defined by formula (2):

F(p) = max_j I(p(j)) - min_j I(p(j)), j = 0, 1, …, k   (2)

where I(p(j)) is the gray value of the j-th of the k pixel points on path p. That is, formula (2) takes the difference between the maximum and minimum of the gray values of the pixel points on a connection path from pixel point i to the first pixel set S. After this difference is calculated for every connection path from pixel point i to the first pixel set, the minimum over all paths is taken as the minimum gray difference value of pixel point i.
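Formulas (1) and (2) can be sketched directly for explicitly enumerated paths, each path given as the list of gray values of its pixel points. This brute-force form is only illustrative, since enumerating all paths is infeasible for real images; the function names are assumptions:

```python
def path_barrier(grays):
    """F(p) of formula (2): the difference between the largest and
    smallest gray value among the pixel points on path p."""
    return max(grays) - min(grays)

def min_gray_difference(paths):
    """Formula (1) restricted to an enumerated candidate path set:
    the smallest F(p) over the given paths."""
    return min(path_barrier(p) for p in paths)
```

For the Fig. 1 example, a path whose gray values range from 50 to 255 has barrier 255 - 50 = 205.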
In one possible implementation, before the calculated minimum gray difference value of each pixel point is used as its gray value, the values may first be normalized and mapped to the interval [0, 255], and then assigned to the corresponding pixel points in the image to obtain the second gray image.
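The normalization step could be sketched as a linear map to [0, 255]; mapping a degenerate constant image to all zeros is an assumption of this sketch, not stated in the text:

```python
import numpy as np

def normalize_to_255(d):
    """Linearly map an array of minimum gray difference values to [0, 255]."""
    d = d.astype(np.float64)
    lo, hi = d.min(), d.max()
    if hi == lo:  # constant image: no spread to normalize
        return np.zeros_like(d, dtype=np.uint8)
    return np.round((d - lo) / (hi - lo) * 255.0).astype(np.uint8)
```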
The second gray image has the same width and height as the image to be processed, and its pixel points correspond one to one, but the gray values differ: the gray value of a pixel point of the image to be processed is the one obtained when the color image was converted to a gray image, while the gray value of a pixel point in the second gray image is the minimum of the gray difference values corresponding to the paths from that pixel point to each pixel point in the first pixel set.
In another embodiment, the minimum gray difference value corresponding to a pixel point may be determined as shown in Fig. 3: the minimum gray difference values of the pixel points in the image are updated by alternating forward and backward scans. Forward scanning, shown in Fig. 3(a), traverses the pixels in the image from left to right and from top to bottom; backward scanning, shown in Fig. 3(b), traverses them from right to left and from bottom to top. The specific implementation is as follows:
First, the minimum gray difference value of all pixel points of the image boundary area (the first pixel set) is set to 0, and that of all pixel points in the second pixel set is set to infinity. The image is then traversed k times, with odd-numbered traversals using forward scanning and even-numbered traversals using backward scanning; each traversal updates the minimum gray difference value of all pixel points of the image once. The updating process is as follows:
During forward scanning, pixel point i and its two adjacent pixel points to the left and above form a path with three nodes; the difference between the maximum and minimum gray values on this path is calculated according to formula (2), and the minimum gray difference value D(i) of pixel point i is updated with this value.
During backward scanning, pixel point i and its two adjacent pixel points to the right and below form a path with three nodes; the difference between the maximum and minimum gray values on this path is calculated according to formula (2), and D(i) is updated with this value.
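The alternating scans above can be sketched as follows. Beyond the patent's condensed three-node description, this sketch also tracks, for each pixel, the highest (U) and lowest (L) gray value along its current best path so that the barrier of an extended path can be updated incrementally; that bookkeeping is an assumption of this illustration, not stated in the text:

```python
import numpy as np

def mbd_transform(gray, passes=4):
    """Approximate the minimum gray difference value of every pixel by
    alternating forward and backward raster scans.

    Boundary pixels (the first pixel set) are seeded with 0; interior
    pixels start at infinity, as described above.
    """
    h, w = gray.shape
    g = gray.astype(np.int64)
    INF = 1 << 30
    D = np.full((h, w), INF, dtype=np.int64)
    D[0, :] = D[-1, :] = D[:, 0] = D[:, -1] = 0
    U = g.copy()  # highest gray on the current best path to each pixel
    L = g.copy()  # lowest gray on the current best path to each pixel

    def relax(i, j, ni, nj):
        if D[ni, nj] >= INF:  # neighbor has no usable path yet
            return
        u = max(U[ni, nj], g[i, j])
        l = min(L[ni, nj], g[i, j])
        if u - l < D[i, j]:
            D[i, j], U[i, j], L[i, j] = u - l, u, l

    for k in range(passes):
        if k % 2 == 0:  # forward: left to right, top to bottom
            for i in range(h):
                for j in range(w):
                    if i > 0:
                        relax(i, j, i - 1, j)
                    if j > 0:
                        relax(i, j, i, j - 1)
        else:           # backward: right to left, bottom to top
            for i in range(h - 1, -1, -1):
                for j in range(w - 1, -1, -1):
                    if i < h - 1:
                        relax(i, j, i + 1, j)
                    if j < w - 1:
                        relax(i, j, i, j + 1)
    return D
```

This is essentially a raster-scan minimum barrier distance transform: a pixel whose result is large is poorly connected to the boundary and is therefore likely part of the target.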
Step 205: determining a gray threshold; setting the gray values of all pixel points with gray values larger than the gray threshold value in the second pixel set as a first preset value; setting the gray values of all pixel points with gray values smaller than or equal to the gray threshold value in the second pixel set as a second preset value; the first preset value is greater than the second preset value.
In a possible implementation, the maximum between-class variance method (Otsu's method) may be used to determine the binary segmentation threshold of the image, which is the gray threshold. In practical applications, other algorithms may also be used to determine the binary segmentation threshold, for example an iterative thresholding method, the P-tile method, or a global thresholding method based on minimum error; the embodiments of the present invention do not limit this. Step 205 is an optional step.
Optionally, the first preset value is a gray value 255; the second preset value is a gray value 0.
The image is binarized based on the binary segmentation threshold, that is, the gray value of each pixel in the image is set to either 0 (black) or 255 (white). Specifically, when the gray value of a pixel point is greater than the binary segmentation threshold, its RGB values are set to R=255, G=255, B=255 and the pixel becomes white; when the gray value is smaller than or equal to the threshold, R=0, G=0, B=0 and the pixel becomes black. After each pixel point of the second gray image is binarized in this way, the second gray image becomes a binary image. The target object in the image to be processed is then extracted using the binary image as an index image: the pixel points with gray value 255 in the binary image are determined, and the pixel points at the corresponding positions in the image to be processed are taken as the pixel points of the target object, so that the target object is extracted from the image to be processed.
For example, suppose the maximum inter-class difference method determines that the gray threshold of an image is 150. The gray image is binarized according to this threshold: the gray value of every pixel greater than 150 is changed to 255, and the gray value of every pixel smaller than or equal to 150 is changed to 0. After all pixels are processed in this way, every gray value in the image is either 0 or 255, that is, the image has only the two colors black and white and is a binary image. The pixels with gray value 255 in the binary image are then determined, and the pixels at the corresponding positions in the image to be processed are taken as the pixels of the target object, so that the target object is extracted from the image to be processed.
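The binarization and mask-based extraction described in the example above can be sketched as follows. This is an illustrative Python sketch; the function names and the use of `None` to mark masked-out pixels are assumptions, not part of the patent.

```python
def binarize(gray, threshold):
    """Binarize a grayscale image (list of rows of gray values):
    gray value > threshold -> 255 (white), otherwise -> 0 (black)."""
    return [[255 if v > threshold else 0 for v in row] for row in gray]

def extract_target(image, binary):
    """Use the binary image as an index image: keep only those pixels of
    `image` whose corresponding binary pixel has gray value 255."""
    return [[image[i][j] if binary[i][j] == 255 else None
             for j in range(len(row))]
            for i, row in enumerate(binary)]
```

With a threshold of 150, a 2x2 gray image `[[100, 200], [160, 40]]` binarizes to `[[0, 255], [255, 0]]`, and only the pixels under the white mask positions survive extraction.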
Based on the same inventive concept, an embodiment of the present invention provides an image processing apparatus, which can implement the functions corresponding to the foregoing image processing method. The image processing apparatus may be a hardware structure, a software module, or a hardware structure plus a software module. The image processing apparatus may be implemented by a chip system, and the chip system may be constituted by a chip, or may include a chip and other discrete devices. Referring to fig. 4, the apparatus includes a determining unit 401 and a processing unit 402, wherein:
A determining unit 401, configured to determine the first pixel set, where the first pixel set is a set of all pixel points in a peripheral area on an image to be processed;
The determining unit 401 is further configured to determine a plurality of pixel paths between a first pixel point in the second pixel set and each pixel point in the first pixel set; the first pixel point is any pixel point in the second pixel set; the second pixel set is a set of pixel points which are remained in the image to be processed except the first pixel set;
A processing unit 402, configured to determine a gray level difference value corresponding to each of the plurality of pixel paths, where the gray level difference value is a gray level difference value between a pixel point having a maximum gray level value and a pixel point having a minimum gray level value on each of the plurality of pixel paths;
The processing unit 402 is further configured to determine that a minimum gray level difference value among the gray level differences corresponding to the pixel paths is the gray level value of the first pixel point.
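The computation performed by the determining unit 401 and the processing unit 402 — the minimum, over all pixel paths from an interior pixel to the peripheral area, of the difference between the maximum and minimum gray values on the path — can be sketched by brute-force enumeration of 4-connected simple paths. This is an illustrative Python sketch, feasible only for tiny images; the patent does not prescribe path connectivity or a search strategy, so those are assumptions here.

```python
from itertools import product

def border_pixels(h, w):
    """First pixel set: the outermost pixels of an h x w image."""
    return {(i, j) for i, j in product(range(h), range(w))
            if i in (0, h - 1) or j in (0, w - 1)}

def min_path_gray_diff(img, start):
    """Minimum over all 4-connected simple paths from `start` (an interior
    pixel of the second pixel set) to any border pixel of
    (max gray on path - min gray on path)."""
    h, w = len(img), len(img[0])
    border = border_pixels(h, w)
    best = [None]

    def dfs(p, lo, hi, seen):
        if best[0] is not None and hi - lo >= best[0]:
            return                      # prune: this path cannot improve
        if p in border:
            best[0] = hi - lo if best[0] is None else min(best[0], hi - lo)
            return
        i, j = p
        for q in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= q[0] < h and 0 <= q[1] < w and q not in seen:
                v = img[q[0]][q[1]]
                dfs(q, min(lo, v), max(hi, v), seen | {q})

    v = img[start[0]][start[1]]
    dfs(start, v, v, {start})
    return best[0]
```

On a 3x3 image whose center pixel is 50 amid a uniform border of 10, every path to the border must cross a 10-valued pixel, so the center pixel's new gray value is 40; when one side of the image offers a smoother path, the smaller of the path differences wins.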
In a possible embodiment, the apparatus further comprises a conversion unit for: and converting the initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray scale image, and n is less than or equal to 256.
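The conversion performed by the conversion unit above can be sketched as follows. The luminance weights and the quantization to n levels are common choices assumed here for illustration; the patent does not fix a particular conversion formula.

```python
def to_gray(rgb_image, n=256):
    """Convert an RGB image (rows of (r, g, b) tuples) to an n-level
    grayscale image, n <= 256, using the common luminance weights
    (an assumption; the patent does not prescribe the formula)."""
    out = []
    for row in rgb_image:
        out_row = []
        for r, g, b in row:
            gray = int(round(0.299 * r + 0.587 * g + 0.114 * b))  # 0..255
            out_row.append(gray * n // 256)  # quantize 256 levels down to n
        out.append(out_row)
    return out
```

For example, pure white maps to gray level 255 (or level 3 in a 4-level image) and pure black maps to 0.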
In a possible implementation manner, the processing unit 402 is further configured to: determine a gray threshold; set the gray values of all pixel points with gray values greater than the gray threshold in the second pixel set to a first preset value; and set the gray values of all pixel points with gray values smaller than or equal to the gray threshold in the second pixel set to a second preset value; the first preset value is greater than the second preset value.
In a possible implementation manner, the first preset value is a gray value 255; the second preset value is a gray value 0.
All relevant content of the steps involved in the foregoing embodiments of the image processing method may be incorporated into the functional descriptions of the corresponding functional modules of the image processing apparatus in the embodiments of the present application, and is not described herein again.
The division of modules in the embodiments of the present application is schematic and is merely a division by logical function; in actual implementation there may be other division manners. In addition, the functional modules in the embodiments of the present application may be integrated in one processor, may exist physically separately, or two or more modules may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to fig. 5, based on the same inventive concept, an embodiment of the present invention provides an image processing apparatus, which includes at least one processor 501, where the processor 501 is configured to execute a computer program stored in a memory, to implement steps of the image processing method shown in fig. 2 according to the embodiment of the present invention.
Optionally, the processor 501 may be a general-purpose processor such as a central processing unit (CPU), a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the image processing method disclosed in connection with the embodiments of the present invention may be executed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
Optionally, the image processing apparatus may further include a memory 502 connected to the at least one processor 501, where the memory 502 stores instructions executable by the at least one processor 501, and where the at least one processor 501 may execute the steps included in the foregoing image processing method by executing the instructions stored in the memory 502.
The embodiment of the present invention does not limit the specific connection medium between the processor 501 and the memory 502. The memory 502 may include at least one type of storage medium, for example flash memory, hard disk, multimedia card, card memory, random access memory (RAM), static random access memory (SRAM), programmable read-only memory (PROM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), magnetic memory, magnetic disk, optical disk, and the like. The memory 502 may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 502 in the embodiments of the present invention may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 501, the code corresponding to the image processing method described in the foregoing embodiments may be burned into the chip, so that the chip can execute the steps of the foregoing image processing method at run time. How to program the processor 501 is a technique well known to those skilled in the art and is not described herein. The physical devices corresponding to the determining unit 401 and the processing unit 402 may be the aforementioned processor 501. The image processing apparatus may be used to perform the method provided by the embodiment shown in fig. 2; therefore, for the functions that can be implemented by the functional modules of the apparatus, reference may be made to the corresponding description of the embodiment shown in fig. 2, which is not repeated here.
Based on the same inventive concept, embodiments of the present invention also provide a computer-readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform the steps of the image processing method as described above.
In some possible embodiments, aspects of the image processing method provided by the present application may also be implemented in the form of a program product comprising program code for causing an electronic device to carry out the steps of the image processing method according to the various exemplary embodiments of the present application described in this specification, when the program product is run on the electronic device.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (12)

1. An image processing method, comprising:
Determining a first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on an image to be processed;
determining a plurality of pixel paths from a first pixel point in a second pixel set to each pixel point in the first pixel set; the first pixel point is any pixel point in the second pixel set; the second pixel set is a set of pixel points which are remained in the image to be processed except the first pixel set;
determining a gray difference value corresponding to each pixel path in the plurality of pixel paths, wherein the gray difference value is a gray difference value between a pixel point with a maximum gray value and a pixel point with a minimum gray value on each pixel path;
And determining the minimum gray level difference value in the gray level differences corresponding to the pixel paths as the gray level value of the first pixel point.
2. The method of claim 1, wherein prior to the determining the first set of pixels, the method further comprises:
And converting the initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray scale image, and n is less than or equal to 256.
3. The method of claim 1, wherein the peripheral region is: and the set of the pixel points at the outermost layer on the image to be processed.
4. The method of claim 1, wherein the method further comprises:
determining a gray threshold;
setting the gray values of all pixel points with gray values larger than the gray threshold value in the second pixel set as a first preset value;
setting the gray values of all pixel points with gray values smaller than or equal to the gray threshold value in the second pixel set as a second preset value; the first preset value is greater than the second preset value.
5. The method of claim 4, wherein the first preset value is a gray value 255; the second preset value is a gray value 0.
6. An image processing apparatus, comprising:
The determining unit is used for determining a first pixel set, wherein the first pixel set is a set of all pixel points in a peripheral area on an image to be processed;
The determining unit is further configured to determine a plurality of pixel paths between a first pixel point in the second pixel set and each pixel point in the first pixel set; the first pixel point is any pixel point in the second pixel set; the second pixel set is a set of pixel points which are remained in the image to be processed except the first pixel set;
The processing unit is used for determining a gray level difference value corresponding to each pixel path in the plurality of pixel paths, wherein the gray level difference value is a gray level difference value between a pixel point with the maximum gray level value and a pixel point with the minimum gray level value on each pixel path;
The processing unit is further configured to determine that a minimum gray difference value among the gray difference values corresponding to the pixel paths is a gray value of the first pixel point.
7. The apparatus of claim 6, further comprising a conversion unit to:
And converting the initial image into the image to be processed, wherein the initial image is a color image, the image to be processed is an n-level gray scale image, and n is less than or equal to 256.
8. The apparatus of claim 6, wherein the processing unit is further to:
determining a gray threshold;
setting the gray values of all pixel points with gray values larger than the gray threshold value in the second pixel set as a first preset value;
setting the gray values of all pixel points with gray values smaller than or equal to the gray threshold value in the second pixel set as a second preset value; the first preset value is greater than the second preset value.
9. The apparatus of claim 8, wherein the first preset value is a gray value 255; the second preset value is a gray value 0.
10. An image processing apparatus, characterized in that the apparatus comprises: a processor and a memory;
the memory is configured to store computer-executable instructions that, when executed by the processor, cause the image processing apparatus to perform the method of any of claims 1-5.
11. A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any of claims 1-5.
12. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of claims 1-5.
CN202010908464.8A 2020-09-02 2020-09-02 Image processing method and device Active CN112149674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010908464.8A CN112149674B (en) 2020-09-02 2020-09-02 Image processing method and device

Publications (2)

Publication Number Publication Date
CN112149674A CN112149674A (en) 2020-12-29
CN112149674B true CN112149674B (en) 2024-07-12

Family

ID=73889874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010908464.8A Active CN112149674B (en) 2020-09-02 2020-09-02 Image processing method and device

Country Status (1)

Country Link
CN (1) CN112149674B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205497B (en) * 2021-04-30 2022-06-07 江苏迪业检测科技有限公司 Image processing method, device, equipment and medium for double-wire type image quality meter

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104010129A (en) * 2014-04-23 2014-08-27 小米科技有限责任公司 Image processing method, device and terminal
CN106530337A (en) * 2016-10-31 2017-03-22 武汉市工程科学技术研究院 Non local stereopair dense matching method based on image gray scale guiding

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
KR100254736B1 (en) * 1997-10-28 2000-05-01 윤종용 Apparatus and method for 1-dimension local adaptive image binarization
KR100537829B1 (en) * 2003-04-24 2005-12-19 주식회사신도리코 Method for segmenting Scan Image
US20080247649A1 (en) * 2005-07-07 2008-10-09 Chun Hing Cheng Methods For Silhouette Extraction
CN108921869B (en) * 2018-06-29 2021-05-25 新华三信息安全技术有限公司 Image binarization method and device
CN108961291A (en) * 2018-08-10 2018-12-07 广东工业大学 A kind of method of Image Edge-Detection, system and associated component
CN109903294B (en) * 2019-01-25 2020-05-29 北京三快在线科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN111179182B (en) * 2019-11-21 2023-10-03 珠海格力智能装备有限公司 Image processing method and device, storage medium and processor

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN104010129A (en) * 2014-04-23 2014-08-27 小米科技有限责任公司 Image processing method, device and terminal
CN106530337A (en) * 2016-10-31 2017-03-22 武汉市工程科学技术研究院 Non local stereopair dense matching method based on image gray scale guiding

Also Published As

Publication number Publication date
CN112149674A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN109117848B (en) Text line character recognition method, device, medium and electronic equipment
Bhargavi et al. A survey on threshold based segmentation technique in image processing
CN110991465B (en) Object identification method, device, computing equipment and storage medium
CN103366167B (en) System and method for processing image for identifying alphanumeric characters present in a series
US10423852B1 (en) Text image processing using word spacing equalization for ICR system employing artificial neural network
US8244044B2 (en) Feature selection and extraction
US9239948B2 (en) Feature descriptor for robust facial expression recognition
US8103058B2 (en) Detecting and tracking objects in digital images
CN105184763A (en) Image processing method and device
Benbihi et al. Elf: Embedded localisation of features in pre-trained cnn
CN116168017B (en) Deep learning-based PCB element detection method, system and storage medium
CN110599516A (en) Moving target detection method and device, storage medium and terminal equipment
CN111914668A (en) Pedestrian re-identification method, device and system based on image enhancement technology
CN112149674B (en) Image processing method and device
CN112347957A (en) Pedestrian re-identification method and device, computer equipment and storage medium
CN111325199B (en) Text inclination angle detection method and device
Gou et al. License plate recognition using MSER and HOG based on ELM
CN111199228A (en) License plate positioning method and device
CN112102353B (en) Moving object classification method, apparatus, device and storage medium
CN112132150B (en) Text string recognition method and device and electronic equipment
CN113850166A (en) Ship image identification method and system based on convolutional neural network
CN113807407A (en) Target detection model training method, model performance detection method and device
Ilas Improved binary HOG algorithm and possible applications in car detection
Sulimowicz et al. “Rapid” regions-of-interest detection in big histopathological images
CN117726805A (en) Method and device for identifying target object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant