CN112861873A - Method for processing image with cigarette case - Google Patents


Info

Publication number
CN112861873A
CN112861873A (application CN202110007810.XA)
Authority
CN
China
Prior art keywords
edge
image
target object
target
tortuosity
Prior art date
Legal status
Granted
Application number
CN202110007810.XA
Other languages
Chinese (zh)
Other versions
CN112861873B (en)
Inventor
刘刚
汪丹丹
刘强
黄金娜
高智敏
赵永江
叶展
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110007810.XA
Publication of CN112861873A
Application granted
Publication of CN112861873B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • G06T7/44Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a method for processing an image with a cigarette case, which comprises the following steps: acquiring a target image, wherein the target image comprises a target object; extracting a fine edge and a coarse edge of the target object, wherein the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the coarse edge is an edge whose tortuosity is not higher than the target tortuosity; and determining an edge tortuosity feature of the target object based on the fine edge and the coarse edge. In this way, the electronic device can extract the fine edge and the coarse edge of the target object and then determine the edge tortuosity feature of the target object from them. Because the edge tortuosity feature describes the edge shape of the target object, the feature can be determined for the target object in the image, which also facilitates the subsequent image processing process.

Description

Method for processing image with cigarette case
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for processing an image with a cigarette case.
Background
The edge tortuosity of a target object in an image accurately describes the shape of the object, so it can be used as an image feature in scenes such as image recognition and image retrieval.
For example, the main difference between the characters in the image of a genuine product shown in fig. 1(a) and the characters in the image of a counterfeit product shown in fig. 1(b) is the tortuosity of their edges, so authenticity can be identified based on the edge tortuosity of the characters.
Therefore, in order to facilitate image processing procedures such as image recognition and image retrieval, a method capable of extracting the edge tortuosity of a target object in an image is required.
Disclosure of Invention
An object of an embodiment of the present invention is to provide a method for processing an image with a target object, so as to determine an edge tortuosity feature of the target object in the image. The specific technical scheme is as follows:
the embodiment of the invention provides a method for processing an image with a target object, which comprises the following steps:
acquiring a target image, wherein the target image comprises a target object;
extracting a fine edge and a coarse edge of the target object, wherein the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the coarse edge is an edge whose tortuosity is not higher than the target tortuosity;
based on the fine edge and the coarse edge, edge tortuosity features of the target object are determined.
Optionally, the step of extracting the fine edge and the coarse edge of the target object includes:
randomly selecting one of the color components of the target image as a target color component, and extracting a fine edge and a coarse edge of the target object based on the target color component; or, alternatively,
counting a histogram of each color component of the target image; determining a target color component from the color components based on those histograms; and extracting a fine edge and a coarse edge of the target object based on the target color component.
Optionally, the step of determining a target color component from the color components based on the histogram of each color component includes:
for the histogram of each color component, dividing the pixel values represented by the histogram of the color component into a first type of pixel values and a second type of pixel values by adopting a maximum inter-class variance method;
respectively calculating the average value of the first-type pixel values and the average value of the second-type pixel values, and calculating the difference between these two average values;
and determining the color component with the maximum corresponding difference value as the target color component.
Optionally, the step of extracting the fine edge and the coarse edge of the target object based on the target color component includes:
carrying out binarization processing on the target color component to obtain a binarized image;
and performing erosion processing and dilation processing on the binarized image according to a preset processing rule, and determining a fine edge and a coarse edge of the target object based on the processed image.
Optionally, the step of performing erosion processing and dilation processing on the binarized image according to a preset processing rule, and determining a fine edge and a coarse edge of the target object based on the processed image includes:
performing erosion processing and then dilation processing on the binarized image to obtain a reference image;
performing dilation processing on the reference image to obtain a first image, and determining the fine edge of the target object based on the difference between the pixel values of the first image and the reference image;
performing dilation processing and then erosion processing on the reference image to obtain a second image;
and performing dilation processing on the second image to obtain a third image, and determining the coarse edge of the target object based on the difference between the pixel values of the third image and the second image.
Optionally, the step of performing binarization processing on the target color component to obtain a binarized image includes:
respectively calculating the number of pixel points corresponding to the first type of pixel value and the second type of pixel value;
setting the pixel values of the pixel points in the class with the smaller count to a first pixel value, and setting the pixel values of the remaining pixel points to a second pixel value, to obtain a binarized image, wherein the first pixel value is one of 255 and 0, and the second pixel value is the other of 255 and 0.
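As a rough illustration of this rule (the function name and NumPy usage are ours, not part of the disclosure), the class with fewer pixel points becomes the foreground; `thrn` is assumed to be the threshold produced by the maximum inter-class variance step described below:

```python
import numpy as np

def binarize_minority_foreground(component: np.ndarray, thrn: int) -> np.ndarray:
    """Binarize one color component: the class with fewer pixel points
    gets pixel value 255, the rest get 0 (the patent allows either
    assignment of 255/0)."""
    above = component > thrn                   # first class: values above thrn
    n_above = int(above.sum())
    out = np.empty(component.shape, dtype=np.uint8)
    if n_above < component.size - n_above:     # "above" is the smaller class
        out[above], out[~above] = 255, 0
    else:
        out[above], out[~above] = 0, 255
    return out
```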
Optionally, the step of determining the edge tortuosity feature of the target object based on the fine edge and the coarse edge includes:
respectively calculating the number of pixel points included in the fine edge and the coarse edge;
determining an edge tortuosity feature for the target object based on the quantity.
Optionally, the step of determining the edge tortuosity feature of the target object based on the number includes:
determining an edge tortuosity feature of the target object according to the following formula:
feature = k × max(0, sum1 - sum2) / sum2
where k is a preset feature parameter, sum1 is the number of pixels included in the fine edge, and sum2 is the number of pixels included in the coarse edge.
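Purely for illustration, the formula above reads directly as code; the default k = 1.0 is an assumed value, not specified by the patent:

```python
def edge_tortuosity_feature(sum1: int, sum2: int, k: float = 1.0) -> float:
    """feature = k * max(0, sum1 - sum2) / sum2, where sum1 and sum2 are
    the pixel counts of the fine and coarse edges respectively."""
    return k * max(0, sum1 - sum2) / sum2
```

A fine edge with many more pixel points than the coarse edge thus yields a large feature value, and the feature is clamped to zero when the fine edge is the smaller of the two.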
Optionally, after the step of determining the edge tortuosity feature of the target object based on the fine edge and the coarse edge, the method further includes:
calculating the average value and the variance of the edge tortuosity characteristics corresponding to the target images;
under the condition that an image to be recognized is obtained, determining the edge tortuosity characteristic of a target object included in the image to be recognized as a characteristic to be recognized;
and identifying the target object included in the image to be identified based on the feature to be identified, the average value and the variance to obtain an identification result.
Optionally, the identifying the target object included in the image to be identified based on the feature to be identified, the average value, and the variance to obtain an identification result includes:
identifying a target object included in the image to be identified according to the following formula to obtain a probability value P, and determining an identification result according to the probability value P:
P = exp(-(temp_f1 - mean)^2 / K / var)
wherein temp_f1 is the feature to be identified, mean is the average value, K is a preset probability parameter, and var is the variance.
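As an illustrative sketch (the function name is ours), the probability formula above maps directly to:

```python
import math

def recognition_probability(temp_f1: float, mean: float, var: float, K: float) -> float:
    """P = exp(-(temp_f1 - mean)^2 / K / var): P approaches 1 as the
    feature to be identified approaches the average of the reference
    features, and decays with the squared distance from it."""
    return math.exp(-((temp_f1 - mean) ** 2) / K / var)
```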
The embodiment of the invention has the following beneficial effects:
in the scheme provided by the embodiment of the invention, the electronic device can acquire a target image, wherein the target image comprises a target object; extract a fine edge and a coarse edge of the target object, wherein the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the coarse edge is an edge whose tortuosity is not higher than the target tortuosity; and determine an edge tortuosity feature of the target object based on the fine edge and the coarse edge. In this way, because the edge tortuosity feature describes the edge shape of the target object, the feature can be determined for the target object in the image, which also facilitates the subsequent image processing process. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings needed in the description of the embodiments and the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1(a) is a schematic diagram of an image of a genuine article in the related art;
FIG. 1(b) is a schematic image of a counterfeit product in the related art;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a determination of a target color component based on the embodiment shown in FIG. 2;
FIG. 4(a) is a schematic diagram of an image corresponding to the color component Y based on the embodiment shown in FIG. 3;
FIG. 4(b) is a histogram of the color component Y shown in FIG. 4 (a);
FIG. 4(c) is a filtered histogram corresponding to the histogram shown in FIG. 4 (b);
FIG. 5 is a flow chart of the manner in which fine and coarse edges are extracted based on the embodiment shown in FIG. 2;
FIG. 6(a) is a schematic diagram of a binarized image based on the embodiment shown in FIG. 5;
FIG. 6(b) is a schematic diagram of a reference image corresponding to the binarized image shown in FIG. 6 (a);
FIG. 6(c) is a schematic diagram of the fine edges corresponding to the binarized image shown in FIG. 6 (a);
FIG. 6(d) is a schematic diagram of a second image corresponding to the reference image shown in FIG. 6 (b);
FIG. 6(e) is a schematic diagram of the corresponding rough edges of the binarized image shown in FIG. 6 (a);
FIG. 7 is a flow chart of an image recognition method according to the embodiment shown in FIG. 2;
fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to achieve the purpose of determining the edge tortuosity feature of a target object in an image, the embodiment of the invention provides a processing method and device for the image with the target object, an electronic device, a computer readable storage medium and a computer program product. First, a method for processing an image with a target object according to an embodiment of the present invention will be described. The embodiment of the invention provides a method for processing an image with a target object, namely a method for processing an image with a cigarette case.
The method for processing an image with a target object provided in the embodiments of the present invention may be applied to any electronic device that needs to perform image processing, for example, a processor, a computer, a mobile phone, and the like, which is not specifically limited herein. For clarity of description, the executing device is hereinafter referred to as the electronic device.
As shown in fig. 2, a method of processing an image having a target object, the method comprising:
s201, acquiring a target image;
wherein the target image comprises a target object.
S202, extracting a fine edge and a rough edge of the target object;
wherein the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the coarse edge is an edge whose tortuosity is not higher than the target tortuosity;
s203, determining the edge tortuosity feature of the target object based on the fine edge and the rough edge.
Therefore, in the scheme provided by the embodiment of the invention, the electronic device can acquire the target image, wherein the target image comprises the target object; extract a fine edge and a coarse edge of the target object, wherein the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the coarse edge is an edge whose tortuosity is not higher than the target tortuosity; and determine an edge tortuosity feature of the target object based on the fine edge and the coarse edge. Because the edge tortuosity feature describes the edge shape of the target object, this realizes the determination of the edge tortuosity feature of the target object in the image and facilitates the subsequent image processing process.
When the edge tortuosity feature determination needs to be performed on the image, the electronic device may acquire the image, that is, perform step S201 described above. The target image is an image that needs to be subjected to edge tortuosity feature determination, and may be an image of a product package, a monitoring image, an image that needs to be retrieved, and the like, which is not specifically limited herein.
The target image comprises a target object, which can be an object with a tortuous edge, such as a character or a pattern. The edge tortuosity feature of the target object can therefore be used as a feature for identifying the target image, which facilitates various subsequent processing, such as identifying the authenticity of a product.
The method for processing an image with a target object provided by the embodiment of the invention can be used to identify the authenticity of articles such as tobacco, cosmetics, and drinks. In such applications, the target image is an image of the corresponding package, such as a cigarette case, a cosmetic packaging box, or a wine packaging box, and the target object included therein is an object with a tortuous edge on that package, such as a character or a pattern; the embodiment of the present invention is not specifically limited herein.
After acquiring the target image, the electronic device may perform step S202, i.e., extract the fine edge and the coarse edge of the target object. The fine edge is an edge of the target object whose tortuosity is higher than the target tortuosity, and the coarse edge is an edge whose tortuosity is not higher than the target tortuosity.
That is, the electronic device may extract edges of the target object in the target image by using an image edge extraction method; an edge whose tortuosity is higher than the target tortuosity serves as the fine edge, and an edge whose tortuosity is not higher than the target tortuosity serves as the coarse edge.
The fine edge and the coarse edge represent the edge at two different levels of tortuosity, so the edge characteristics of the target object can be described at both levels. The electronic device can then determine the edge tortuosity feature of the target object based on the fine edge and the coarse edge, that is, perform step S203. In one embodiment, the electronic device may fuse the fine edge and the coarse edge and use the result of the fusion as the edge tortuosity feature of the target object.
As an implementation manner of the embodiment of the present invention, the step of extracting the fine edge and the coarse edge of the target object includes:
randomly selecting one of the color components of the target image as a target color component, and extracting a fine edge and a rough edge of the target object based on the target color component; or, counting the histogram of each color component of the target image; determining a target color component from the color components based on the histogram of the color components; based on the target color component, a fine edge and a coarse edge of the target object are extracted.
As an embodiment, for convenience of calculation, the electronic device may randomly select one of the color components of the target image as the target color component, for example, may randomly select one of Y, U, V color components of the target image as the target color component, which may be a Y component, a U component, or a V component. Further, based on the target color component, a fine edge and a coarse edge of the target object are extracted.
As another embodiment, because the pixel points of an image differ by different amounts under different color components, the degree of edge tortuosity expressed by the target object, and hence the accuracy of the determined edge tortuosity feature, also varies between color components; the electronic device may therefore count the histogram of each color component of the target image.
The histogram is an array with a length of 256; its abscissa is the value range [0, 255] of the color component, and its ordinate is the number of pixel points in the image whose color-component value equals each abscissa value. The histogram thus represents the distribution of the number of pixel points over the values of the color component, i.e. the differences between the color-component values of the pixel points can be seen from it. Further, based on the histogram of each color component, the electronic device may determine a target color component from the color components, and extract a fine edge and a coarse edge of the target object based on the target color component.
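For illustration only (the function name is ours), such a histogram can be computed with NumPy:

```python
import numpy as np

def component_histogram(component: np.ndarray) -> np.ndarray:
    """256-bin histogram of an 8-bit color component: entry v counts the
    pixel points whose component value equals v."""
    return np.bincount(component.ravel(), minlength=256)
```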
As can be seen, in this embodiment, the electronic device may randomly select one of the color components of the target image as the target color component; or count the histogram of each color component of the target image, determine a target color component from the color components based on those histograms, and extract the fine edge and the coarse edge of the target object based on the target color component. Either manner improves the calculation efficiency, so that the fine edge and the coarse edge of the target object can be extracted quickly.
As an implementation manner of the embodiment of the present invention, as shown in fig. 3, the step of determining the target color component from the color components based on the histogram of the color components may include:
s301, aiming at the histogram of each color component, dividing the pixel values represented by the histogram of the color component into a first type pixel value and a second type pixel value by adopting a maximum inter-class variance method;
to determine the difference between each color component of the pixel points of the target image, the electronic device may divide the pixel values represented by the histogram of each color component into a first type of pixel values and a second type of pixel values using a maximum inter-class variance method.
Specifically, for the color component Y, the electronic device may traverse a segmentation threshold thr over the value range 0 to 255. Pixel points whose Y value is smaller than thr form one class, whose mean square deviation var1 is calculated; pixel points whose Y value is greater than thr form the other class, whose mean square deviation var2 is calculated. A score is then computed for each threshold according to the scoring formula score1[thr] = var1 + var2, with thr taken from 0 to 255 in sequence, yielding an array score1 of length 256, that is, 256 scores. Further, score1 may be mean-filtered with a preset radius to obtain a filtered array score2.
The electronic device may then look for the minimum value among the 256 scores in score2. If a unique minimum exists, the threshold thrn corresponding to it is recorded; if there are multiple minima, the average value thrn of the thresholds corresponding to those minima is calculated and recorded. The electronic device may then divide the pixel points into two classes according to the threshold thrn: pixel points whose pixel value (color component Y) is greater than thrn form one class, and the remaining pixel points form the other. The pixel values (color component Y) of these two classes of pixel points are the first-type pixel values and the second-type pixel values.
For the histograms of the color component U and the color component V, the manner of dividing the histogram into the first type pixel value and the second type pixel value is the same as the manner corresponding to the histogram of the color component Y, except that the pixel value of the pixel is the color component U or the color component V of the pixel, which is not described herein again.
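The threshold-selection procedure above might be sketched as follows; note two assumptions: the patent's "mean square deviation" is read here as the standard deviation of each class, and the mean-filter radius of 2 matches the preset radius mentioned later. Function and variable names are illustrative:

```python
import numpy as np

def split_threshold(component: np.ndarray, radius: int = 2) -> float:
    """For each candidate threshold thr in 0..255, score1[thr] sums the
    deviations of the two classes (values below and above thr); score1
    is mean-filtered into score2, and the minimising threshold is
    returned, averaging tied minima."""
    vals = component.ravel().astype(np.float64)
    score1 = np.empty(256)
    for thr in range(256):
        low, high = vals[vals < thr], vals[vals > thr]
        var1 = low.std() if low.size else 0.0
        var2 = high.std() if high.size else 0.0
        score1[thr] = var1 + var2
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    score2 = np.convolve(score1, kernel, mode="same")  # filtered scores
    minima = np.flatnonzero(score2 == score2.min())
    return float(minima.mean())                        # average tied minima
```

On a strongly bimodal component the returned threshold lands between the two modes, which is the behaviour the division into first-type and second-type pixel values relies on.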
S302, calculating the average values of the first type of pixel values and the second type of pixel values respectively, and calculating the difference value between the average value of the first type of pixel values and the average value of the second type of pixel values;
after determining the first type of pixel value and the second type of pixel value, the electronic device may calculate an average value of the first type of pixel value and the second type of pixel value, which may be denoted as mean1 and mean2, respectively. Further, the electronic device may calculate a difference between the average value of the first type of pixel values and the average value of the second type of pixel values, and may obtain differences dis _ y, dis _ u, and dis _ v corresponding to the color component Y, U, V, respectively.
S303, determine the color component with the largest difference as the target color component.
The larger the difference between the average values of the first-type and second-type pixel values obtained by the maximum inter-class variance division, the larger the difference between the pixel values of the target image under that color component, and the more clearly and accurately the edge tortuosity of the target object is expressed. Therefore, in order to improve the accuracy of the edge tortuosity feature of the target object, the electronic device may determine the color component corresponding to the maximum value among dis_y, dis_u, and dis_v as the target color component.
For example, if the largest value among dis_y, dis_u, and dis_v is dis_y, the color component Y may be determined as the target color component.
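A minimal sketch of this selection step (names are ours; the per-component split thresholds are assumed to have been produced already by the maximum inter-class variance step):

```python
import numpy as np

def class_mean_gap(component: np.ndarray, thrn: float) -> float:
    """|mean(first class) - mean(second class)| for a given split
    threshold; both classes are assumed non-empty."""
    vals = component.ravel().astype(np.float64)
    first, second = vals[vals > thrn], vals[vals <= thrn]
    return abs(first.mean() - second.mean())

def select_target_component(components: dict, thresholds: dict) -> str:
    """Return the key of the component whose class-mean gap (dis_y,
    dis_u, dis_v above) is largest."""
    return max(components, key=lambda n: class_mean_gap(components[n], thresholds[n]))
```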
As can be seen, in this embodiment, for the histogram of each color component, the electronic device may divide the pixel values represented by the histogram of the color component into the first-class pixel value and the second-class pixel value by using the maximum inter-class variance method, calculate the average values of the first-class pixel value and the second-class pixel value, respectively, calculate the difference between the average value of the first-class pixel value and the average value of the second-class pixel value, and determine the color component with the largest difference as the target color component, so that the accuracy of the subsequently determined edge tortuosity feature of the target object may be improved.
As an implementation manner of the embodiment of the present invention, in order to further improve the accuracy of the difference, after obtaining the histogram of each color component of the target image, filtering processing may be performed on the histogram of each color component to obtain a filtered histogram, and the filtered histogram may be used to determine the target color component.
In one embodiment, the histogram of each color component may be filtered by a one-dimensional mean filter with a radius of 2. For example, for the image corresponding to the color component Y shown in fig. 4(a), the electronic device may count the histogram of the color component Y shown in fig. 4(b), and perform one-dimensional filtering with a radius of 2 on that histogram to obtain the filtered histogram shown in fig. 4(c).
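The one-dimensional mean filtering with radius 2 might look as follows; clipping the window at the array borders is one possible boundary handling, which the patent does not specify:

```python
import numpy as np

def mean_filter_1d(hist: np.ndarray, radius: int = 2) -> np.ndarray:
    """One-dimensional mean filter: each entry becomes the average of
    its neighbourhood of the given radius, clipped at the borders."""
    out = np.empty(len(hist), dtype=np.float64)
    for i in range(len(hist)):
        lo, hi = max(0, i - radius), min(len(hist), i + radius + 1)
        out[i] = hist[lo:hi].mean()
    return out
```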
As an implementation manner of the embodiment of the present invention, the step of extracting the fine edge and the coarse edge of the target object based on the target color component may include:
performing binarization processing on the target color component to obtain a binarized image; and performing erosion processing and dilation processing on the binarized image according to a preset processing rule, and determining a fine edge and a coarse edge of the target object based on the processed image.
Because a binarized image is composed of only black and white, its color contrast is the highest and it represents the edge tortuosity of the target object most clearly; therefore, the electronic device may perform binarization processing on the target color component to obtain a binarized image.
The specific mode of the binarization processing may be any binarization processing mode in the field of image processing, and is not specifically limited and described herein.
After obtaining the binarized image, the electronic device may perform erosion processing and expansion processing on the binarized image according to a preset processing rule. The erosion processing calculates the minimum of the pixel values in a (2×r+1)×(2×r+1) square region of the image and replaces the pixel value of the pixel point at the center of the square with that minimum, where r is the erosion radius. The expansion processing calculates the maximum of the pixel values in a (2×r0+1)×(2×r0+1) square region of the image and replaces the pixel value of the pixel point at the center of the square with that maximum, where r0 is the expansion radius.
Therefore, the erosion processing can thin the lines in the image, making the edge tortuosity more obvious. On the contrary, the expansion processing can thicken the lines in the image, making the edge tortuosity less obvious. The processing rules can therefore be preset to determine the order and the corresponding radii of the erosion processing and the expansion processing, so that the edge tortuosity of the target object in the binarized image can be adjusted reasonably. Further, the electronic device may determine the fine edge and the coarse edge of the target object based on the processed image.
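The min/max definitions above can be written directly as a sketch. Border handling is not specified in the text, so edge-replication padding is assumed here; the function names are illustrative only.

```python
import numpy as np

def erode(img, r):
    """Erosion: replace each pixel with the minimum over a (2r+1)x(2r+1) square.
    The image is edge-padded so border pixels are handled (an assumption;
    the patent text does not specify border behavior)."""
    padded = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + 2 * r + 1, x:x + 2 * r + 1].min()
    return out

def dilate(img, r0):
    """Expansion (dilation): replace each pixel with the maximum over a
    (2r0+1)x(2r0+1) square, mirroring erode() above."""
    padded = np.pad(img, r0, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + 2 * r0 + 1, x:x + 2 * r0 + 1].max()
    return out
```

On a binarized image, eroding a 3×3 white square with radius 1 leaves only its center pixel white, and dilating that result with radius 1 restores a 3×3 white square, consistent with the "thin"/"fat" behavior described next.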
As an implementation manner of the embodiment of the present invention, as shown in fig. 5, the above-mentioned step of performing erosion processing and dilation processing on the binarized image according to a preset processing rule, and determining the fine edge and the coarse edge of the target object based on the processed image may include:
S501, performing erosion processing on the binarized image and then performing expansion processing to obtain a reference image;
After obtaining the binarized image, in order to accurately extract the fine edge and the coarse edge of the target object, the electronic device may perform erosion processing on the binarized image and then perform expansion processing, and the processed image may be used as the reference image.
Because the pixel values of the pixel points in the binarized image are only 0 and 255, during the erosion processing, as long as a pixel point with a pixel value of 0 exists in the square region, the pixel value of the pixel point at the center is replaced by 0. The target object in the eroded binarized image therefore becomes visually "thin", that is, the white part is reduced, so the edge tortuosity of the target object becomes more obvious.
On the contrary, during the expansion processing, as long as a pixel point with a pixel value of 255 exists in the square region, the pixel value of the pixel point at the center of the square region is replaced by 255. The target object in the expanded binarized image therefore becomes visually "fat", that is, the black part is reduced, so the edge tortuosity of the target object becomes less obvious.
The erosion processing is performed on the binarized image, and then the expansion processing is performed, so that the edge tortuosity of the target object can be relatively balanced, and the obtained image can be used as a reference image. The erosion radius and the expansion radius may be determined according to empirical values, processing requirements, and other factors, for example, the erosion radius may be 1, the expansion radius may be 2, and the like, which is not limited herein.
The binarized image shown in fig. 6(a) is subjected to erosion processing with an erosion radius of 1 and then to expansion processing with an expansion radius of 2, whereby a reference image shown in fig. 6(b) can be obtained.
S502, performing expansion processing on the reference image to obtain a first image, and determining a fine edge of the target object based on a difference value of pixel values of the first image and the reference image;
Furthermore, the electronic device may perform expansion processing on the reference image to obtain the first image. As is apparent from the above description, expansion pushes the edge of the target object outward, so the edge of the target object in the first image is expanded outward by one ring relative to the reference image.
The electronic device can then determine the fine edge of the target object based on the difference in pixel values between the first image and the reference image. Specifically, the electronic device may calculate the difference between the pixel values of the pixel points at corresponding positions in the first image and the reference image, and obtain the fine edge of the target object by using each difference as the pixel value of the corresponding pixel point.
For example, the electronic device may perform dilation processing on the reference image shown in fig. 6(b) with a dilation radius of 1 to obtain a first image, further calculate a difference between pixel values of pixel points corresponding to positions in the first image and the reference image, and obtain a fine edge of the target object by using the difference as the pixel value of the pixel point, as shown in fig. 6 (c).
S503, performing expansion processing and then erosion processing on the reference image to obtain a second image;
In order to determine the rough edge of the target object, the electronic device may perform expansion processing on the reference image and then perform erosion processing to obtain the second image. Since the rough edge is to be determined, the electronic device may first perform expansion processing on the reference image with a larger expansion radius, so that the edge of the target object is expanded outward by a larger ring. The expansion radius may be 4, 5, 6, and the like, and is not specifically limited herein.
Excessive expansion of the edge would greatly change the edge tortuosity of the target object and affect the accuracy of the finally determined edge tortuosity feature. To prevent this, the electronic device may perform erosion processing on the expanded image, thereby obtaining the second image.
In one embodiment, to avoid eroding too much, the expanded image may be subjected to erosion processing with a smaller erosion radius; the erosion radius may be 1, 2, and the like.
For example, the electronic device may perform an expansion process with an expansion radius of 5 on the reference image shown in fig. 6(b), and then perform an erosion process with an erosion radius of 1, thereby obtaining a second image shown in fig. 6 (d).
S504, performing expansion processing on the second image to obtain a third image, and determining a rough edge of the target object based on a difference value of pixel values of the third image and the second image.
After obtaining the second image, in order to obtain a more accurate rough edge, the electronic device may further perform a dilation process with a smaller dilation radius on the second image, so as to obtain a third image. The expansion radius may be 1, 2, etc., and is not particularly limited herein.
Next, the electronic device may determine a rough edge of the target object based on a difference in pixel values of the third image and the second image. Specifically, the electronic device may calculate a difference between pixel values of pixel points corresponding to positions in the third image and the second image, and may obtain the rough edge of the target object by using the difference as the pixel value of the pixel point.
For example, the electronic device may perform dilation processing on the second image shown in fig. 6(d) with a dilation radius of 1 to obtain a third image, further calculate a difference between pixel values of pixel points corresponding to positions in the third image and the second image, and obtain a rough edge of the target object by using the difference as the pixel value of the pixel point, as shown in fig. 6 (e).
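Steps S501 through S504 can be combined into one pipeline sketch. The radii below (erode 1 then dilate 2 for the reference image; dilate 5 then erode 1 for the second image; dilate 1 before each difference) follow the worked examples in the text but are not the only valid choices; the helper and function names are illustrative.

```python
import numpy as np

def _morph(img, r, op):
    """Min (erosion) or max (expansion) over a (2r+1)x(2r+1) square window,
    with edge-replication padding (border behavior is an assumption)."""
    padded = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = op(padded[y:y + 2 * r + 1, x:x + 2 * r + 1])
    return out

def extract_edges(binary, dilate_big=5):
    """Derive the fine and coarse edges of a binarized image (steps S501-S504)."""
    img = binary.astype(np.int16)  # signed type so differences don't wrap
    # S501: erode (r=1) then dilate (r=2) -> reference image
    reference = _morph(_morph(img, 1, np.min), 2, np.max)
    # S502: dilate the reference by 1; the pixel-value difference is the fine edge
    first = _morph(reference, 1, np.max)
    fine_edge = first - reference
    # S503: dilate by a larger radius, then erode slightly -> second image
    second = _morph(_morph(reference, dilate_big, np.max), 1, np.min)
    # S504: dilate the second image by 1; the difference is the coarse edge
    third = _morph(second, 1, np.max)
    coarse_edge = third - second
    return fine_edge, coarse_edge
```

Both outputs are one-pixel-wide rings around the (differently processed) object, which is what the pixel counts in the feature formula later operate on.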
As can be seen, in this embodiment, the electronic device performs binarization processing on the target color component to obtain a binarized image, performs erosion processing and then expansion processing on the binarized image to obtain a reference image, performs expansion processing on the reference image to obtain a first image, and determines the fine edge of the target object based on the difference between the pixel values of the first image and the reference image. The electronic device also performs expansion processing and then erosion processing on the reference image to obtain a second image, performs expansion processing on the second image to obtain a third image, and determines the rough edge of the target object based on the difference between the pixel values of the third image and the second image. Through this reasonable arrangement of erosion processing and expansion processing, the electronic device can determine accurate fine and rough edges, which in turn ensures that an accurate edge tortuosity feature of the target object is obtained.
As an implementation manner of the embodiment of the present invention, the step of performing binarization processing on the target color component to obtain a binarized image may include:
respectively calculating the number of pixel points corresponding to the first type of pixel value and the second type of pixel value; and setting the pixel values of the pixel points of the smaller class as first pixel values, and setting the pixel values of the remaining pixel points as second pixel values, so as to obtain the binarized image.
When binarization processing is performed, in order to ensure accuracy of the binarization processing, the electronic device may calculate the number of the pixel points corresponding to the first type of pixel value and the second type of pixel value, respectively, and then set the pixel value of the pixel point with the smaller number as the first pixel value, and set the pixel values of the other pixel points as the second pixel value, so as to obtain a binarized image.
Wherein the first pixel value is one of 255 and 0, and the second pixel value is one of 255 and 0 different from the first pixel value. In an embodiment, the electronic device may set the pixel value of the pixel point with the smaller number to 255, and set the pixel values of the remaining pixel points to 0, so as to obtain the binary image. In another embodiment, it is reasonable that the electronic device sets the pixel value of the pixel point with the smaller number to 0, and sets the pixel values of the remaining pixel points to 255 to obtain the binarized image.
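The minority-class assignment above can be sketched as follows. The class boundary is assumed to be given (for example, from the maximum inter-class variance split described earlier); the variant shown maps the smaller class to 255, and the function name is hypothetical.

```python
import numpy as np

def binarize_minority_white(component, threshold):
    """Binarize a single color component: the pixel points of the smaller class
    get pixel value 255, the rest get 0. Pixels below `threshold` form one
    class and the remaining pixels form the other (threshold source assumed)."""
    below = component < threshold
    n_below = int(below.sum())
    n_above = component.size - n_below
    minority_mask = below if n_below <= n_above else ~below
    out = np.zeros(component.shape, dtype=np.uint8)
    out[minority_mask] = 255
    return out
```

The opposite assignment (minority class to 0, remainder to 255) described in the other embodiment would simply invert `minority_mask`.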
As can be seen, in this embodiment, the electronic device may calculate the number of pixel points corresponding to the first type of pixel value and the second type of pixel value, respectively, set the pixel values of the pixel points of the smaller class as first pixel values, and set the pixel values of the remaining pixel points as second pixel values, so as to obtain the binarized image. In this way, the binarized image can be obtained conveniently and accurately.
As an implementation manner of the embodiment of the present invention, the step of determining the edge tortuosity feature of the target object based on the fine edge and the coarse edge may include:
respectively calculating the number of pixel points included in the fine edge and the coarse edge; determining an edge tortuosity feature for the target object based on the quantity.
After the fine edge and the coarse edge of the target object are obtained, the edge tortuosity feature can be determined from them: both edges represent the edge tortuosity of the target object, and both are composed of pixel points, so in one implementation the edge tortuosity feature of the target object can be identified by the number of pixel points included in the fine edge and the coarse edge.
Therefore, the electronic device may calculate the number of pixel points included in the fine edge and the coarse edge of the target object, respectively, and determine the edge tortuosity feature of the target object based on those numbers. For example, the difference between the numbers of pixel points included in the fine edge and the coarse edge may be used as the edge tortuosity feature of the target object, or the product of that difference and a preset parameter may be used; both are reasonable.
As can be seen, in this embodiment, the electronic device may calculate the number of pixel points included in the fine edge and the coarse edge, respectively, and determine the edge tortuosity feature of the target object based on the number. In this way, the electronic device can identify the edge tortuosity feature of the target object using the number of pixel points included in the fine edge and the coarse edge.
As an implementation manner of the embodiment of the present invention, the step of determining the edge tortuosity feature of the target object based on the number may include:
determining an edge tortuosity feature of the target object according to the following formula:
feature=k×max(0,sum1–sum2)/sum2
where k is a preset feature parameter, sum1 is the number of pixels included in the fine edge, and sum2 is the number of pixels included in the coarse edge. The preset characteristic parameter k may be set according to an empirical value, and may be, for example, 2000, 2500, 3000, etc., and is not specifically limited herein.
For example, if the number of pixels included in the fine edge is 2700, the number of pixels included in the coarse edge is 2500, and k is 2000, the edge tortuosity feature of the target object is 2000 × max(0, 2700 − 2500)/2500 = 160.
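The formula is a one-liner; the sketch below reproduces it with the document's symbols (sum1, sum2, k) so the worked example can be checked. The default k = 2000 is taken from the example, not a mandated value.

```python
def edge_tortuosity_feature(sum1, sum2, k=2000):
    """feature = k * max(0, sum1 - sum2) / sum2, where sum1 and sum2 are the
    pixel counts of the fine edge and the coarse edge respectively."""
    return k * max(0, sum1 - sum2) / sum2
```

With sum1 = 2700, sum2 = 2500, and k = 2000 this yields 160, matching the example above; when the fine edge has no more pixels than the coarse edge, the feature is 0.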
Therefore, in this embodiment, the electronic device may calculate the edge tortuosity feature according to the above formula, may quickly determine a numerical value of the edge tortuosity feature that can identify the target object, and facilitates a subsequent further image processing process based on the numerical value.
As an implementation manner of the embodiment of the present invention, as shown in fig. 7, after the step of determining the edge tortuosity feature of the target object based on the fine edge and the coarse edge, the method may further include:
S701, calculating the average value and the variance of the edge tortuosity features corresponding to a plurality of target images;
In order to facilitate subsequent product authenticity identification, image retrieval, and the like, the electronic device may acquire a plurality of target images, determine the edge tortuosity feature corresponding to each target image by the method described in any of the above embodiments, and then calculate the average value and variance of the edge tortuosity features corresponding to the plurality of target images. In one case, if applied in a product authentication scenario, the plurality of target images may be images of the packages of a plurality of genuine products.
Even if the target images include the same target object, the edge tortuosity of that object may differ due to differences in printing, shooting angle, lighting, and the like, so the obtained edge tortuosity feature values may differ. For example, when a plurality of target images are captured of the front surfaces of a plurality of genuine products, the edge tortuosity of the characters in those images may differ because of errors in the printing of the packages.
The electronic device may calculate the average value of the edge tortuosity features corresponding to the plurality of target images, which identifies the average level of those features, and their variance, which identifies how much the features fluctuate: the larger the variance, the larger the fluctuation among the edge tortuosity features corresponding to the target images, and the smaller the variance, the smaller the fluctuation.
In one embodiment, the electronic device may store the calculated mean and variance of the edge tortuosity features in a preset database for subsequent use.
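The statistics in S701 can be sketched directly. The text does not say whether the sample or the population variance is intended; the population form is assumed here, and the function name is illustrative.

```python
import numpy as np

def feature_statistics(features):
    """Mean and variance of the edge tortuosity features of a set of reference
    target images. Population variance (divide by N) is assumed, since the
    patent text does not specify which estimator is used."""
    arr = np.asarray(features, dtype=np.float64)
    return float(arr.mean()), float(arr.var())
```

The returned pair would then be stored in the preset database for use when an image to be recognized arrives.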
S702, under the condition that an image to be recognized is obtained, determining the edge tortuosity characteristic of a target object included in the image to be recognized as a characteristic to be recognized;
When an image needs to be recognized, the electronic device may acquire the image to be recognized, and then determine the edge tortuosity feature of the target object included in it as the feature to be recognized, in the manner described in any of the above embodiments.
For example, when the authenticity of the product a needs to be identified, an image of the package of the product a may be collected as an image to be identified, and then the edge tortuosity feature of the target object included in the image to be identified is determined as the feature to be identified by the manner described in any of the above embodiments.
And S703, identifying the target object included in the image to be identified based on the feature to be identified, the average value and the variance to obtain an identification result.
After the features to be recognized of the target objects included in the image to be recognized are determined, the electronic device may recognize the target objects included in the image to be recognized based on the features to be recognized and the average value and the variance of the edge tortuosity features corresponding to the plurality of target images, and further obtain a recognition result.
For example, if the feature to be identified is an edge tortuosity feature of a text in an image of a package of a product a to be identified, the electronic device may obtain an average value and a variance of the edge tortuosity feature of the text in the image of a plurality of genuine products packages stored in advance from the preset database. And then the authenticity of the product A can be determined by comparing the features to be identified with the average value and the variance.
For another example, if the feature to be recognized is the edge tortuosity feature of a character T in the image to be recognized, and it needs to be determined whether the character T is in a target font, for example regular script, the electronic device may obtain from the preset database the average value and variance of the edge tortuosity features of the character T in regular script in a plurality of pre-stored images. By comparing the feature to be recognized with the average value and the variance, whether the character T in the image to be recognized is in regular script can be determined.
As can be seen, in this embodiment, the electronic device may calculate an average value and a variance of edge tortuosity features corresponding to a plurality of target images, determine, when an image to be recognized is obtained, the edge tortuosity features of a target object included in the image to be recognized as features to be recognized, and further recognize, based on the features to be recognized, the average value and the variance, the target object included in the image to be recognized, so as to obtain a recognition result. Therefore, when the image to be recognized needs to be recognized, the recognition result can be obtained according to the edge tortuosity features of the target object included in the image to be recognized and the edge tortuosity features corresponding to the plurality of target images.
As an implementation manner of the embodiment of the present invention, the step of identifying the target object included in the image to be identified based on the feature to be identified, the average value, and the variance to obtain an identification result may include:
identifying a target object included in the image to be identified according to the following formula to obtain a probability value P, and determining an identification result according to the probability value P:
P=exp(-(temp_f1–mean)^2/K/var)
wherein temp _ f1 is a feature to be identified, mean is an average value of edge tortuosity features corresponding to the target images, K is a preset probability parameter, and var is a variance of the edge tortuosity features corresponding to the target images. The specific value of K may be set according to an empirical value or the like, and may be, for example, 15, 16, 20, or the like, which is not particularly limited herein.
After determining the probability value P according to the formula, the electronic device may compare the probability value P with a preset probability threshold to determine the recognition result. For example, if the preset probability threshold is 60%, then in an authenticity identification scenario, when the probability value P is higher than the threshold the recognition result may be determined to be a genuine product, and otherwise a counterfeit. For another example, if the preset probability threshold is 80%, then in a scenario of recognizing whether two images contain the same target object, when the probability value P is higher than the threshold the recognition result may be determined to be the same target object, and otherwise not.
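The probability formula and threshold comparison can be sketched as follows. The defaults K = 16 and threshold = 0.6 are examples taken from the text, not fixed values, and the function names are hypothetical.

```python
import math

def recognition_probability(temp_f1, mean, var, K=16):
    """P = exp(-(temp_f1 - mean)^2 / K / var): a Gaussian-like score that is 1
    when the feature to be recognized equals the reference mean and decays as
    it moves away, scaled by the reference variance."""
    return math.exp(-((temp_f1 - mean) ** 2) / K / var)

def recognize(temp_f1, mean, var, K=16, threshold=0.6):
    """Compare P against a preset probability threshold (0.6 here, as in the
    authenticity example)."""
    return recognition_probability(temp_f1, mean, var, K) >= threshold
```

A feature matching the stored mean scores P = 1 and passes any threshold below 1; a feature far from the mean relative to the variance scores near 0 and is rejected.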
Therefore, in this embodiment, the electronic device may identify the target object included in the image to be identified according to the formula to obtain the probability value P, and determine the identification result according to the probability value P, so that the electronic device may quickly and accurately determine the probability value P, and then determine the accurate identification result based on the probability value P.
Corresponding to the processing method of the image with the target object, the embodiment of the invention also provides a processing device of the image with the target object. The following describes an apparatus for processing an image with a target object according to an embodiment of the present invention.
As shown in fig. 8, an apparatus for processing an image having a target object, the apparatus comprising:
a target image obtaining module 810, configured to obtain a target image;
wherein the target image comprises a target object.
An edge extraction module 820 for extracting fine and coarse edges of the target object;
wherein the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the coarse edge is an edge of the target object whose tortuosity is not higher than the target tortuosity.
A tortuosity feature determination module 830 for determining an edge tortuosity feature of the target object based on the fine edge and the coarse edge.
Therefore, in the scheme provided by the embodiment of the invention, the electronic equipment can acquire the target image, wherein the target image comprises the target object; extract a fine edge and a rough edge of the target object, wherein the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the rough edge is an edge whose tortuosity is not higher than the target tortuosity; and determine the edge tortuosity feature of the target object based on the fine edge and the rough edge. Because the edge tortuosity feature can describe the edge shape of the target object, the determination of the edge tortuosity feature of the target object in the image is realized, which also facilitates the subsequent image processing process.
As an implementation manner of the embodiment of the present invention, the edge extraction module 820 may include:
a color component determination unit, configured to randomly select one of the color components of the target image as a target color component, and extract the fine edge and the coarse edge of the target object based on the target color component; or, alternatively,
configured to count a histogram of each color component of the target image; determine a target color component from the color components based on the histograms of the color components; and extract the fine edge and the coarse edge of the target object based on the target color component.
As an implementation manner of the embodiment of the present invention, the color component determination unit may include:
the pixel dividing subunit is used for dividing the pixel values represented by the histograms of the color components into a first type of pixel values and a second type of pixel values by adopting a maximum inter-class variance method aiming at the histogram of each color component;
the difference value calculating subunit is used for calculating the average values of the first type of pixel values and the second type of pixel values respectively and calculating the difference value between the average value of the first type of pixel values and the average value of the second type of pixel values;
and a color component determination subunit, configured to determine, as the target color component, the color component with the largest corresponding difference.
As an implementation manner of the embodiment of the present invention, the color component determination unit may include:
a binarization subunit, configured to perform binarization processing on the target color component to obtain a binarized image;
and the image processing subunit is used for carrying out erosion processing and expansion processing on the binary image according to a preset processing rule, and determining the fine edge and the rough edge of the target object based on the processed image.
As an implementation manner of the embodiment of the present invention, the image processing subunit may include:
the first processing subunit is configured to perform erosion processing on the binarized image and then perform expansion processing to obtain a reference image;
a fine edge determining subunit, configured to perform dilation processing on the reference image to obtain a first image, and determine a fine edge of the target object based on a difference between pixel values of the first image and the reference image;
the second processing subunit is configured to perform expansion processing on the reference image and then perform erosion processing to obtain a second image;
and the rough edge determining subunit is configured to perform dilation processing on the second image to obtain a third image, and determine a rough edge of the target object based on a difference between pixel values of the third image and the second image.
As an implementation manner of the embodiment of the present invention, the binarization subunit may include:
the quantity calculating subunit is used for respectively calculating the quantity of the pixel points corresponding to the first type of pixel value and the second type of pixel value;
and the assignment subunit is configured to set the pixel values of the pixel points of the smaller class as first pixel values, and set the pixel values of the remaining pixel points as second pixel values, to obtain the binarized image.
Wherein the first pixel value is one of 255 and 0, and the second pixel value is one of 255 and 0 different from the first pixel value.
As an implementation manner of the embodiment of the present invention, the tortuosity characteristic determining module 830 may include:
a number calculating unit, configured to calculate the number of pixel points included in the fine edge and the coarse edge respectively;
a tortuosity feature determination unit for determining an edge tortuosity feature of the target object based on the number.
As an implementation manner of the embodiment of the present invention, the tortuosity characteristic determination unit may include:
a tortuosity feature determination subunit, configured to determine an edge tortuosity feature of the target object according to the following formula:
feature=k×max(0,sum1–sum2)/sum2
where k is a preset feature parameter, sum1 is the number of pixels included in the fine edge, and sum2 is the number of pixels included in the coarse edge.
As an implementation manner of the embodiment of the present invention, the apparatus may further include:
a mean variance calculation module, configured to calculate a mean and a variance of edge tortuosity features corresponding to a plurality of target images after the edge tortuosity features of the target object are determined based on the fine edge and the coarse edge;
the device comprises a to-be-identified characteristic acquisition module, a to-be-identified characteristic acquisition module and a to-be-identified characteristic acquisition module, wherein the to-be-identified characteristic acquisition module is used for determining the edge tortuosity characteristic of a target object included in an image to be identified as the to-be-identified characteristic under the condition that the image to be identified is acquired;
and the image identification module is used for identifying the target object included in the image to be identified based on the feature to be identified, the average value and the variance to obtain an identification result.
As an implementation manner of the embodiment of the present invention, the image recognition module may include:
the image identification unit is used for identifying the target object included in the image to be identified according to the following formula to obtain a probability value P, and determining an identification result according to the probability value P:
P=exp(-(temp_f1–mean)^2/K/var)
wherein temp _ f1 is the feature to be identified, mean is the average value, K is a preset probability parameter, and var is the variance.
An embodiment of the present invention further provides an electronic device, as shown in fig. 9, which includes a processor 901, a communication interface 902, a memory 903, and a communication bus 904, where the processor 901, the communication interface 902, and the memory 903 complete mutual communication through the communication bus 904,
a memory 903 for storing computer programs;
the processor 901 is configured to implement the steps of the method for processing an image with a target object according to any one of the embodiments described above when executing the program stored in the memory 903.
Therefore, in the scheme provided by the embodiment of the invention, the electronic equipment can acquire the target image, wherein the target image comprises the target object; extract a fine edge and a rough edge of the target object, wherein the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the rough edge is an edge whose tortuosity is not higher than the target tortuosity; and determine the edge tortuosity feature of the target object based on the fine edge and the rough edge. Because the edge tortuosity feature can describe the edge shape of the target object, the determination of the edge tortuosity feature of the target object in the image is realized, which also facilitates the subsequent image processing process.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In a further embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the method for processing an image with a target object according to any of the above embodiments.
As can be seen, in the solution provided by the embodiment of the present invention, a computer program, when executed by a processor, can obtain a target image, where the target image includes a target object; extract a fine edge and a coarse edge of the target object, where the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the coarse edge is an edge of the target object whose tortuosity is not higher than the target tortuosity; and determine an edge tortuosity feature of the target object based on the fine edge and the coarse edge. In this way, the fine edge and the coarse edge of the target object can be extracted and the edge tortuosity feature of the target object determined based on them. Since the edge tortuosity feature can describe the edge shape of the target object, this both determines the edge tortuosity feature of the target object in the image and facilitates the subsequent image processing process.
In a further embodiment of the present invention, there is also provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method steps of processing an image with a target object as described in any of the above embodiments.
In the solution provided by the embodiment of the present invention, a computer program product, when run on a computer, can obtain a target image, where the target image includes a target object; extract a fine edge and a coarse edge of the target object, where the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the coarse edge is an edge of the target object whose tortuosity is not higher than the target tortuosity; and determine an edge tortuosity feature of the target object based on the fine edge and the coarse edge. In this way, the fine edge and the coarse edge of the target object can be extracted and the edge tortuosity feature of the target object determined based on them. Since the edge tortuosity feature can describe the edge shape of the target object, this both determines the edge tortuosity feature of the target object in the image and facilitates the subsequent image processing process.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a related manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, for the apparatus, electronic device, computer storage medium, and computer program product embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant details, reference may be made to the corresponding parts of the description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method of processing an image having a target object, the method comprising:
acquiring a target image, wherein the target image comprises a target object;
extracting a fine edge and a coarse edge of the target object, wherein the fine edge is an edge of the target object whose tortuosity is higher than a target tortuosity, and the coarse edge is an edge of the target object whose tortuosity is not higher than the target tortuosity;
based on the fine edge and the coarse edge, edge tortuosity features of the target object are determined.
2. The method of claim 1, wherein the step of extracting the fine edge and the coarse edge of the target object comprises:
randomly selecting one of the color components of the target image as a target color component, and extracting a fine edge and a coarse edge of the target object based on the target color component; or
counting histograms of color components of the target image; determining a target color component from the color components based on the histogram of the color components; based on the target color component, a fine edge and a coarse edge of the target object are extracted.
3. The method according to claim 2, wherein the step of determining a target color component from the color components based on the histogram of the color components comprises:
for the histogram of each color component, dividing the pixel values represented by the histogram of the color component into a first type of pixel values and a second type of pixel values by adopting a maximum inter-class variance method;
calculating the average value of the first type of pixel values and the average value of the second type of pixel values, and calculating the difference between the two average values;
and determining, as the target color component, the color component with the largest corresponding difference.
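For illustration, the component-selection procedure of claims 2-3 can be sketched in Python with NumPy (function names are illustrative; the maximum inter-class variance method is the Otsu method):

```python
import numpy as np

def otsu_threshold(channel):
    """Maximum inter-class variance (Otsu) threshold for one color component."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    total = channel.size
    cum = np.cumsum(hist)                       # class-0 pixel counts per threshold
    cum_mean = np.cumsum(hist * np.arange(256)) # running intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        between = w0 * w1 * (m0 - m1) ** 2      # inter-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

def pick_target_component(image):
    """Choose the color component whose Otsu split yields the largest
    gap between the two class means (claims 2-3)."""
    best_idx, best_gap = 0, -1.0
    for c in range(image.shape[2]):
        ch = image[..., c]
        t = otsu_threshold(ch)
        gap = abs(ch[ch >= t].mean() - ch[ch < t].mean())
        if gap > best_gap:
            best_gap, best_idx = gap, c
    return best_idx
```

A strongly bimodal component (e.g., a bright object on a dark background) produces the largest mean gap and is therefore selected.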
4. The method of claim 3, wherein the step of extracting the fine edge and the coarse edge of the target object based on the target color component comprises:
performing binarization processing on the target color component to obtain a binarized image;
and performing erosion processing and dilation processing on the binarized image according to a preset processing rule, and determining a fine edge and a coarse edge of the target object based on the processed image.
5. The method according to claim 4, wherein the step of performing erosion processing and dilation processing on the binarized image according to a preset processing rule, and determining the fine edge and the coarse edge of the target object based on the processed image comprises:
performing erosion processing and then dilation processing on the binarized image to obtain a reference image;
performing dilation processing on the reference image to obtain a first image, and determining the fine edge of the target object based on the difference of the pixel values of the first image and the reference image;
performing dilation processing and then erosion processing on the reference image to obtain a second image;
and performing dilation processing on the second image to obtain a third image, and determining the coarse edge of the target object based on the difference of the pixel values of the third image and the second image.
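For illustration, the erosion/dilation sequence of claim 5 can be sketched in Python with NumPy, assuming a 3×3 structuring element (the claim's "preset processing rule" does not fix the element size):

```python
import numpy as np

def dilate(img):
    """Binary 3x3 dilation: maximum over the 9 shifted copies of the image."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out = np.maximum(out, p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
    return out

def erode(img):
    """Binary 3x3 erosion, expressed as dilation of the complement."""
    return 1 - dilate(1 - img)

def fine_and_coarse_edges(binary):
    """Claim 5, sketched on a 0/1 binary image."""
    reference = dilate(erode(binary))   # erosion then dilation (opening)
    first = dilate(reference)
    fine = first - reference            # fine edge: pixel-value difference
    second = erode(dilate(reference))   # dilation then erosion (closing)
    third = dilate(second)
    coarse = third - second             # coarse edge: pixel-value difference
    return fine, coarse
```

On a plain square both edges coincide; on a jagged object the opening and closing differ, so the two edge maps diverge and feed the tortuosity feature of claim 7.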
6. The method according to claim 4, wherein the step of binarizing the target color component to obtain a binarized image comprises:
respectively calculating the number of pixel points corresponding to the first type of pixel value and the second type of pixel value;
and setting the pixel value of the pixel points in the less numerous class to a first pixel value and the pixel values of the remaining pixel points to a second pixel value, to obtain a binarized image, wherein the first pixel value is one of 255 and 0, and the second pixel value is the other of 255 and 0.
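For illustration, the binarization of claim 6 can be sketched as follows (assigning 255 to the minority class and 0 to the rest is one of the two assignments the claim permits; the threshold t is assumed to come from the Otsu split of claim 3):

```python
import numpy as np

def binarize_minority(channel, t):
    """Set the less numerous of the two pixel classes (split at
    threshold t) to 255 and the remaining pixels to 0."""
    above = channel >= t
    n_above = int(above.sum())
    minority = above if n_above < channel.size - n_above else ~above
    out = np.zeros(channel.shape, dtype=np.uint8)
    out[minority] = 255
    return out
```

Marking the minority class as foreground makes the (usually smaller) target object the white region regardless of whether it is brighter or darker than the background.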
7. The method of any of claims 1-6, wherein the step of determining an edge tortuosity feature of the target object based on the fine edge and the coarse edge comprises:
respectively calculating the number of pixel points included in the fine edge and the coarse edge;
determining an edge tortuosity feature for the target object based on the quantity.
8. The method of claim 7, wherein the step of determining the edge tortuosity feature of the target object based on the number comprises:
determining an edge tortuosity feature of the target object according to the following formula:
feature = k × max(0, sum1 - sum2)/sum2
where k is a preset feature parameter, sum1 is the number of pixels included in the fine edge, and sum2 is the number of pixels included in the coarse edge.
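For illustration, the feature formula of claim 8 in code form (the default k = 1.0 is an assumption; k is the preset feature parameter):

```python
def edge_tortuosity_feature(sum1, sum2, k=1.0):
    """feature = k * max(0, sum1 - sum2) / sum2, where sum1 and sum2
    are the pixel counts of the fine edge and the coarse edge."""
    return k * max(0, sum1 - sum2) / sum2
```

The max(0, ·) term clamps the feature to zero whenever the fine edge is no longer than the coarse edge, i.e., the edge shows no excess tortuosity.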
9. The method of any of claims 1-6, wherein after the step of determining an edge tortuosity feature of the target object based on the fine edge and the coarse edge, the method further comprises:
calculating the average value and the variance of the edge tortuosity features corresponding to the target images;
under the condition that an image to be recognized is obtained, determining the edge tortuosity characteristic of a target object included in the image to be recognized as a characteristic to be recognized;
and identifying the target object included in the image to be identified based on the feature to be identified, the average value and the variance to obtain an identification result.
10. The method according to claim 9, wherein the step of identifying the target object included in the image to be identified based on the feature to be identified, the average value and the variance to obtain an identification result comprises:
identifying a target object included in the image to be identified according to the following formula to obtain a probability value P, and determining an identification result according to the probability value P:
P = exp(-(temp_f1 - mean)^2/(K·var))
wherein temp_f1 is the feature to be identified, mean is the average value, K is a preset probability parameter, and var is the variance.
CN202110007810.XA 2021-01-05 2021-01-05 Method for processing image with cigarette case Active CN112861873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110007810.XA CN112861873B (en) 2021-01-05 2021-01-05 Method for processing image with cigarette case

Publications (2)

Publication Number Publication Date
CN112861873A true CN112861873A (en) 2021-05-28
CN112861873B CN112861873B (en) 2022-08-05

Family

ID=76001800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110007810.XA Active CN112861873B (en) 2021-01-05 2021-01-05 Method for processing image with cigarette case

Country Status (1)

Country Link
CN (1) CN112861873B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5754709A (en) * 1994-11-10 1998-05-19 Matsushita Electric Industrial Co., Ltd. Method and apparatus for gradation correction and image edge extraction
JP2006292503A (en) * 2005-04-08 2006-10-26 Sumitomo Electric Ind Ltd Method and device for flaw inspection
CN1885317A (en) * 2006-07-06 2006-12-27 上海交通大学 Adaptive edge detection method based on morphology and information entropy
CN1885314A (en) * 2006-07-11 2006-12-27 电子科技大学 Pre-processing method for iris image
CN1904545A (en) * 2004-07-30 2007-01-31 株式会社米姿托约 Method of measuring occluded features for high precision machine vision metrology
US20110164284A1 (en) * 2010-01-06 2011-07-07 Canon Kabushiki Kaisha Image processing apparatus and method
JP2011133954A (en) * 2009-12-22 2011-07-07 3D Media Co Ltd Edge extraction method and edge extraction device
CN105160661A (en) * 2015-08-19 2015-12-16 西安电子科技大学 Color image edge extraction method based on center pixel similarity weights
CN106204526A (en) * 2016-06-28 2016-12-07 华北电力大学(保定) A kind of condensation based on MATLAB image procossing is easily sent out position and is determined method
CN107945177A (en) * 2017-12-15 2018-04-20 日照职业技术学院 A kind of method that material is judged for robotic vision system detection
CN107977972A (en) * 2017-12-01 2018-05-01 浙江科技学院 A kind of image partition method
CN111353957A (en) * 2020-02-28 2020-06-30 北京三快在线科技有限公司 Image processing method, image processing device, storage medium and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Qing et al.: "Application of a Sub-Pixel Edge Extraction Method in Semiconductor Chip Positioning", Modern Information Technology (《现代信息科技》) *
ZHENG Qiumei et al.: "Image Retrieval Algorithm Based on Inner and Outer Edge Color Features", Journal of Engineering Graphics (《工程图学学报》) *

Also Published As

Publication number Publication date
CN112861873B (en) 2022-08-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant