WO2019233264A1 - Image processing method, computer-readable storage medium, and electronic device - Google Patents


Info

Publication number
WO2019233264A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, processed, pixels, edge, pixel
Prior art date
Application number
PCT/CN2019/087585
Other languages
English (en)
French (fr)
Inventor
陈岩
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to EP19815682.0A (EP3783564A4)
Publication of WO2019233264A1
Priority to US17/026,220 (US11430103B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06T 7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/13 Segmentation; edge detection
    • G06F 18/23 Pattern recognition; clustering techniques
    • G06F 18/241 Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06V 10/273 Image preprocessing; segmentation of patterns in the image field, removing elements interfering with the pattern to be recognised
    • G06V 10/30 Image preprocessing; noise filtering
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, by matching or filtering
    • G06T 2207/10004 Image acquisition modality; still image; photographic image
    • G06T 2207/10024 Image acquisition modality; color image
    • G06T 2207/30168 Subject of image; image quality inspection

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image processing method, a computer-readable storage medium, and an electronic device.
  • When a smart device captures an image, the light sensor of the camera samples the scene at a certain spatial frequency. If the subject has a similar spatial distribution, the distributions of the photosensitive element and the subject come close to coinciding, which produces moiré that disturbs the image. Conventionally, detecting moiré in an image requires transforming the image into another space, so detection efficiency is relatively low.
  • According to embodiments of the present application, an image processing method, a computer-readable storage medium, and an electronic device are provided.
  • An image processing method includes: performing edge detection on an acquired image to be processed to determine edge pixel points included in the image to be processed; counting the number of edge pixels included in the image to be processed as a first pixel number; and, if the first pixel number is greater than a first number threshold, determining that moiré exists in the image to be processed.
  • An image processing device includes:
  • an image acquisition module configured to acquire an image to be processed;
  • an edge detection module configured to perform edge detection on the image to be processed and determine edge pixel points included in the image to be processed;
  • a number counting module configured to count the number of edge pixels included in the image to be processed as a first pixel number; and
  • a moiré determining module configured to determine that moiré exists in the image to be processed if the first pixel number is greater than a first number threshold.
  • A computer-readable storage medium stores a computer program that, when executed by a processor, causes the following operations to be performed: performing edge detection on an acquired image to be processed to determine edge pixel points; counting the number of edge pixels included in the image to be processed as a first pixel number; and, if the first pixel number is greater than a first number threshold, determining that moiré exists in the image to be processed.
  • An electronic device includes a memory and a processor. The memory stores computer-readable instructions that, when executed by the processor, cause the processor to perform the following operations: performing edge detection on an acquired image to be processed to determine edge pixel points; counting the number of edge pixels included in the image to be processed as a first pixel number; and, if the first pixel number is greater than a first number threshold, determining that moiré exists in the image to be processed.
  • With the image processing method, the computer-readable storage medium, and the electronic device, edge detection can be performed on an image to be processed to determine the edge pixel points it contains. The first pixel number of edge pixels in the image to be processed is then counted, and whether moiré exists in the image to be processed is determined according to the first pixel number. In this way, no spatial transformation of the image to be processed is needed, moiré in the image can be detected quickly, and the efficiency of image processing is improved.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment.
  • FIG. 2 is a schematic diagram showing a moiré pattern in an image in an embodiment.
  • FIG. 3 is a flowchart of an image processing method in another embodiment.
  • FIG. 4 is a flowchart of an image processing method according to another embodiment.
  • FIG. 5 is a flowchart of an image processing method according to another embodiment.
  • FIG. 6 is a schematic diagram showing an edge line in an embodiment.
  • FIG. 7 is a flowchart of an image processing method according to another embodiment.
  • FIG. 8 is a schematic diagram of the smallest rectangular area in one embodiment.
  • FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment.
  • FIG. 10 is a schematic structural diagram of an image processing apparatus in another embodiment.
  • FIG. 11 is a schematic diagram of an image processing circuit in an embodiment.
  • The terms “first”, “second”, and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another.
  • the first client may be referred to as the second client, and similarly, the second client may be referred to as the first client. Both the first client and the second client are clients, but they are not the same client.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment. As shown in FIG. 1, the image processing method includes operations 102 to 108:
  • Cameras may be installed on the electronic device internally or externally, and their positions and number are not limited. For example, one camera may be installed on the front of the phone and two cameras on the back.
  • the process of capturing an image is generally divided into two phases: a preview phase and a shooting phase.
  • the camera will capture images at a certain interval.
  • the captured images will not be stored, but will be displayed for users to view.
  • the user adjusts the shooting angle, light and other parameters according to the displayed image.
  • After a shooting instruction is received, the shooting phase is entered, and the electronic device stores the next frame image as the final captured image.
  • the electronic device can store the captured image and send it to the server or other electronic devices. It can be understood that the image to be processed obtained in this embodiment is not limited to being taken by the electronic device itself, but may also be sent by other electronic devices or downloaded through the network. After obtaining the images, the electronic device can process the images immediately or store the images in a folder in a unified manner. After the images stored in the folder reach a certain number, the stored images are processed in a unified manner. For example, the electronic device may store the acquired images in an album, and when the number of images stored in the album is greater than a certain number, trigger the processing of the images in the album.
  • Operation 104 Perform edge detection on the image to be processed, and determine edge pixel points included in the image to be processed.
  • Edge detection refers to the process of detecting points with obvious changes in brightness in the image. These points with obvious changes in brightness are edge pixels.
  • The edges in an image are generally divided into three types: stepped edges, roof-shaped edges, and linear edges. The image to be processed is composed of a number of pixels, and the brightness value changes sharply at edges. Therefore, a derivative operation can be performed on the pixels in the image: pixels with relatively large brightness changes are detected according to the derivative result, the edges in the image are located accordingly, and the edge pixel points are thereby determined.
  • For example, when photographing a landscape, an edge line is formed where beach and seawater meet; this boundary can be detected by edge detection, and the pixels on the edge line are determined as edge pixels.
  • The edge pixels can be detected based on, but not limited to, algorithms such as Canny, Roberts, Prewitt, and Sobel.
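As a minimal illustration of the edge-detection step (a sketch, not the patent's implementation), edge pixels can be found by thresholding a Sobel-style gradient magnitude; the kernels, the L1 magnitude, and the threshold value below are illustrative assumptions:

```python
# Minimal Sobel-style edge detector on a grayscale image stored as a
# list of lists. Kernels and threshold are illustrative assumptions,
# not values taken from the patent.
def detect_edge_pixels(img, threshold=100):
    """Return the set of (row, col) positions of interior pixels whose
    gradient magnitude exceeds `threshold`."""
    h, w = len(img), len(img[0])
    edges = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if abs(gx) + abs(gy) > threshold:  # L1 gradient magnitude
                edges.add((y, x))
    return edges

# A 4x4 image with a sharp vertical step edge between columns 1 and 2:
# every interior pixel straddles the step and is reported as an edge.
step = [[0, 0, 255, 255],
        [0, 0, 255, 255],
        [0, 0, 255, 255],
        [0, 0, 255, 255]]
```

In practice a library implementation of Canny, Roberts, Prewitt, or Sobel would be used; the point here is only that edge pixels fall out of a local derivative operation.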
  • Operation 106 Count the number of edge pixels included in the image to be processed as the first number of pixels.
  • edge pixel points included in the edges in the image to be processed can be detected.
  • The detected edge pixels can be marked, the edge pixels in the image to be processed can be found according to the marks, and the number of marked edge pixels is counted as the first pixel number.
  • the brightness value of the edge pixel is marked as a specific value, so that when counting the number of the first pixels, the pixels whose brightness value is the specific value can be directly counted.
  • Moiré refers to the phenomenon in which high-frequency interference occurs when a photosensitive element forms an image, causing irregular stripes to appear in the image. These irregular stripes are generally densely distributed, so the area occupied by moiré in the image is relatively large. For example, when photographing a mobile phone or computer screen, irregular stripes that are not on the screen itself appear in the image; such stripes are called moiré.
  • The edge lines of these irregular stripes can be found by edge detection, and the number of pixels on the edge lines is then counted. If the counted number exceeds a certain value, the edge stripes are densely distributed in the image, and these edge stripes can be considered moiré.
  • The foregoing first number threshold may be obtained as a proportion of the total number of pixels included in the image to be processed.
  • the first number threshold may be 50% of the total number of pixels included in the image to be processed.
  • the first number threshold can also be set to a fixed number threshold, and the to-be-processed images can be uniformly scaled to a fixed size before edge detection, and then the number of edge pixels is compared with the fixed number threshold.
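The threshold comparison described above can be sketched in a few lines; the 50% ratio mirrors the example above, and the function name is a hypothetical:

```python
def moire_by_first_threshold(first_pixel_number, total_pixels, ratio=0.5):
    """Decide whether moiré exists by comparing the edge-pixel count
    against a first number threshold taken as a ratio of the total
    pixel count (50% here, mirroring the example above)."""
    first_number_threshold = ratio * total_pixels
    return first_pixel_number > first_number_threshold
```

For a 100 × 100 image (10,000 pixels), 6,000 edge pixels would exceed the 5,000-pixel threshold and indicate moiré, while 5,000 would not.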
  • FIG. 2 is a schematic diagram showing a moiré pattern in an image in an embodiment.
  • the image 202 includes irregularly distributed stripes, and these irregular stripes are moiré 204.
  • the electronic device may first detect an edge in the image 202, and determine whether there is a moiré 204 according to the number of edge pixels counted.
  • The image processing method provided in the foregoing embodiment performs edge detection on an image to be processed and determines the edge pixel points it contains. The first pixel number of edge pixels is then counted, and whether moiré exists in the image to be processed is determined according to the first pixel number. In this way, no spatial transformation of the image is needed, moiré can be detected quickly, and the efficiency of image processing is improved.
  • FIG. 3 is a flowchart of an image processing method in another embodiment. As shown in FIG. 3, the image processing method includes operations 302 to 320:
  • The electronic device may batch-process the images to be processed. When there are a large number of images to be processed, the terminal may first upload them to the server, have the server perform the batch processing, and then receive the processing results. The terminal can also batch-process the images while in the standby state, so as not to occupy terminal resources and affect user use.
  • Operation 304 Perform edge detection on the image to be processed, and determine edge pixel points included in the image to be processed.
  • The electronic device can process the images to be processed in batches. The images to be processed may be uniformly compressed to a fixed size, and edge detection is then performed on each compressed image to determine the edge pixel points it contains. By counting the edge pixels included in the compressed image, it is determined whether the image to be processed contains moiré.
  • Operation 306 Extract the smallest rectangular region containing all edge pixels in the image to be processed, and count the number of pixels contained in the smallest rectangular region as the number of regional pixels.
  • Moiré may be generated by only part of the photographed object; that is, moiré may appear only in certain areas of the image rather than across the entire image. If the density of edges is evaluated over the entire image, some error may result. Therefore, when determining whether moiré exists, the density of edges can be evaluated only over the area where the edges appear, which yields higher accuracy.
  • the smallest rectangular area is the smallest rectangle that can contain all the edge pixels. After detecting the edge pixels, the minimum rectangular area is determined according to the position of the edge pixels. After the minimum rectangular area is determined, the number of pixels contained in the minimum rectangular area is counted as the number of area pixels. The larger the number of area pixels, the more pixels are contained in the smallest rectangular area.
  • Operation 308 Obtain a first number threshold according to the number of pixels in the area.
  • the presence or absence of moiré may be determined according to the number of edge pixels included in the smallest rectangular region.
  • The first number threshold can be determined as a proportion of the number of regional pixels; the number of edge pixels is then compared with the first number threshold to judge the density of the edge lines included in the smallest rectangular region and thereby determine whether moiré is present. For example, if the smallest rectangular region contains 10,000 pixels, 40% of the number of regional pixels can be used as the first number threshold, that is, 4,000 pixels.
  • Operation 310 Count the number of edge pixels included in the image to be processed as the first number of pixels.
  • In this case, the first pixel number counted is the number of edge pixels contained in the smallest rectangular region.
  • the electronic device can traverse the pixels in the image to be processed and count the first pixel number of the edge pixels, or it can traverse the pixels in the smallest rectangular area and count the first number of pixels of the edge pixels, which is not limited here.
  • Specifically, the edge pixels in the image to be processed may be marked, and a counter is incremented by one each time a marked edge pixel is encountered. When all pixels in the image to be processed have been counted, the counter stops, and its value is used as the first pixel number.
  • If the first pixel number is greater than the first number threshold, the edge lines are densely distributed; if the first pixel number is less than or equal to the first number threshold, the edge lines are relatively scattered. When the edge lines are densely distributed, it is considered that moiré may exist in the image to be processed.
  • The moiré patterns in an image generally follow a certain color-change rule, so when counting edge pixels, only the edge pixels of a specified color may be counted.
  • The edge pixels of the specified color are treated as the pixels lying on the moiré, and the number of edge pixels of the specified color is counted as the second pixel number.
  • Since the first number threshold is obtained according to the number of regional pixels, the second number threshold needs to be adjusted accordingly to ensure that it remains less than or equal to the first number threshold.
  • If the counted first pixel number is less than or equal to the first number threshold, the density of edge pixel points in the image to be processed is relatively low, and it can be determined that no moiré exists. If the first pixel number is greater than the first number threshold but the second pixel number is less than or equal to the second number threshold, the density of edge pixels of the specified color is still low, and it is likewise determined that no moiré exists in the image to be processed.
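Combining both counts, the decision rule described above can be sketched as follows (the function name is a hypothetical, and the thresholds are inputs rather than values fixed by the patent):

```python
def moire_decision(first_pixel_number, second_pixel_number,
                   first_threshold, second_threshold):
    """Report moiré only when the overall edge-pixel count and the
    count of specified-color edge pixels both exceed their thresholds."""
    if first_pixel_number <= first_threshold:
        return False  # edge pixels too sparse overall
    if second_pixel_number <= second_threshold:
        return False  # specified-color edge pixels too sparse
    return True
```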
  • Operation 318 Identify the image to be processed, and obtain an image classification label corresponding to the image to be processed.
  • the image classification label is used to mark the classification of the image to be processed.
  • the identification of the image to be processed specifically refers to identifying the shooting scene of the image to be processed, and the corresponding image classification label can be obtained by identifying the image to be processed.
  • the shooting scene of an image can be divided into beach, snow, night, blue sky, indoor and other scenes. Assuming that the shooting scene of the image is recognized as a snow scene, a corresponding image classification label "scene-snow scene" is generated.
  • If moiré exists in the image to be processed when it is recognized, the recognition result is inaccurate. Therefore, when moiré is detected in the image to be processed, the image may be left unrecognized, or recognized only after the moiré has been removed. When it is detected that no moiré exists in the image to be processed, the image is recognized to obtain an image classification label.
  • a foreground object and a background region in the image to be processed may be detected first.
  • the foreground target refers to the more prominent main target in the image, which is the object that the user is more concerned about.
  • the area in the image other than the foreground target is the background area. For example, in a captured image, a person can be seen as a foreground target, and a beach can be seen as a background area.
  • the detected foreground target is composed of some or all pixels in the image to be processed.
  • the number of pixels contained in the area where the foreground target is located can be counted, and the target area occupied by the foreground target can be calculated based on the counted number of pixels.
  • the target area may be directly expressed by the number of pixels included in the foreground target, or may be expressed by a ratio of the number of pixels included in the foreground target to the number of pixels included in the image to be processed. The larger the number of pixels contained in the foreground target, the larger the corresponding target area.
  • the electronic device obtains the target area of the foreground target after detecting the foreground target. If the target area is greater than the area threshold, the foreground target is considered too large and the corresponding background area is relatively small. When the background area is too small, the recognition of the background is not accurate. At this time, image classification can be performed according to the foreground target. For example, when the foreground object occupies more than 1/2 of the area of the image to be processed, an image classification label is generated according to the recognition result of the foreground object. When the area occupied by the foreground target is less than 1/2 of the image to be processed, an image classification label is generated according to the recognition result of the background region. Specifically, after the foreground target is detected, the target area of the foreground target can be obtained. When the target area is larger than the area threshold, the foreground target can be identified to obtain the image classification label; when the target area is less than or equal to the area threshold, the background area is identified to obtain the image classification label.
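The area-based choice between recognizing the foreground target and the background region can be sketched as follows (the 1/2 ratio follows the example above; the function name is a hypothetical):

```python
def choose_classification_target(foreground_pixels, total_pixels,
                                 area_ratio=0.5):
    """Pick the region used to generate the image classification label:
    the foreground target when it covers more than `area_ratio` of the
    image, otherwise the background region."""
    if foreground_pixels > area_ratio * total_pixels:
        return "foreground"
    return "background"
```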
  • the operation of counting the number of first pixels in the image to be processed may specifically include:
  • Operation 402 Binarize the image to be processed to obtain a binary image.
  • The binarized image includes first-color pixel points and second-color pixel points. The first-color pixel points correspond to the edge pixels in the image to be processed, and the second-color pixel points correspond to all other pixels in the image to be processed except the edge pixels.
  • the image to be processed may be binarized to obtain a binarized image.
  • the binarized image contains only two color pixels. All the edge pixel points in the image to be processed are converted into the first color, and all other pixel points except the edge pixel points are converted into the second color. For example, the gray values of the edge pixels are all set to 255, and the gray values of all pixels other than the edge pixels are set to 0, so that the edge pixels and other pixels can be distinguished by color.
  • Operation 404 Count the number of pixels of the first color in the binarized image as the number of first pixels.
  • The binarized image contains pixels of only two colors, so the edge pixels can be identified quickly by color. Specifically, the number of pixels of the first color in the binarized image is counted as the first pixel number. For example, if the gray values of the edge pixels are set to 255 and those of all other pixels to 0, the pixels whose gray value is 255 (that is, the white pixels) can be counted directly as the first pixel number.
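Operations 402 and 404 can be sketched in a few lines; the 255/0 gray values follow the example above, and the helper name is a hypothetical:

```python
def binarize_and_count(height, width, edge_pixels):
    """Build a binarized image in which edge pixels take gray value 255
    (the first color) and all other pixels take 0 (the second color),
    then count the first-color pixels as the first pixel number."""
    binary = [[255 if (y, x) in edge_pixels else 0
               for x in range(width)]
              for y in range(height)]
    first_pixel_number = sum(row.count(255) for row in binary)
    return binary, first_pixel_number

# A 3x3 image whose middle column was detected as edge pixels.
demo_binary, demo_count = binarize_and_count(3, 3, {(0, 1), (1, 1), (2, 1)})
```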
  • the operation of counting the number of second pixels may specifically include:
  • Cluster the edge pixels of the specified color; each class of edge pixels obtained after the clustering processing forms a continuous edge line.
  • The edge lines forming moiré are generally continuous. Therefore, after the edge pixels are determined, they can be clustered so that the edge pixels forming one continuous edge line fall into the same class, and the length of each continuous edge line can then be counted.
  • Operation 504 Use the edge line whose edge length exceeds the length threshold as the target edge line, and count the total number of edge pixel points included in the target edge line as the second pixel number.
  • Each class of edge pixels forms a continuous edge line, and the edge length of each edge line is then calculated. An edge line whose edge length exceeds the length threshold can be considered an edge line forming the moiré.
  • the length of the edge line can be represented by the number of edge pixels included in the edge line. The more the number of edge pixels included, the longer the length of the edge line.
  • The edge line whose edge length exceeds the length threshold is taken as a target edge line, that is, an edge line that may be considered to form the moiré pattern. The edge lengths of the target edge lines are then added, that is, the numbers of edge pixel points included in the target edge lines are summed, and the resulting total is used as the second pixel number. Whether moiré exists in the image to be processed is then determined according to the second pixel number.
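One simple way to realize the clustering and length-filtering steps above is connected-component grouping of the edge pixels; the 8-connectivity used here is an assumption, since the patent does not fix a particular clustering algorithm:

```python
def cluster_edge_lines(edge_pixels):
    """Group edge pixels into continuous edge lines using 8-connected
    components; returns a list of pixel-coordinate sets."""
    remaining = set(edge_pixels)
    lines = []
    while remaining:
        seed = remaining.pop()
        line, stack = {seed}, [seed]
        while stack:
            y, x = stack.pop()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    neighbor = (y + dy, x + dx)
                    if neighbor in remaining:
                        remaining.remove(neighbor)
                        line.add(neighbor)
                        stack.append(neighbor)
        lines.append(line)
    return lines

def second_pixel_number(edge_pixels, length_threshold):
    """Sum the pixel counts of the target edge lines, i.e. the lines
    whose length (pixel count) exceeds `length_threshold`."""
    return sum(len(line) for line in cluster_edge_lines(edge_pixels)
               if len(line) > length_threshold)
```

For example, four collinear pixels plus one isolated pixel form two clusters; with a length threshold of 2, only the four-pixel line counts toward the second pixel number.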
  • FIG. 6 is a schematic diagram showing an edge line in an embodiment.
  • the image 60 to be processed includes a plurality of edge pixels, and the plurality of pixels are subjected to cluster processing.
  • Each type of edge pixel point forms a different continuous edge line, which includes an edge line 602, an edge line 604, and an edge line 606, respectively.
  • The target edge line can be determined according to the edge length of each edge line. For example, assuming that the edge length of the edge line 602 exceeds the length threshold, while the edge lengths of the edge line 604 and the edge line 606 are less than the length threshold, the edge line 602 can be used as the target edge line.
  • the operation of determining the smallest rectangular area specifically includes:
  • Operation 702 Establish a coordinate system according to the image to be processed, and obtain pixel coordinates of each edge pixel in the coordinate system.
  • the image to be processed is a two-dimensional pixel matrix composed of a plurality of pixels.
  • A coordinate system can be established according to the image to be processed, so that each pixel in the image to be processed can be marked with a two-dimensional coordinate, and the specific position of a pixel can be located by this coordinate.
  • For example, the pixel point in the lower-left corner of the image to be processed can be used as the coordinate origin to establish a coordinate system; shifting one pixel point to the right increases the horizontal coordinate by one.
  • the position of each edge pixel can be represented by a two-dimensional pixel coordinate.
  • the pixel coordinates may include a horizontal pixel coordinate and a vertical pixel coordinate, which respectively represent a lateral displacement and a longitudinal displacement of the edge pixel point relative to the origin.
  • For example, the pixel coordinates of an edge pixel may be (20, 150), indicating that the pixel is offset 20 pixels horizontally and 150 pixels vertically from the origin.
  • Operation 704 Determine vertex coordinates according to the obtained pixel coordinates of each edge pixel, and determine a minimum rectangular area according to the vertex coordinates.
  • the vertex coordinates can be determined according to the obtained pixel coordinates, and then the smallest rectangular area can be determined according to the vertex coordinates.
  • the pixel coordinates include horizontal pixel coordinates and vertical pixel coordinates.
  • Specifically, the minimum and maximum values of the horizontal coordinate may be determined according to the obtained horizontal pixel coordinates of the edge pixel points, and the minimum and maximum values of the vertical coordinate may be determined according to the obtained vertical pixel coordinates.
  • The four vertex coordinates of the smallest rectangular region can be determined based on the obtained minimum horizontal coordinate, maximum horizontal coordinate, minimum vertical coordinate, and maximum vertical coordinate, and the smallest rectangular region is then determined based on these vertex coordinates.
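The vertex computation of operation 704 reduces to taking coordinate extrema. A sketch follows; the inclusive pixel counting is an assumption, and the resulting regional pixel number is what the proportional first number threshold (e.g. 40%) would be applied to:

```python
def minimum_rectangle(edge_pixels):
    """Return (x_min, y_min, x_max, y_max), the vertex extrema of the
    smallest rectangle containing all (x, y) edge-pixel coordinates."""
    xs = [x for x, _ in edge_pixels]
    ys = [y for _, y in edge_pixels]
    return min(xs), min(ys), max(xs), max(ys)

def region_pixel_number(rect):
    """Count the pixels inside the rectangle, bounds inclusive."""
    x_min, y_min, x_max, y_max = rect
    return (x_max - x_min + 1) * (y_max - y_min + 1)
```

For edge pixels at (2, 3), (10, 3), and (5, 8), the smallest rectangle spans x in [2, 10] and y in [3, 8], containing 9 × 6 = 54 pixels.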
  • FIG. 8 is a schematic diagram of the smallest rectangular area in one embodiment.
  • A coordinate system can be established according to the image 80 to be processed, with the lower-left pixel of the image as the origin O, rightward as the positive direction of the x-axis, and upward as the positive direction of the y-axis.
  • the coordinates of the vertex 802, the vertex 804, the vertex 806, and the vertex 808 can be determined by the coordinates of the edge pixels, and the range of the smallest rectangular area 810 is determined according to the coordinates of the vertex 802, the vertex 804, the vertex 806, and the vertex 808.
  • the number of pixels included in the minimum rectangular area is counted as the number of area pixels.
  • The first number threshold can be determined according to the number of area pixels, but if the smallest rectangular area is too small, it can be determined directly that no moiré exists in the image to be processed. Specifically, the area of the smallest rectangular region is obtained. If the area is greater than the area threshold, the number of pixels included in the smallest rectangular area is counted as the number of area pixels, and the first number threshold is then determined according to the number of area pixels. If the area of the smallest rectangular region is less than the area threshold, it is determined that no moiré exists in the image to be processed.
  • the area of the minimum rectangular area can be calculated from the vertex coordinates of its four vertices: the length and width of the minimum rectangular area are calculated from the four vertex coordinates, and the area is then calculated from the resulting length and width.
  • the area calculated in this way is equal to the counted number of area pixels, so the area can also be expressed directly by the number of area pixels. That is, the number of pixels included in the smallest rectangular area is counted as the number of area pixels.
  • when the number of area pixels is greater than the area number threshold, the first number threshold is determined according to the number of area pixels.
  • when the number of area pixels is less than or equal to the area number threshold, it is directly determined that there is no moiré in the image to be processed.
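The decision flow above can be sketched as follows. This is a hedged sketch: `area_threshold` and the 40% `ratio` are illustrative defaults (the description only gives 40% of the region pixels as an example first number threshold, and does not fix a specific area threshold):

```python
def detect_moire_by_region(region_pixel_count, edge_pixel_count,
                           area_threshold=10000, ratio=0.4):
    """Decide whether moiré is present from minimum-rectangle statistics.

    region_pixel_count: pixels inside the smallest rectangle (area pixels).
    edge_pixel_count: the first pixel number (edge pixels counted).
    """
    if region_pixel_count <= area_threshold:
        # The rectangle is too small: directly determine there is no moiré.
        return False
    # First number threshold derived as a proportion of the area pixels.
    first_number_threshold = region_pixel_count * ratio
    return edge_pixel_count > first_number_threshold
```

For example, with 20,000 area pixels the first number threshold would be 8,000 edge pixels under the 40% example ratio.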
  • the image processing method provided in the foregoing embodiment may perform edge detection on an image to be processed, and determine edge pixel points included in the image to be processed. Then determine the smallest rectangular area containing all edge pixels, and determine the first number threshold based on the number of pixels contained in the smallest rectangular area. Count the first number of edge pixels in the image to be processed, and determine whether there are moiré patterns in the image to be processed according to the comparison result between the first number of pixels and the first number threshold. In this way, there is no need to perform spatial conversion on the image to be processed, and the moiré in the image can be detected quickly, which improves the efficiency of image processing.
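The overall flow summarized above can be sketched end to end. The simple first-difference gradient operator here is only a stand-in for a real edge detector such as Canny or Sobel, and both thresholds are illustrative values, not values given in the patent:

```python
def detect_moire(image, edge_threshold=40, ratio=0.5):
    """Sketch of the detection flow: gradient-based edge detection,
    then a count-based decision against a threshold derived from the
    total number of pixels. image is a 2-D list of grayscale values.
    """
    h, w = len(image), len(image[0])
    edges = []
    for y in range(h):
        for x in range(w):
            # Forward differences; zero at the right/bottom border.
            gx = image[y][x + 1] - image[y][x] if x + 1 < w else 0
            gy = image[y + 1][x] - image[y][x] if y + 1 < h else 0
            if abs(gx) + abs(gy) > edge_threshold:
                edges.append((x, y))
    first_pixel_count = len(edges)
    # Dense edges across the image suggest moiré fringes.
    return first_pixel_count > ratio * h * w, first_pixel_count
```

A uniform image yields no edge pixels and therefore no moiré, while a high-frequency pattern (e.g. a checkerboard) produces an edge pixel at almost every position and trips the threshold.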
  • It should be understood that although the operations in the flowcharts of FIG. 1, FIG. 3, FIG. 4, FIG. 5, and FIG. 7 are displayed sequentially as indicated by the arrows, these operations are not necessarily performed in the order indicated by the arrows. Unless explicitly stated in this article, there is no strict order restriction on the execution of these operations, and they can be performed in other orders. Moreover, at least some of the operations in FIG. 1, FIG. 3, FIG. 4, FIG. 5, and FIG. 7 may include multiple sub-operations or multiple stages, which are not necessarily completed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential.
  • FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment.
  • the image processing apparatus 900 includes an image acquisition module 902, an edge detection module 904, a quantity statistics module 906, and a moiré determination module 908. among them:
  • the image acquisition module 902 is configured to acquire an image to be processed.
  • An edge detection module 904 is configured to perform edge detection on the image to be processed, and determine edge pixel points included in the image to be processed.
  • the number counting module 906 is configured to count the number of edge pixels included in the image to be processed as the first number of pixels.
  • Moiré determining module 908 is configured to determine that moiré exists in the image to be processed if the first number of pixels is greater than a first number threshold.
  • the image processing apparatus may perform edge detection on an image to be processed, and determine edge pixel points included in the image to be processed. Then, the first pixel number of the edge pixel points in the image to be processed is counted, and it is determined whether the moire exists in the image to be processed according to the first pixel number. This eliminates the need to perform spatial conversion on the image to be processed, can quickly detect moiré in the image, and improves the efficiency of image processing.
  • FIG. 10 is a schematic structural diagram of an image processing apparatus in another embodiment.
  • the image processing apparatus 1000 includes an image acquisition module 1002, an edge detection module 1004, a quantity statistics module 1006, a threshold value determination module 1008, and a moire determination module 1010. among them:
  • An image acquisition module 1002 is configured to acquire an image to be processed.
  • An edge detection module 1004 is configured to perform edge detection on the image to be processed, and determine edge pixel points included in the image to be processed.
  • the number counting module 1006 is configured to count the number of edge pixels included in the image to be processed as the first number of pixels.
  • a threshold value determination module 1008 is configured to extract the smallest rectangular region containing all edge pixels in the image to be processed, count the number of pixels contained in the smallest rectangular region as the number of regional pixels, and obtain the first number threshold according to the number of regional pixels.
  • Moiré determination module 1010 is configured to determine that moiré exists in the image to be processed if the first number of pixels is greater than a first number threshold.
  • the image processing apparatus may perform edge detection on an image to be processed, and determine edge pixel points included in the image to be processed. Then, the first pixel number of the edge pixel points in the image to be processed is counted, and it is determined whether the moire exists in the image to be processed according to the first pixel number. In this way, there is no need to perform spatial conversion on the image to be processed, and the moiré in the image can be detected quickly, which improves the efficiency of image processing.
  • the quantity statistics module 1006 is further configured to perform binarization on the image to be processed to obtain a binarized image, where the binarized image includes first color pixels and second color pixels, the first color pixels correspond to edge pixels in the image to be processed, and the second color pixels correspond to the other pixels in the image to be processed except the edge pixels; and to count the number of first color pixels in the binarized image as the first number of pixels.
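A minimal sketch of the binarize-and-count step this module performs. The 255/0 gray values follow the example given in the description; the function and argument names are our own illustrative choices:

```python
def binarize_and_count(image, edge_pixels, first_color=255, second_color=0):
    """Build the binarized image and count first-color pixels.

    Edge pixels become first_color, every other pixel second_color,
    so edge pixels can be identified purely by color afterwards.
    image is a 2-D list indexed as image[y][x]; edge_pixels is a list
    of (x, y) coordinates.
    """
    edge_set = set(edge_pixels)
    binary = [[first_color if (x, y) in edge_set else second_color
               for x in range(len(image[0]))]
              for y in range(len(image))]
    # The first pixel number: occurrences of the first color.
    first_pixel_count = sum(row.count(first_color) for row in binary)
    return binary, first_pixel_count
```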
  • the threshold value determination module 1008 is further configured to establish a coordinate system according to the image to be processed, and obtain pixel coordinates of each edge pixel point in the coordinate system; determine according to the obtained pixel coordinates of each edge pixel point. Vertex coordinates, and a minimum rectangular area is determined according to the vertex coordinates.
  • the moiré determining module 1010 is further configured to count the number of edge pixel points of a specified color as the second pixel number if the first pixel number is greater than the first number threshold, and to determine that moiré exists in the image to be processed if the second pixel number is greater than a second number threshold, where the second number threshold is less than or equal to the first number threshold.
  • the moiré determination module 1010 is further configured to perform cluster processing on edge pixels of a specified color, and each type of edge pixels obtained after the clustering processing forms a continuous edge line; An edge line with a length exceeding a length threshold is used as a target edge line, and the total number of edge pixel points included in the target edge line is counted as the second pixel number.
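One plausible reading of the clustering step is grouping by 8-connectivity, so that each connected component of specified-color edge pixels forms one continuous edge line; the patent does not name a specific clustering algorithm, so this sketch is an assumption:

```python
from collections import deque

def count_target_edge_pixels(edge_pixels, length_threshold=3):
    """Cluster edge pixels into 8-connected components (one per
    continuous edge line), keep the lines whose length (pixel count)
    exceeds length_threshold as target edge lines, and return the
    total pixel count of those lines (the second pixel number).
    """
    remaining = set(edge_pixels)
    second_pixel_count = 0
    while remaining:
        # Grow one cluster from an arbitrary seed by breadth-first search.
        seed = next(iter(remaining))
        remaining.remove(seed)
        line, queue = [seed], deque([seed])
        while queue:
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in remaining:
                        remaining.remove(n)
                        line.append(n)
                        queue.append(n)
        if len(line) > length_threshold:  # edge length as pixel count
            second_pixel_count += len(line)
    return second_pixel_count
```

A short isolated cluster (like edge lines 604 and 606 in Fig. 6) is discarded, while a long one (like edge line 602) contributes all of its pixels to the second pixel number.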
  • the moiré determination module 1010 is further configured to determine that moiré does not exist in the image to be processed if the first number of pixels is less than or equal to the first number threshold, and to recognize the image to be processed to obtain an image classification label corresponding to the image to be processed, where the image classification label is used to mark the classification of the image to be processed.
  • each module in the above image processing apparatus is for illustration only. In other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the above image processing apparatus.
  • An embodiment of the present application further provides a computer-readable storage medium.
  • One or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the image processing method provided by the foregoing embodiments.
  • An embodiment of the present application further provides an electronic device.
  • the above electronic device includes an image processing circuit.
  • the image processing circuit may be implemented by hardware and / or software components, and may include various processing units that define an ISP (Image Signal Processing) pipeline.
  • FIG. 11 is a schematic diagram of an image processing circuit in an embodiment. As shown in FIG. 11, for ease of description, only aspects of the image processing technology related to the embodiments of the present application are shown.
  • the image processing circuit includes an ISP processor 1140 and a control logic 1150.
  • the image data captured by the imaging device 1110 is first processed by the ISP processor 1140.
  • the ISP processor 1140 analyzes the image data to capture image statistical information that can be used to determine one or more control parameters of the imaging device 1110.
  • the imaging device 1110 may include a camera having one or more lenses 1112 and an image sensor 1114.
  • the image sensor 1114 may include a color filter array (such as a Bayer filter).
  • the image sensor 1114 may obtain the light intensity and wavelength information captured by each imaging pixel of the image sensor 1114, and provide a set of raw image data that can be processed by the ISP processor 1140.
  • the sensor 1120 (such as a gyroscope) may provide acquired image-processing parameters (such as image stabilization parameters) to the ISP processor 1140 based on the interface type of the sensor 1120.
  • the sensor 1120 interface may use a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the foregoing interfaces.
  • the image sensor 1114 may also send the original image data to the sensor 1120.
  • the sensor 1120 may provide the original image data to the ISP processor 1140 based on the interface type of the sensor 1120, or the sensor 1120 stores the original image data in the image memory 1130.
  • the ISP processor 1140 processes the original image data pixel by pixel in a variety of formats.
  • each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 1140 may perform one or more image processing operations on the original image data and collect statistical information about the image data.
  • the image processing operations may be performed with the same or different bit depth accuracy.
  • the ISP processor 1140 may also receive image data from the image memory 1130.
  • the sensor 1120 interface sends the original image data to the image memory 1130, and the original image data in the image memory 1130 is then provided to the ISP processor 1140 for processing.
  • the image memory 1130 may be a part of a memory device, a storage device, or a separate dedicated memory in an electronic device, and may include a DMA (Direct Memory Access) feature.
  • the ISP processor 1140 may perform one or more image processing operations, such as time-domain filtering.
  • the processed image data may be sent to the image memory 1130 for further processing before being displayed.
  • the ISP processor 1140 receives processed data from the image memory 1130, and performs image data processing on the processed data in the original domain and in the RGB and YCbCr color spaces.
  • the image data processed by the ISP processor 1140 may be output to a display 1170 for viewing by a user and / or further processed by a graphics engine or a GPU (Graphics Processing Unit).
  • the output of the ISP processor 1140 can also be sent to the image memory 1130, and the display 1170 can read image data from the image memory 1130.
  • the image memory 1130 may be configured to implement one or more frame buffers.
  • the output of the ISP processor 1140 may be sent to an encoder / decoder 1160 to encode / decode image data.
  • the encoded image data can be saved and decompressed before being displayed on the display 1170.
  • the encoder / decoder 1160 may be implemented by a CPU or a GPU or a coprocessor.
  • the statistical data determined by the ISP processor 1140 may be sent to the control logic 1150 unit.
  • the statistical data may include statistical information of the image sensor 1114 such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 1112 shading correction.
  • the control logic 1150 may include a processor and / or a microcontroller that executes one or more routines (such as firmware). The one or more routines may determine the control parameters of the imaging device 1110 and the ISP processing based on the received statistical data. 1140 control parameters.
  • control parameters of the imaging device 1110 may include sensor 1120 control parameters (such as gain, integration time for exposure control, and image stabilization parameters), camera flash control parameters, lens 1112 control parameters (such as focal length for focusing or zooming), or a combination of these parameters.
  • ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (eg, during RGB processing), and lens 1112 shading correction parameters.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which is used as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

An image processing method includes: acquiring an image to be processed; performing edge detection on the image to be processed to determine the edge pixels contained in the image to be processed; counting the number of edge pixels contained in the image to be processed as a first pixel number; and if the first pixel number is greater than a first number threshold, determining that moiré is present in the image to be processed.

Description

Image processing method, computer-readable storage medium, and electronic device
Cross-reference to related applications
This application claims priority to the Chinese patent application No. 201810590031.5, filed with the Chinese Patent Office on June 8, 2018 and entitled "Image processing method and apparatus, computer-readable storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of computer technology, and in particular to an image processing method, a computer-readable storage medium, and an electronic device.
Background
When a smart device captures an image, the photosensitive element of the camera has a spatial frequency. If the photographed object has a similar spatial distribution pattern, the spatial distributions of the photosensitive element and the photographed object become close enough to produce moiré that interferes with the image. Detecting moiré in an image usually requires transforming the image into another space, which makes detection relatively inefficient.
Summary
According to various embodiments of the present application, an image processing method, a computer-readable storage medium, and an electronic device are provided.
An image processing method, the method including:
acquiring an image to be processed;
performing edge detection on the image to be processed to determine the edge pixels contained in the image to be processed;
counting the number of edge pixels contained in the image to be processed as a first pixel number; and
if the first pixel number is greater than a first number threshold, determining that moiré is present in the image to be processed.
An image processing apparatus, the apparatus including:
an image acquisition module configured to acquire an image to be processed;
an edge detection module configured to perform edge detection on the image to be processed to determine the edge pixels contained in the image to be processed;
a quantity statistics module configured to count the number of edge pixels contained in the image to be processed as a first pixel number; and
a moiré determination module configured to determine that moiré is present in the image to be processed if the first pixel number is greater than a first number threshold.
A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor, the following operations are implemented:
acquiring an image to be processed;
performing edge detection on the image to be processed to determine the edge pixels contained in the image to be processed;
counting the number of edge pixels contained in the image to be processed as a first pixel number; and
if the first pixel number is greater than a first number threshold, determining that moiré is present in the image to be processed.
An electronic device including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following operations:
acquiring an image to be processed;
performing edge detection on the image to be processed to determine the edge pixels contained in the image to be processed;
counting the number of edge pixels contained in the image to be processed as a first pixel number; and
if the first pixel number is greater than a first number threshold, determining that moiré is present in the image to be processed.
With the above image processing method, computer-readable storage medium, and electronic device, edge detection may be performed on an image to be processed to determine the edge pixels it contains. The first pixel number of edge pixels in the image to be processed is then counted, and whether moiré is present in the image is determined according to the first pixel number. In this way, no spatial transformation of the image is needed, moiré in the image can be detected quickly, and image processing efficiency is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an image processing method in one embodiment.
FIG. 2 is a schematic diagram of moiré in an image in one embodiment.
FIG. 3 is a flowchart of an image processing method in another embodiment.
FIG. 4 is a flowchart of an image processing method in yet another embodiment.
FIG. 5 is a flowchart of an image processing method in yet another embodiment.
FIG. 6 is a schematic diagram of edge lines in one embodiment.
FIG. 7 is a flowchart of an image processing method in yet another embodiment.
FIG. 8 is a schematic diagram of the smallest rectangular area in one embodiment.
FIG. 9 is a schematic structural diagram of an image processing apparatus in one embodiment.
FIG. 10 is a schematic structural diagram of an image processing apparatus in another embodiment.
FIG. 11 is a schematic diagram of an image processing circuit in one embodiment.
Detailed description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of this application, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client. Both the first client and the second client are clients, but they are not the same client.
FIG. 1 is a flowchart of an image processing method in one embodiment. As shown in FIG. 1, the image processing method includes operations 102 to 108, in which:
In operation 102, an image to be processed is acquired.
In one embodiment, a camera may be built into or externally attached to the electronic device; the position and number of cameras are not limited. For example, one camera may be installed on the front of a mobile phone and two cameras on the back. Capturing an image generally involves two stages: a preview stage and a capture stage. In the preview stage, the camera collects an image at fixed intervals; the collected images are not stored but are displayed for the user to view, and the user adjusts shooting parameters such as the angle and lighting according to the displayed images. When a capture instruction input by the user is detected, the capture stage begins, and the electronic device stores the next image frame received after the capture instruction as the final captured image.
The electronic device may store the captured image and send it to a server or another electronic device. It can be understood that the image to be processed acquired in this embodiment is not limited to one captured by the electronic device itself; it may also be sent by another electronic device or downloaded over a network. After acquiring images, the electronic device may process them immediately, or may collect them in a folder and process the stored images together once their number reaches a certain amount. For example, the electronic device may store acquired images in an album, and trigger processing of the images in the album when the number of stored images exceeds a certain amount.
In operation 104, edge detection is performed on the image to be processed to determine the edge pixels it contains.
Edge detection refers to the process of detecting points in an image where the brightness changes markedly; these points are the edge pixels. Edges in an image generally fall into three types: step edges, roof edges, and linear edges. It can be understood that the image to be processed is composed of a number of pixels, and the brightness changes sharply at edges, so a derivative operation may be applied to the pixels in the image. The derivative operation detects the pixels whose brightness changes sharply, so the edges in the image can be detected and the edge pixels determined from the result of the derivative operation.
For example, when photographing a landscape, an edge line forms where the beach meets the sea. Edge detection can detect this boundary line and determine the pixels on it as edge pixels. Specifically, edge pixels may be detected by, but are not limited to, algorithms such as Canny, Roberts, Prewitt, and Sobel.
In operation 106, the number of edge pixels contained in the image to be processed is counted as the first pixel number.
Through edge detection, the edge pixels contained in the edges of the image to be processed can be detected. Detected edge pixels can be marked; the edge pixels in the image can then be found according to the marks, and the number of marked edge pixels counted as the first pixel number. For example, the brightness value of each edge pixel may be set to a specific value, so that when counting the first pixel number, the pixels whose brightness equals that specific value can be counted directly.
In operation 108, if the first pixel number is greater than a first number threshold, it is determined that moiré is present in the image to be processed.
Moiré refers to the irregular fringes produced in an image by high-frequency interference at the photosensitive element during imaging. These irregular fringes are generally distributed densely in the image, so moiré also occupies a relatively large area of the image. For example, when photographing a mobile phone or computer screen, some irregular fringes that are not actually on the screen appear in the image; such fringes are called moiré. In this embodiment, the edge lines of these irregular fringes can be detected through edge detection, and the number of pixels on the edge lines is then counted. If the counted number exceeds a certain value, the edge fringes are densely distributed in the image, and they can be regarded as moiré produced in the image.
Specifically, since the acquired images to be processed may differ in resolution, the first number threshold may be obtained as a proportion of the total number of pixels contained in the image to be processed. For example, the first number threshold may be 50% of the total number of pixels in the image. The first number threshold may also be set to a fixed value: the images to be processed may be uniformly scaled to a fixed size before edge detection, and the number of edge pixels then compared with the fixed threshold.
FIG. 2 is a schematic diagram of moiré in an image in one embodiment. As shown in FIG. 2, the image 202 contains irregularly distributed fringes, which are the moiré 204. The electronic device may first detect the edges in the image 202 and judge whether the moiré 204 is present according to the counted number of edge pixels.
With the image processing method provided by the above embodiment, edge detection may be performed on an image to be processed to determine the edge pixels it contains. The first pixel number of edge pixels is then counted, and whether moiré is present is determined according to the first pixel number. In this way, no spatial transformation of the image is needed, moiré in the image can be detected quickly, and image processing efficiency is improved.
FIG. 3 is a flowchart of an image processing method in another embodiment. As shown in FIG. 3, the image processing method includes operations 302 to 320, in which:
In operation 302, an image to be processed is acquired.
In one embodiment, the electronic device may process images in batches. When there are many images to process and the terminal's processing capability is limited, the terminal may first upload the images to a server, have the server process them in batches, and then receive the results. The terminal may also batch-process images while in standby, so as not to occupy terminal resources and affect the user.
In operation 304, edge detection is performed on the image to be processed to determine the edge pixels it contains.
The electronic device may process images to be processed in batches. To increase the processing speed when handling batches, the images may be uniformly compressed to a fixed size, edge detection may be performed on the compressed images to determine their edge pixels, and whether moiré is present may then be judged by counting the edge pixels in the compressed images.
In operation 306, the smallest rectangular area containing all edge pixels in the image to be processed is extracted, and the number of pixels contained in the smallest rectangular area is counted as the number of area pixels.
It can be understood that when an image is captured, the moiré may be produced by a certain part of the photographed object; that is, moiré may appear only in some regions of the image rather than throughout it. If the density of edges is evaluated over the whole image, a certain error may result. Therefore, when determining whether moiré is present, the edge density may be evaluated only over the region where edges appear, which gives higher accuracy.
The smallest rectangular area is the smallest rectangle that can contain all edge pixels. After the edge pixels are detected, the smallest rectangular area is determined from their positions. Once determined, the number of pixels it contains is counted as the number of area pixels; the larger the number of area pixels, the more pixels the smallest rectangular area contains.
In operation 308, the first number threshold is obtained according to the number of area pixels.
In one embodiment, whether moiré is present may be determined from the number of edge pixels contained in the smallest rectangular area. Specifically, the first number threshold may be determined as a proportion of the number of area pixels; comparing the number of edge pixels with the first number threshold then gives the density of edge lines within the smallest rectangular area, from which the presence of moiré is determined. For example, if the smallest rectangular area contains 10,000 pixels, 40% of the number of area pixels may be taken as the first number threshold, i.e. the first number threshold is 4,000 pixels.
In operation 310, the number of edge pixels contained in the image to be processed is counted as the first pixel number.
The first pixel number of edge pixels contained in the image to be processed is counted. Since the smallest rectangular area is the smallest rectangle containing all edge pixels, the counted first pixel number is also the number of edge pixels contained in the smallest rectangular area. The electronic device may traverse the pixels of the whole image to count the first pixel number, or traverse only the pixels of the smallest rectangular area; this is not limited here.
Specifically, after the edge pixels in the image are detected, they may be marked. While traversing the pixels of the image, a counter is incremented each time a pixel is judged to be an edge pixel; after all pixels in the image have been counted, the counter stops, and its value is taken as the first pixel number.
In operation 312, if the first pixel number is greater than the first number threshold, the number of edge pixels of a specified color is counted as the second pixel number.
The counted first pixel number is compared with the first number threshold. If the first pixel number is greater than the first number threshold, the edge lines are densely distributed; if it is less than or equal to the first number threshold, the edge lines are sparsely distributed. When the edge lines are densely distributed, moiré may be present in the image to be processed.
It is easy to understand that moiré produced in an image generally follows certain color-variation patterns, so when counting edge pixels, only edge pixels of a specified color may be counted. The edge pixels of the specified color are taken as the pixels arranged on the moiré, and their number is counted as the second pixel number.
In operation 314, if the second pixel number is greater than a second number threshold, it is determined that moiré is present in the image to be processed, where the second number threshold is less than or equal to the first number threshold.
If the counted second pixel number is greater than the second number threshold, it is determined that moiré is present in the image to be processed. In that case, the moiré in the image may be removed so that the image is restored. In this embodiment, since the first number threshold is obtained from the number of area pixels, the second number threshold needs to be adjusted accordingly according to the first number threshold so that it remains less than or equal to the first number threshold.
In operation 316, if the first pixel number is less than or equal to the first number threshold, it is determined that no moiré is present in the image to be processed.
When the counted first pixel number is less than or equal to the first number threshold, the density of edge pixels in the image is relatively low, and it can be determined that no moiré is present. If the counted first pixel number is greater than the first number threshold but the second pixel number is less than or equal to the second number threshold, the density of edge pixels is likewise relatively low, and it is determined that no moiré is present in the image.
In operation 318, the image to be processed is recognized to obtain an image classification label corresponding to it; the image classification label is used to mark the classification of the image to be processed.
Recognizing the image to be processed specifically means recognizing its shooting scene, and recognition yields the corresponding image classification label. For example, shooting scenes may be classified into beach, snow, night, blue sky, indoor, and other scenes. If the shooting scene of an image is recognized as a snow scene, the corresponding image classification label "scene-snow" is generated.
Generally, if moiré is present in the image to be processed, recognition of the image is inaccurate. Therefore, when moiré is detected in the image, recognition may be skipped, or performed after the moiré has been removed. When no moiré is detected, the image is recognized to obtain the image classification label.
In one embodiment, when recognizing the image to be processed, the foreground target and the background region in the image may be detected first. The foreground target is the prominent subject in the image, the object the user pays most attention to; the region other than the foreground target is the background region. For example, in a captured image, a person may be regarded as the foreground target and the beach as the background region.
The detected foreground target is composed of some or all of the pixels in the image to be processed. The number of pixels in the region of the foreground target may be counted, and the target area occupied by the foreground target calculated from the counted number of pixels. Specifically, the target area may be expressed directly by the number of pixels contained in the foreground target, or by the ratio of that number to the number of pixels in the whole image; the more pixels the foreground target contains, the larger the corresponding target area.
After detecting the foreground target, the electronic device obtains its target area. If the target area is greater than an area threshold, the foreground target is considered too large and the background region correspondingly small. When the background region is too small, recognition of the background is inaccurate, and image classification may then be based on the foreground target. For example, when the foreground target occupies more than half the area of the image, the image classification label is generated from the recognition result of the foreground target; when the area occupied by the foreground target is less than half of the image, the label is generated from the recognition result of the background region. Specifically, after the foreground target is detected, its target area is obtained; when the target area is greater than the area threshold, the foreground target is recognized to obtain the image classification label, and when the target area is less than or equal to the area threshold, the background region is recognized to obtain the image classification label.
In one embodiment, the operation of counting the first pixel number in the image to be processed may specifically include:
Operation 402: binarizing the image to be processed to obtain a binarized image, where the binarized image contains first color pixels and second color pixels; the first color pixels correspond to the edge pixels in the image to be processed, and the second color pixels correspond to the pixels other than the edge pixels.
After the edges in the image are detected, the image may be binarized to obtain a binarized image containing pixels of only two colors: all edge pixels are converted to the first color, and all other pixels to the second color. For example, the gray value of every edge pixel may be set to 255 and that of every other pixel to 0, so that edge pixels and other pixels can be distinguished by color.
Operation 404: counting the number of first color pixels in the binarized image as the first pixel number.
After the image to be processed is converted to a binarized image containing pixels of only two colors, edge pixels can be identified quickly by color. Specifically, the number of pixels of the first color in the binarized image may be counted as the first pixel number. For example, if the gray value of edge pixels is set to 255 and that of other pixels to 0, the number of pixels with gray value 255, i.e. white pixels, can be counted directly as the first pixel number.
In the embodiment provided in this application, the operation of counting the second pixel number may specifically include:
Operation 502: clustering the edge pixels of the specified color, each class of edge pixels obtained by the clustering forming a continuous edge line.
The edge lines forming moiré are generally continuous, so after the edge pixels are determined, they may be clustered so that each class of edge pixels obtained by the clustering forms a continuous edge line. In other words, edge pixels that can form one continuous edge line are grouped into one class, so the length of each continuous edge line can be counted. Specifically, the edge pixels of the specified color may be clustered, and each class of clustered edge pixels forms a continuous edge line.
Operation 504: taking the edge lines whose edge length exceeds a length threshold as target edge lines, and counting the total number of edge pixels contained in the target edge lines as the second pixel number.
After the edge pixels are clustered, each class of edge pixels forms a continuous edge line. The edge length of each line is then calculated; edge lines whose length exceeds the length threshold can be regarded as the lines forming the moiré. The length of an edge line may be expressed by the number of edge pixels it contains; the more edge pixels it contains, the longer the edge line.
The edge lines whose edge length exceeds the length threshold are taken as target edge lines, i.e. lines that may form moiré. The edge lengths of the target edge lines are then summed, that is, the numbers of edge pixels contained in the target edge lines are added up, and the resulting total number of edge pixels is taken as the second pixel number. Whether moiré is present in the image is then judged from the second pixel number.
FIG. 6 is a schematic diagram of edge lines in one embodiment. As shown in FIG. 6, the image to be processed 60 contains a number of edge pixels, and these pixels are clustered. Each class of edge pixels forms a different continuous edge line, namely edge line 602, edge line 604, and edge line 606, and the target edge lines are determined from the edge length of each line. For example, if the edge length of edge line 602 exceeds the length threshold while the edge lengths of edge lines 604 and 606 are below it, edge line 602 may be taken as the target edge line.
In one embodiment, the operation of determining the smallest rectangular area specifically includes:
Operation 702: establishing a coordinate system according to the image to be processed, and obtaining the pixel coordinates of each edge pixel in the coordinate system.
Specifically, the image to be processed is a two-dimensional pixel matrix composed of a number of pixels. A coordinate system can be established from the image, so that every pixel in the image can be marked with a two-dimensional coordinate locating its position, and the specific position of a pixel can be found through this two-dimensional coordinate. For example, the bottom-left pixel of the image may be taken as the coordinate origin to establish the coordinate system; moving one pixel to the right increases the horizontal coordinate by one, and moving one pixel up increases the vertical coordinate by one.
After the coordinate system is established, the position of each edge pixel can be expressed by a two-dimensional pixel coordinate. The pixel coordinate may include a horizontal pixel coordinate and a vertical pixel coordinate, representing the horizontal and vertical displacement of the edge pixel relative to the origin. For example, a pixel coordinate of (20, 150) means the edge pixel is displaced 20 pixels in the positive horizontal direction and 150 pixels in the positive vertical direction from the origin.
Operation 704: determining vertex coordinates from the obtained pixel coordinates of the edge pixels, and determining the smallest rectangular area from the vertex coordinates.
After the pixel coordinates of the edge pixels are obtained, the vertex coordinates can be determined from them, and the smallest rectangular area determined from the vertex coordinates. Specifically, a pixel coordinate includes a horizontal and a vertical pixel coordinate; the minimum and maximum horizontal coordinates may be determined from the horizontal pixel coordinates of the edge pixels, and the minimum and maximum vertical coordinates from the vertical pixel coordinates. The four vertex coordinates of the smallest rectangular area can be determined from the minimum and maximum horizontal coordinates together with the minimum and maximum vertical coordinates, and the smallest rectangular area is then determined from the vertex coordinates.
FIG. 8 is a schematic diagram of the smallest rectangular area in one embodiment. As shown in FIG. 8, a coordinate system can be established from the image to be processed 80, with the bottom-left corner of the image as the origin O, rightward as the positive x-axis direction, and upward as the positive y-axis direction. The coordinates of vertex 802, vertex 804, vertex 806, and vertex 808 can be determined from the coordinates of the edge pixels, and the range of the smallest rectangular area 810 determined from the coordinates of vertex 802, vertex 804, vertex 806, and vertex 808.
In one embodiment, after the smallest rectangular area is determined, the number of pixels it contains is counted as the number of area pixels. The first number threshold can be determined from the number of area pixels, but if the area of the smallest rectangular area is too small, it can instead be determined that no moiré is present in the image. Specifically, the area of the smallest rectangular area may be obtained; if it is greater than an area threshold, the number of pixels in the smallest rectangular area is counted as the number of area pixels, and the first number threshold is then determined from the number of area pixels. If the area of the smallest rectangular area is smaller than the area threshold, it is determined directly that no moiré is present in the image.
The area of the smallest rectangular area may be calculated from the coordinates of its four vertices: the length and width of the rectangle are calculated from the four vertex coordinates, and the area is calculated from the resulting length and width. The area calculated in this way equals the counted number of area pixels, so the area may also be expressed directly by the number of area pixels. That is, the number of pixels contained in the smallest rectangular area is counted as the number of area pixels; when the number of area pixels is greater than an area number threshold, the first number threshold is determined from the number of area pixels; when the number of area pixels is less than or equal to the area number threshold, it is determined directly that no moiré is present in the image.
With the image processing method provided by the above embodiment, edge detection may be performed on an image to be processed to determine the edge pixels it contains. The smallest rectangular area containing all edge pixels is then determined, and the first number threshold is determined from the number of pixels contained in the smallest rectangular area. The first pixel number of edge pixels in the image is counted, and whether moiré is present is determined from the comparison between the first pixel number and the first number threshold. In this way, no spatial transformation of the image is needed, moiré in the image can be detected quickly, and image processing efficiency is improved.
It should be understood that although the operations in the flowcharts of FIG. 1, FIG. 3, FIG. 4, FIG. 5, and FIG. 7 are displayed sequentially as indicated by the arrows, these operations are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these operations, and they may be executed in other orders. Moreover, at least some of the operations in FIG. 1, FIG. 3, FIG. 4, FIG. 5, and FIG. 7 may include multiple sub-operations or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other operations or with at least some of the sub-operations or stages of other operations.
FIG. 9 is a schematic structural diagram of an image processing apparatus in one embodiment. As shown in FIG. 9, the image processing apparatus 900 includes an image acquisition module 902, an edge detection module 904, a quantity statistics module 906, and a moiré determination module 908, in which:
the image acquisition module 902 is configured to acquire an image to be processed;
the edge detection module 904 is configured to perform edge detection on the image to be processed to determine the edge pixels contained in the image to be processed;
the quantity statistics module 906 is configured to count the number of edge pixels contained in the image to be processed as the first pixel number; and
the moiré determination module 908 is configured to determine that moiré is present in the image to be processed if the first pixel number is greater than the first number threshold.
The image processing apparatus provided by the above embodiment may perform edge detection on an image to be processed to determine the edge pixels it contains. The first pixel number of edge pixels is then counted, and whether moiré is present is determined according to the first pixel number. In this way, no spatial transformation of the image is needed, moiré in the image can be detected quickly, and image processing efficiency is improved.
FIG. 10 is a schematic structural diagram of an image processing apparatus in another embodiment. As shown in FIG. 10, the image processing apparatus 1000 includes an image acquisition module 1002, an edge detection module 1004, a quantity statistics module 1006, a threshold determination module 1008, and a moiré determination module 1010, in which:
the image acquisition module 1002 is configured to acquire an image to be processed;
the edge detection module 1004 is configured to perform edge detection on the image to be processed to determine the edge pixels contained in the image to be processed;
the quantity statistics module 1006 is configured to count the number of edge pixels contained in the image to be processed as the first pixel number;
the threshold determination module 1008 is configured to extract the smallest rectangular area containing all edge pixels in the image to be processed, count the number of pixels contained in the smallest rectangular area as the number of area pixels, and obtain the first number threshold according to the number of area pixels; and
the moiré determination module 1010 is configured to determine that moiré is present in the image to be processed if the first pixel number is greater than the first number threshold.
The image processing apparatus provided by the above embodiment may perform edge detection on an image to be processed to determine the edge pixels it contains. The first pixel number of edge pixels is then counted, and whether moiré is present is determined according to the first pixel number. In this way, no spatial transformation of the image is needed, moiré in the image can be detected quickly, and image processing efficiency is improved.
In one embodiment, the quantity statistics module 1006 is further configured to binarize the image to be processed to obtain a binarized image, where the binarized image contains first color pixels and second color pixels, the first color pixels correspond to the edge pixels in the image to be processed, and the second color pixels correspond to the pixels other than the edge pixels; and to count the first pixel number of first color pixels in the binarized image.
In one embodiment, the threshold determination module 1008 is further configured to establish a coordinate system according to the image to be processed and obtain the pixel coordinates of each edge pixel in the coordinate system; and to determine vertex coordinates from the obtained pixel coordinates of the edge pixels and determine the smallest rectangular area from the vertex coordinates.
In one embodiment, the moiré determination module 1010 is further configured to count the number of edge pixels of a specified color as the second pixel number if the first pixel number is greater than the first number threshold, and to determine that moiré is present in the image to be processed if the second pixel number is greater than a second number threshold, where the second number threshold is less than or equal to the first number threshold.
In one embodiment, the moiré determination module 1010 is further configured to cluster the edge pixels of the specified color, each class of edge pixels obtained by the clustering forming a continuous edge line; and to take the edge lines whose edge length exceeds a length threshold as target edge lines and count the total number of edge pixels contained in the target edge lines as the second pixel number.
In one embodiment, the moiré determination module 1010 is further configured to determine that no moiré is present in the image to be processed if the first pixel number is less than or equal to the first number threshold; and to recognize the image to be processed to obtain an image classification label corresponding to it, the image classification label being used to mark the classification of the image to be processed.
The division of the modules in the above image processing apparatus is only for illustration; in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the above image processing apparatus.
An embodiment of the present application further provides a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the image processing method provided by the above embodiments.
A computer program product containing instructions which, when run on a computer, causes the computer to perform the image processing method provided by the above embodiments.
An embodiment of the present application further provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 11 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 11, for ease of description, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in FIG. 11, the image processing circuit includes an ISP processor 1140 and control logic 1150. Image data captured by the imaging device 1110 is first processed by the ISP processor 1140, which analyzes the image data to capture image statistical information that can be used to determine one or more control parameters of the imaging device 1110. The imaging device 1110 may include a camera having one or more lenses 1112 and an image sensor 1114. The image sensor 1114 may include a color filter array (such as a Bayer filter); it may obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 1140. The sensor 1120 (such as a gyroscope) may provide acquired image-processing parameters (such as image stabilization parameters) to the ISP processor 1140 based on the interface type of the sensor 1120. The sensor 1120 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of such interfaces.
In addition, the image sensor 1114 may also send the raw image data to the sensor 1120; the sensor 1120 may then provide the raw image data to the ISP processor 1140 based on the interface type of the sensor 1120, or store the raw image data in the image memory 1130.
The ISP processor 1140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 1140 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed at the same or different bit-depth precision.
The ISP processor 1140 may also receive image data from the image memory 1130. For example, the sensor 1120 interface sends the raw image data to the image memory 1130, and the raw image data in the image memory 1130 is then provided to the ISP processor 1140 for processing. The image memory 1130 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving raw image data from the image sensor 1114 interface, the sensor 1120 interface, or the image memory 1130, the ISP processor 1140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1130 for further processing before being displayed. The ISP processor 1140 receives processed data from the image memory 1130 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 1140 may be output to a display 1170 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1140 may also be sent to the image memory 1130, and the display 1170 may read image data from the image memory 1130. In one embodiment, the image memory 1130 may be configured to implement one or more frame buffers. The output of the ISP processor 1140 may also be sent to an encoder/decoder 1160 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 1170. The encoder/decoder 1160 may be implemented by a CPU, a GPU, or a coprocessor.
The statistical data determined by the ISP processor 1140 may be sent to the control logic 1150 unit. For example, the statistical data may include image sensor 1114 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 1112 shading correction. The control logic 1150 may include a processor and/or microcontroller executing one or more routines (such as firmware), which may determine the control parameters of the imaging device 1110 and of the ISP processor 1140 according to the received statistical data. For example, the control parameters of the imaging device 1110 may include sensor 1120 control parameters (such as gain, integration time for exposure control, and image stabilization parameters), camera flash control parameters, lens 1112 control parameters (such as focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 1112 shading correction parameters.
The image processing method provided by the above embodiments may be implemented using the image processing technology of FIG. 11.
Any reference to memory, storage, a database, or another medium used in this application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments only express several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the patent scope of the present application. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (20)

  1. An image processing method, the method comprising:
    acquiring an image to be processed;
    performing edge detection on the image to be processed to determine edge pixels contained in the image to be processed;
    counting the number of edge pixels contained in the image to be processed as a first pixel number; and
    if the first pixel number is greater than a first number threshold, determining that moiré is present in the image to be processed.
  2. The method according to claim 1, wherein the counting the number of edge pixels contained in the image to be processed as the first pixel number comprises:
    binarizing the image to be processed to obtain a binarized image, wherein the binarized image contains first color pixels and second color pixels, the first color pixels correspond to the edge pixels in the image to be processed, and the second color pixels correspond to the pixels in the image to be processed other than the edge pixels; and
    counting the number of first color pixels in the binarized image as the first pixel number.
  3. The method according to claim 1, wherein the determining that moiré is present in the image to be processed if the first pixel number is greater than the first number threshold comprises:
    if the first pixel number is greater than the first number threshold, counting the number of edge pixels of a specified color as a second pixel number; and
    if the second pixel number is greater than a second number threshold, determining that moiré is present in the image to be processed, wherein the second number threshold is less than or equal to the first number threshold.
  4. The method according to claim 3, wherein the counting the number of edge pixels of the specified color as the second pixel number comprises:
    clustering the edge pixels of the specified color, each class of edge pixels obtained by the clustering forming a continuous edge line; and
    taking edge lines whose edge length exceeds a length threshold as target edge lines, and counting the total number of edge pixels contained in the target edge lines as the second pixel number.
  5. The method according to claim 1, further comprising:
    extracting the smallest rectangular area containing all edge pixels in the image to be processed, and counting the number of pixels contained in the smallest rectangular area as an area pixel number; and
    obtaining the first number threshold according to the area pixel number.
  6. The method according to claim 5, wherein the extracting the smallest rectangular area containing all edge pixels in the image to be processed comprises:
    establishing a coordinate system according to the image to be processed, and obtaining the pixel coordinates of each edge pixel in the coordinate system; and
    determining vertex coordinates according to the obtained pixel coordinates of the edge pixels, and determining the smallest rectangular area according to the vertex coordinates.
  7. The method according to any one of claims 1 to 6, further comprising:
    if the first pixel number is less than or equal to the first number threshold, determining that no moiré is present in the image to be processed; and
    recognizing the image to be processed to obtain an image classification label corresponding to the image to be processed, the image classification label being used to mark the classification of the image to be processed.
  8. A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor, the following operations are implemented:
    acquiring an image to be processed;
    performing edge detection on the image to be processed to determine edge pixels contained in the image to be processed;
    counting the number of edge pixels contained in the image to be processed as a first pixel number; and
    if the first pixel number is greater than a first number threshold, determining that moiré is present in the image to be processed.
  9. The computer-readable storage medium according to claim 8, wherein when the computer program is executed by the processor to count the number of edge pixels contained in the image to be processed as the first pixel number, the following operations are further performed:
    binarizing the image to be processed to obtain a binarized image, wherein the binarized image contains first color pixels and second color pixels, the first color pixels correspond to the edge pixels in the image to be processed, and the second color pixels correspond to the pixels in the image to be processed other than the edge pixels; and
    counting the number of first color pixels in the binarized image as the first pixel number.
  10. The computer-readable storage medium according to claim 8, wherein when the computer program is executed by the processor to determine that moiré is present in the image to be processed if the first pixel number is greater than the first number threshold, the following operations are further performed:
    if the first pixel number is greater than the first number threshold, counting the number of edge pixels of a specified color as a second pixel number; and
    if the second pixel number is greater than a second number threshold, determining that moiré is present in the image to be processed, wherein the second number threshold is less than or equal to the first number threshold.
  11. The computer-readable storage medium according to claim 10, wherein when the computer program is executed by the processor to count the number of edge pixels of the specified color as the second pixel number, the following operations are further performed:
    clustering the edge pixels of the specified color, each class of edge pixels obtained by the clustering forming a continuous edge line; and
    taking edge lines whose edge length exceeds a length threshold as target edge lines, and counting the total number of edge pixels contained in the target edge lines as the second pixel number.
  12. The computer-readable storage medium according to claim 8, wherein when the computer program is executed by the processor, the following operations are further performed:
    extracting the smallest rectangular area containing all edge pixels in the image to be processed, and counting the number of pixels contained in the smallest rectangular area as an area pixel number; and
    obtaining the first number threshold according to the area pixel number.
  13. The computer-readable storage medium according to claim 12, wherein when the computer program is executed by the processor to extract the smallest rectangular area containing all edge pixels in the image to be processed, the following operations are further performed:
    establishing a coordinate system according to the image to be processed, and obtaining the pixel coordinates of each edge pixel in the coordinate system; and
    determining vertex coordinates according to the obtained pixel coordinates of the edge pixels, and determining the smallest rectangular area according to the vertex coordinates.
  14. The computer-readable storage medium according to any one of claims 8 to 13, wherein when the computer program is executed by the processor, the following operations are further performed:
    if the first pixel number is less than or equal to the first number threshold, determining that no moiré is present in the image to be processed; and
    recognizing the image to be processed to obtain an image classification label corresponding to the image to be processed, the image classification label being used to mark the classification of the image to be processed.
  15. 一种电子设备,包括存储器及处理器,所述存储器中储存有计算机可读指令,所述指令被所述处理器执行时,使得所述处理器执行如下操作:
    获取待处理图像;
    对所述待处理图像进行边缘检测,确定所述待处理图像中包含的边缘像素点;
    统计所述待处理图像中包含的边缘像素点的数量,作为第一像素数量;
    若所述第一像素数量大于第一数量阈值,则确定所述待处理图像中存在摩尔纹。
  16. 根据权利要求15所述的电子设备,其特征在于,所述处理器执行所述统计所述待处理图像中包含的边缘像素点的数量,作为第一像素数量时,还执行如下操作:
    将所述待处理图像进行二值化处理得到二值化图像,其中所述二值化图像中包含第一颜色像素点和第二颜色像素点,第一颜色像素点对应所述待处理图像中的边缘像素点,第二颜色像素点对应所述待处理图像中除边缘像素点之外的其他像素点;
    统计所述二值化图像中第一颜色像素点的数量,作为第一像素数量。
  17. 根据权利要求15所述的电子设备,其特征在于,所述处理器执行所述若所述第一像素数量大于第一数量阈值,则确定所述待处理图像中存在摩尔纹时,还执行如下操作:
    若所述第一像素数量大于第一数量阈值,则统计指定颜色的边缘像素点的数量,作为第二像素数量;
    若所述第二像素数量大于第二数量阈值,则确定所述待处理图像中存在摩尔纹;其中,所述第二数量阈值小于或等于所述第一数量阈值。
  18. The electronic device according to claim 17, wherein when the processor counts the number of edge pixels of the specified color as the second pixel count, the following operations are further performed:
    clustering the edge pixels of the specified color, and forming a continuous edge line from each class of edge pixels obtained by the clustering; and
    taking edge lines whose length exceeds a length threshold as target edge lines, and counting the total number of edge pixels contained in the target edge lines as the second pixel count.
  19. The electronic device according to claim 15, wherein the processor further performs the following operations:
    extracting a minimal rectangular region containing all edge pixels in the image to be processed, and counting the number of pixels contained in the minimal rectangular region as a region pixel count; and
    acquiring the first count threshold according to the region pixel count.
  20. The electronic device according to any one of claims 15 to 19, wherein the processor further performs the following operations:
    determining that no moiré pattern exists in the image to be processed if the first pixel count is less than or equal to the first count threshold; and
    recognizing the image to be processed to obtain an image classification label corresponding to the image to be processed, the image classification label being used to mark the classification of the image to be processed.
PCT/CN2019/087585 2018-06-08 2019-05-20 Image processing method, computer-readable storage medium, and electronic device WO2019233264A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19815682.0A EP3783564A4 (en) 2018-06-08 2019-05-20 IMAGE PROCESSING METHOD, COMPUTER-READABLE STORAGE MEDIUM AND ELECTRONIC DEVICE
US17/026,220 US11430103B2 (en) 2018-06-08 2020-09-19 Method for image processing, non-transitory computer readable storage medium, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810590031.5 2018-06-08
CN201810590031.5A CN108921823B (zh) 2018-06-08 2018-06-08 Image processing method and apparatus, computer-readable storage medium, and electronic device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/026,220 Continuation US11430103B2 (en) 2018-06-08 2020-09-19 Method for image processing, non-transitory computer readable storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2019233264A1 true WO2019233264A1 (zh) 2019-12-12

Family

ID=64418683

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087585 WO2019233264A1 (zh) 2018-06-08 2019-05-20 Image processing method, computer-readable storage medium, and electronic device

Country Status (4)

Country Link
US (1) US11430103B2 (zh)
EP (1) EP3783564A4 (zh)
CN (1) CN108921823B (zh)
WO (1) WO2019233264A1 (zh)


Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921823B (zh) 2018-06-08 2020-12-01 Oppo广东移动通信有限公司 Image processing method and apparatus, computer-readable storage medium, and electronic device
CN109636753B (zh) * 2018-12-11 2020-09-18 珠海奔图电子有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN109584774B (zh) * 2018-12-29 2022-10-11 厦门天马微电子有限公司 Edge processing method for a display panel, and display panel
CN110097533B (zh) * 2019-02-12 2023-04-07 哈尔滨新光光电科技股份有限公司 Accurate measurement method for the size and position of a light spot
CN110111281A (zh) * 2019-05-08 2019-08-09 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN111340714B (zh) * 2019-08-29 2023-08-08 杭州海康慧影科技有限公司 Moiré pattern processing method and apparatus, and electronic device
CN112967182B (zh) * 2019-12-12 2022-07-29 杭州海康威视数字技术股份有限公司 Image processing method, apparatus, device, and storage medium
CN111144425B (зh) * 2019-12-27 2024-02-23 五八有限公司 Method and apparatus for detecting screen-shot pictures, electronic device, and storage medium
CN111583129A (зh) * 2020-04-09 2020-08-25 天津大学 Moiré removal method for screen-shot images based on the AMNet convolutional neural network
CN111724404A (зh) * 2020-06-28 2020-09-29 深圳市慧鲤科技有限公司 Edge detection method and apparatus, electronic device, and storage medium
CN111862244A (зh) * 2020-07-16 2020-10-30 安徽慧视金瞳科技有限公司 Intelligent color sorting method for plastic sheets based on image processing
CN112819710B (зh) * 2021-01-19 2022-08-09 郑州凯闻电子科技有限公司 Artificial-intelligence-based adaptive compensation method and system for the rolling-shutter (jelly) effect of unmanned aerial vehicles
CN113435287A (зh) * 2021-06-21 2021-09-24 深圳拓邦股份有限公司 Lawn obstacle recognition method and apparatus, mowing robot, and readable storage medium
CN116128877B (зh) * 2023-04-12 2023-06-30 山东鸿安食品科技有限公司 Intelligent exhaust-steam recovery monitoring system based on temperature detection
CN117422716B (зh) * 2023-12-19 2024-03-08 沂水友邦养殖服务有限公司 Artificial-intelligence-based ecological early-warning method and system for broiler farming

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7664316B2 (en) * 2004-08-24 2010-02-16 Sharp Kabushiki Kaisha Image processing apparatus, imaging apparatus, image processing method, image processing program and recording medium
CN103645036A (zh) * 2013-12-30 2014-03-19 京东方科技集团股份有限公司 Moiré pattern evaluation method and evaluation device
WO2017058349A1 (en) * 2015-09-30 2017-04-06 Csr Imaging Us, Lp Systems and methods for selectively screening image data
CN106875346A (zh) * 2016-12-26 2017-06-20 奇酷互联网络科技(深圳)有限公司 Image processing method and apparatus, and terminal device
CN108921823A (zh) * 2018-06-08 2018-11-30 Oppo广东移动通信有限公司 Image processing method and apparatus, computer-readable storage medium, and electronic device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3881119B2 (ja) * 1998-12-15 2007-02-14 富士通株式会社 Moiré removal device
JP4032632B2 (ja) * 2000-11-02 2008-01-16 コニカミノルタビジネステクノロジーズ株式会社 Image processing apparatus
JP2003234893A (ja) * 2002-02-06 2003-08-22 Ricoh Co Ltd Image processing apparatus
JP2007124287A (ja) * 2005-10-28 2007-05-17 Seiko Epson Corp Image adjustment method, image adjustment apparatus, and image adjustment program
US7508994B2 (en) * 2005-12-05 2009-03-24 Eastman Kodak Company Method for detecting streaks in digital images
JP4926568B2 (ja) * 2006-06-29 2012-05-09 キヤノン株式会社 Image processing apparatus, image processing method, and image processing program
JP2010206725A (ja) * 2009-03-05 2010-09-16 Sharp Corp Image processing apparatus, image forming apparatus, image processing method, program, and recording medium
US20170337711A1 (en) 2011-03-29 2017-11-23 Lyrical Labs Video Compression Technology, LLC Video processing and encoding
CN103123691B (zh) * 2013-02-26 2019-02-12 百度在线网络技术(北京)有限公司 Moiré fringe filtering method and apparatus
JP2015190776A (ja) * 2014-03-27 2015-11-02 キヤノン株式会社 Image processing apparatus and imaging system
CN104657975B (zh) * 2014-05-13 2017-10-24 武汉科技大学 Method for detecting horizontal stripe disturbance in video images
CN104486534B (zh) 2014-12-16 2018-05-15 西安诺瓦电子科技有限公司 Moiré pattern detection and suppression method and apparatus
ITUB20153912A1 (it) 2015-09-25 2017-03-25 Sisvel Tech S R L Methods and apparatuses for encoding and decoding digital images by means of superpixels
US20180059275A1 (en) * 2016-08-31 2018-03-01 Chevron U.S.A. Inc. System and method for mapping horizons in seismic images
CN106936964B (zh) * 2016-12-14 2019-11-19 惠州旭鑫智能技术有限公司 Mobile phone screen corner detection method based on Hough transform template matching
CN107424123B (zh) * 2017-03-29 2020-06-23 北京猿力教育科技有限公司 Moiré removal method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3783564A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034389A (zh) * 2021-03-17 2021-06-25 武汉联影智融医疗科技有限公司 Image processing method and apparatus, computer device, and storage medium
CN113034389B (zh) * 2021-03-17 2023-07-25 武汉联影智融医疗科技有限公司 Image processing method and apparatus, computer device, and storage medium
CN116630312A (zh) * 2023-07-21 2023-08-22 山东鑫科来信息技术有限公司 Visual inspection method for the grinding quality of a constant-force floating grinding head
CN116630312B (zh) * 2023-07-21 2023-09-26 山东鑫科来信息技术有限公司 Visual inspection method for the grinding quality of a constant-force floating grinding head

Also Published As

Publication number Publication date
US20210004952A1 (en) 2021-01-07
EP3783564A4 (en) 2021-06-09
EP3783564A1 (en) 2021-02-24
CN108921823B (zh) 2020-12-01
CN108921823A (zh) 2018-11-30
US11430103B2 (en) 2022-08-30

Similar Documents

Publication Publication Date Title
WO2019233264A1 (zh) Image processing method, computer-readable storage medium, and electronic device
CN110717942B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
EP3849170B1 (en) Image processing method, electronic device, and computer-readable storage medium
WO2019085792A1 (en) Image processing method and device, readable storage medium and electronic device
CN108717530B (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
CN107481186B (zh) Image processing method and apparatus, computer-readable storage medium, and computer device
EP3480784B1 (en) Image processing method, and device
CN108833785B (zh) Multi-view image fusion method and apparatus, computer device, and storage medium
CN113766125B (zh) Focusing method and apparatus, electronic device, and computer-readable storage medium
CN109685853B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110661977B (zh) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN107563979B (zh) Image processing method and apparatus, computer-readable storage medium, and computer device
CN107704798B (zh) Image blurring method and apparatus, computer-readable storage medium, and computer device
CN109559353B (zh) Camera module calibration method and apparatus, electronic device, and computer-readable storage medium
CN111368819B (zh) Light spot detection method and apparatus
CN107959841B (zh) Image processing method and apparatus, storage medium, and electronic device
WO2019105260A1 (zh) Depth-of-field acquisition method, apparatus, and device
CN109068060B (zh) Image processing method and apparatus, terminal device, and computer-readable storage medium
CN113313626A (zh) Image processing method and apparatus, electronic device, and storage medium
CN110490196A (zh) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN110365897B (zh) Image correction method and apparatus, electronic device, and computer-readable storage medium
CN108737733B (zh) Information prompting method and apparatus, electronic device, and computer-readable storage medium
CN112581481A (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN109040598B (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
TWI676965B (zh) Object image recognition system and object image recognition method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19815682

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019815682

Country of ref document: EP

Effective date: 20201119

NENP Non-entry into the national phase

Ref country code: DE