WO2024016791A1 - Method, device, and computer-readable storage medium for processing graphic symbols - Google Patents

Method, device, and computer-readable storage medium for processing graphic symbols

Info

Publication number
WO2024016791A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
processed
color space
brightness component
processing
Prior art date
Application number
PCT/CN2023/092278
Other languages
English (en)
French (fr)
Inventor
陈晓艺
陈飞
江冠南
Original Assignee
宁德时代新能源科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 宁德时代新能源科技股份有限公司 filed Critical 宁德时代新能源科技股份有限公司
Priority to EP23762124.8A priority Critical patent/EP4332879A1/en
Priority to US18/517,022 priority patent/US20240086661A1/en
Publication of WO2024016791A1 publication Critical patent/WO2024016791A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/146Methods for optical code recognition the method including quality enhancement steps
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/146Aligning or centring of the image pick-up or image-field
    • G06V30/1473Recognising objects as potential recognition candidates based on visual cues, e.g. shapes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/16Image preprocessing
    • G06V30/168Smoothing or thinning of the pattern; Skeletonisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1408Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/14172D bar codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation

Definitions

  • the present application relates to the field of image processing technology, and more specifically, to a method, device and computer-readable storage medium for processing graphic symbols.
  • Images captured in real environments that contain two-dimensional codes (QR codes) or bar codes (BS) may suffer from uneven brightness, stains, and noise caused by contamination, illumination, and other external factors, making them difficult to decode correctly; the collected images therefore often need to be repaired before the information they contain can be recognized.
  • Embodiments of the present application provide a method, device, and computer-readable storage medium for processing graphic symbols, which can improve the efficiency of image repair.
  • A first aspect provides a method for processing graphic symbols, including: obtaining a brightness component image of an image to be processed; performing binarization processing based on the brightness component image of the image to be processed to obtain a binarized image; and performing grayscale morphological operations based on the binarized image to obtain a target image, where the image to be processed contains graphic symbols.
  • In this way, a high-quality target image can be obtained; the graphic outlines contained in the target image are clear and have little noise, which helps to improve the accuracy and efficiency of subsequent pattern recognition.
  • Obtaining the brightness component image of the image to be processed includes: converting the color space of the image to be processed into a target color space according to the mapping relationship between the color space of the image to be processed and the target color space; and extracting the brightness component image of the target color space to obtain the brightness component image of the image to be processed.
  • In this way, the color space of an image to be processed that does not originally include a brightness component can be converted into a target color space that does include a brightness component, thereby enabling extraction of the brightness component of the image to be processed.
  • the color space of the image to be processed may be RGB color space or BGR color space; the target color space may be YCbCr color space, YCrCb color space or YUV color space.
  • an image containing only color components is mapped into a color space containing brightness components, making the image mapping method simpler and more robust, which is beneficial to the identification of areas of interest in the image.
  • In the mapping relationship (of the form Y = k_r·R + k_g·G + k_b·B, as given later in the description), Y is the brightness value of the pixel in the brightness component image of the target color space, and R, G, and B are the red, green, and blue chromaticity values of the pixel in the original image, respectively.
  • In this way, the image component of the image to be processed that carries the richest texture and structural feature information can be obtained, which is beneficial to subsequent processing and recognition.
  • In some embodiments, before the binarization processing based on the brightness component image of the image to be processed is performed, the method for processing graphic symbols further includes: performing image enhancement on the brightness component image of the image to be processed.
  • the visual effect of the image can be improved, the image can be made clearer, and the interpretation and recognition effect of the graphic symbols in the image can be enhanced.
  • performing image enhancement on the brightness component image of the image to be processed includes: using a point arithmetic algorithm to perform image enhancement on the brightness component image.
  • the gray scale range occupied by the image data can be changed and the contrast of the feature of interest can be expanded.
  • using a point operation algorithm to enhance the brightness component image of the image to be processed includes: performing contrast stretching on the brightness component image of the image to be processed.
  • the grayscale can be adjusted according to the characteristics of the image, so that the contrast of the image is transformed into an appropriate range, and the difference in grayscale in different areas of the image is expanded to facilitate subsequent binarization processing.
  • Performing contrast stretching on the brightness component image of the image to be processed includes: traversing the pixels in the brightness component image of the image to be processed and determining the grayscale value of each pixel; determining a contrast stretching function according to the grayscale range of those grayscale values; and performing grayscale transformation on the pixels in the brightness component image of the image to be processed based on the contrast stretching function.
  • In this way, the difference between the foreground and the background can be expanded, making the features of interest more prominent.
  • performing binarization processing based on the brightness component image of the image to be processed includes: using a local adaptive binarization algorithm to perform binarization processing on the brightness component image of the image to be processed.
  • the brightness component image of the image to be processed is binarized, which can remove the light and dark information contained in the grayscale image and convert the grayscale image into a black and white image, which is beneficial to the processing and recognition of the image in the subsequent process.
  • Using a local adaptive binarization algorithm to binarize the brightness component image of the image to be processed includes: determining the size of a binarization processing window; traversing each pixel of the brightness component image of the image to be processed with the binarization processing window; calculating the sum of the pixel values of all pixels covered by the binarization processing window; determining the grayscale threshold for the binarization processing window; and, when the sum of pixel values is greater than or equal to the grayscale threshold, setting the pixel value of the pixel at the center of the binarization processing window to 1, and otherwise setting it to 0.
  • the grayscale threshold can be determined according to the following formula:
  • T is the grayscale threshold
  • n represents the side length of the binarization processing window
  • v_ij represents the grayscale value of the pixel in the i-th row and j-th column of the binarization processing window
  • C is a constant term
  • The value of the constant term C can be determined based on actual image processing needs.
  • the gray threshold T can use a single variable control method, Bayesian optimization or other parameter optimization methods to find the optimal solution.
  • In this way, the binarization threshold at each pixel position can be determined by the distribution of pixels in its surrounding neighborhood instead of being fixed, so that the binarization threshold of image areas with higher brightness is increased while the binarization threshold of image areas with lower brightness is reduced accordingly. This enables reasonable binarization of local image areas with different brightness, contrast, and texture, and reduces the impact of the obvious grayscale differences that uneven illumination causes between local areas of the actually collected image.
  • the image processing method further includes: filtering the binarized image.
  • filtering the binarized image includes: performing edge-preserving filtering on the binarized image.
  • noise in the image can be filtered out, which is beneficial to image recognition, and edge-preserving filtering can filter out as much noise as possible while retaining more edge details.
  • Performing edge-preserving filtering on the binarized image includes: converting the binarized image into an RGB image; performing color mean shift on all pixels of the RGB image; and converting the shifted RGB image back into a binarized image.
  • the binary image can be made smoother and the calculation amount of morphological operations can be reduced.
  • performing grayscale morphological operations based on the binarized image includes: performing morphological closing operations and opening operations on the binarized image.
  • the noise in the image can be filtered out while keeping the area size of the image unchanged.
  • The closing operation includes: selecting a first structural element based on the binarized image; and sequentially performing dilation and erosion on the closing operation area in the binarized image according to the first structural element and preset closing operation rules.
  • The opening operation includes: selecting a second structural element based on the image after the closing operation; and sequentially performing erosion and dilation on the opening operation area in the binarized image according to the second structural element and preset opening operation rules.
  • In this way, boundaries can be smoothed, fine spikes eliminated, and narrow connections broken, thereby eliminating small holes in the image; noise that could not be removed in the edge-preserving filtering step can also be further filtered out, so that most of the noise in the image is removed.
  • the graphic symbol in the image to be processed is a two-dimensional code or a barcode.
  • an image containing a QR code or barcode can be repaired, and the outline of the QR code or barcode in the repaired image is clear, making it easy to be recognized by a code scanning device.
  • A device for processing graphic symbols is provided, including: an acquisition module for acquiring a brightness component image of an image to be processed; a binarization processing module for performing binarization based on the brightness component image of the image to be processed and outputting a binarized image; and an operation module for performing grayscale morphological operations based on the binarized image and outputting a target image, where the image to be processed is an image containing graphic symbols.
  • the graphic symbols contained in the image to be processed are QR codes or barcodes.
  • In this way, a high-quality target image can be obtained; the graphics contained in the target image have clear outlines and little noise, which helps to improve the accuracy and efficiency of subsequent graphic recognition.
  • The acquisition module is used to: convert the color space of the image to be processed into the target color space according to the mapping relationship between the color space of the image to be processed and the target color space; and extract the brightness component image of the target color space.
  • the color space of the image to be processed is RGB color space or BGR color space
  • the target color space is YCbCr color space, YCrCb color space or YUV color space.
  • Y is the brightness value of the pixel in the brightness component image of the target color space
  • R, G, and B are respectively the red chromaticity value, green chromaticity value, and blue chromaticity value of the pixel in the image to be processed
  • the image processing device further includes an image enhancement module, configured to perform image enhancement on the brightness component image of the image to be processed before the binarization process.
  • the image enhancement module is configured to use a point arithmetic algorithm to enhance the brightness component image of the image to be processed.
  • the image enhancement module is used to perform contrast stretching on the brightness component image of the image to be processed.
  • The image enhancement module is configured to traverse the pixels in the brightness component image of the image to be processed and determine the grayscale value of each pixel; determine the contrast stretching function according to the grayscale range of those grayscale values; and perform grayscale transformation on the pixels in the brightness component image of the image to be processed according to the contrast stretching function.
  • the binarization processing module is used to binarize the enhanced image using a local adaptive binarization algorithm.
  • The binarization processing module is used to: determine the size of the binarization processing window; traverse each pixel of the brightness component image of the image to be processed, or of the enhanced image, with the binarization processing window; calculate the sum of the pixel values of all pixels covered by the binarization processing window; and, when the sum of pixel values is greater than or equal to the preset threshold, set the pixel value of the pixel at the center of the window to 1, and otherwise set it to 0.
  • The image processing device further includes a filtering module for filtering the binarized image before the morphological operations.
  • the filtering module is used to perform edge-preserving filtering on the binarized image.
  • The filtering module is used to: convert the binarized image into an RGB image; perform color mean shift on all pixels of the RGB image; and convert the color-mean-shifted RGB image back into a binarized image.
  • the operation module is used to sequentially perform morphological closing operations and opening operations on the binary image.
  • The operation module includes a closing operation unit for selecting a first structural element based on the binarized image, and sequentially performing dilation and erosion on the closing operation area in the binarized image according to the first structural element and preset closing operation rules.
  • The operation module further includes an opening operation unit for selecting a second structural element based on the image after the closing operation, and sequentially performing erosion and dilation on the opening operation area in the binarized image according to the second structural element and preset opening operation rules.
  • a device for processing graphic symbols including a processor and a memory.
  • the memory is used to store a program.
  • The processor is used to call and run the program from the memory to execute the method for processing graphic symbols in the above first aspect or any possible implementation of the first aspect.
  • a computer-readable storage medium for storing a computer program.
  • When the computer program is run on a computer, it causes the computer to execute the method for processing graphic symbols in the above first aspect or any possible implementation of the first aspect.
  • Figure 1 is a schematic structural diagram of the provided system architecture
  • Figure 2 is a schematic flow chart of a method for processing graphic symbols according to an embodiment of the present application
  • Figure 3 is a schematic flow chart of a method for processing graphic symbols according to another embodiment of the present application.
  • Figure 4 is an image of each component of the target color space in the method of processing graphic symbols according to the embodiment of the present application.
  • Figure 5 is a grayscale histogram before and after contrast stretching in the method of processing graphic symbols according to the embodiment of the present application;
  • Figure 6 is a process image of the method for processing graphic symbols according to the embodiment of the present application.
  • Figure 7 is a schematic structural block diagram of a device for processing graphic symbols according to an embodiment of the present application.
  • Figure 8 is a schematic structural block diagram of a device for processing graphic symbols according to an embodiment of the present application.
  • It should be understood that the sequence numbers of the processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • Embodiments of the present application can be applied to the processing of images containing features such as shape, texture, etc., so that the information contained in the image can be more easily recognized in subsequent processes.
  • Embodiments of this application include but are not limited to the processing of QR code images.
  • the images may be images captured by a charge coupled device (CCD) camera, images captured by other cameras, or screen captures, etc.
  • the embodiment disclosed in this application does not limit the image collection method.
  • Because QR codes have the advantages of large information capacity, easy identification, and low cost, they have developed rapidly and been widely used in recent years. By scanning a QR code, functions such as information acquisition, mobile payment, anti-counterfeiting traceability, and account login can be realized.
  • Common two-dimensional codes include the QR (Quick Response) code, the PDF417 (Portable Data File 417) code, and the Data Matrix code.
  • The Data Matrix code is a two-dimensional code designed for industrial products. Its minimum size is the smallest among all current barcodes, which makes it especially suitable for marking small parts and for printing directly on physical objects.
  • Each Data Matrix code consists of a data area, a finder pattern, alignment patterns, and a blank (quiet) zone, where the data area is composed of regularly arranged square modules.
  • The data area is surrounded by the finder pattern, and the finder pattern is surrounded by the blank zone; within the symbol, the data area is separated by the alignment patterns.
  • The finder pattern forms the boundary of the data area. Two of its adjacent sides are dark solid lines, which are mainly used to define the physical size and to locate the symbol; the other two adjacent sides are composed of alternating dark and light modules, which are mainly used to define the cell structure of the symbol and can also help determine physical size and distortion.
  • Because a Data Matrix code can be read accurately when only 20% of its data is readable, it is very suitable for situations where barcodes are easily damaged, for example on parts exposed to special environments such as high heat, chemical cleaners, and mechanical abrasion. When such a code is damaged, image restoration is required before the information it contains can be read.
  • Existing technology usually first performs grayscale processing and image enhancement on the acquired color QR code image, then uses a deep learning model trained on a large amount of data to repair the damaged QR code area in the image, and finally reads the repaired QR code image.
  • the training of the deep learning model first requires the collection of sample sets and training sets.
  • the training set is various types of damaged QR code images, and the sample set is clear QR code images corresponding to the damaged QR code images in the training set.
  • The sample set and training set also need to go through manual classification and other processes before they can be used for model training. Repairing QR code images with deep learning methods therefore consumes a great deal of time in collecting, analyzing, and organizing data and in training the model, so its efficiency is low; it is also highly dependent on data, which makes it difficult for the trained model to be applied widely.
  • embodiments of the present application provide a method for processing graphic symbols, which uses digital image processing methods to perform brightness component extraction, image enhancement, image binarization processing, filtering and morphological operations on the collected images containing graphic symbols.
  • the final target image can show clear shape features of the area of interest.
  • the image can be quickly repaired, and the Data Matrix QR code in the repaired image can be correctly identified.
  • this embodiment of the present application provides a system architecture 100.
  • the image acquisition device 110 is used to input an image 1101 to be processed to the image processing device 120 .
  • The image collection device 110 can be any device with image shooting or collection functions, such as a camera, a video camera, a scanner, a mobile phone, a tablet computer, or a code scanner.
  • The image 1101 to be processed is an image shot or collected by such a device; the image collection device 110 can also be a device with a data storage function in which the image 1101 to be processed is stored.
  • This application does not limit the type of the image acquisition device 110 .
  • the image to be processed is an image containing graphic symbols, and optionally, may be an image containing a QR code or a barcode.
  • The image processing device 120 is used to process the image 1101 to be processed and output the target image 1201.
  • the target image 1201 is an image that can clearly embody certain characteristics such as certain shapes, textures, etc. after being processed by the image processing device 120 , that is, an image that can clearly embody certain characteristics that the user is interested in.
  • the target image 1201 may be an image that can be correctly recognized by the QR code or barcode recognition device after processing.
  • The image processing device 120 can be any device with image processing functions, such as a computer, a smartphone, a workstation, or another device with a central processing unit. This application does not limit the type of the image processing device 120.
  • the above-mentioned image acquisition device 110 and the above-mentioned image processing device 120 may be the same device.
  • the image acquisition device 110 and the image processing device 120 are both smart phones, or both are code scanners.
  • the image acquisition device 110 and the image processing device 120 may be different devices.
  • For example, the image acquisition device 110 is a terminal device, and the image processing device 120 is a computer, a workstation, or another such device.
  • The image acquisition device 110 can interact with the image processing device 120 through a communication network using any communication mechanism or communication standard; the communication network can be a wide area network, a local area network, a point-to-point connection, etc., or any combination thereof.
  • the image to be processed in this embodiment is a QR code image collected by an image acquisition device.
  • the image acquisition device may be a QR code scanning device, such as a code scanning gun, or a camera or other device.
  • This method is not limited to the processing of QR code images, and can also be applied to the processing of other images. This application does not limit the application objects of this method.
  • FIGS 2 and 3 show a schematic flowchart of a method 200 for processing graphic symbols according to an embodiment of the present application. This method can be implemented using the image processing device 120 shown in Figure 1.
  • the brightness component image is an image that embodies the brightness information of the picture. Images in different formats correspond to different color spaces, and different color spaces contain different channels, such as brightness channels and chroma channels.
  • the component image corresponding to the brightness channel is a grayscale image, that is, an image marked with grayscale.
  • the sampling rate of the luma channel is higher than that of the chroma channel, so generally speaking, the luma component image refers to a grayscale image.
  • Grayscale images can be obtained through the floating-point method, the component method, the maximum method, the average method, the weighted-average method, or a Gamma correction algorithm, or generated by the color adjustment tools included in multimedia editing software, most of which produce grayscale images by performing the above operations on the image; they can also be obtained through color space conversion.
  • obtaining the brightness component image of the image to be processed in step 210 includes: converting the image to be processed from the original color space to the target color space, and extracting the brightness component image of the target color space, Obtain the brightness component image of the image to be processed.
  • the original color space is the color space of the image to be processed.
  • The original color space depends on the format of the image to be processed; it can be a color space that describes colors purely in terms of color components, such as the RGB or BGR color space, a color space that already includes a brightness channel, such as YUV or YCbCr, or another color space.
  • RGB color space and BGR color space are two color spaces commonly used for color images.
  • the RGB color space includes three channels, namely R (Red, red) channel, G (Green, green) channel and B (Blue, blue) channel.
  • the target color space is a color space including a brightness channel such as YUV, YCbCr, etc.
  • the YCbCr color space includes a Y (brightness) channel, a Cb (blue chroma) channel and a Cr (red chroma) channel.
  • If the color space of the image to be processed does not contain brightness information, it needs to be converted into the target color space first so that the brightness component can be extracted. Converting the color space separates brightness from chrominance, so that the needed information can be seen more intuitively.
  • color space conversion can be performed based on the mapping relationship between the color space of the image to be processed and the target color space.
  • In this mapping relationship, Y is the brightness value of the pixel in the brightness component image of the target color space, and R, G, and B are the red, green, and blue chromaticity values of the pixel in the original image, respectively.
  • The mapping of the YCbCr color space is more intuitive and robust than that of the RGB or BGR color space.
  • In this way, the brightness and chroma of the image can be separated, so that the information contained in the image is displayed more intuitively; not only can the outline features of the QR code be displayed more clearly, it is also conducive to the recognition of Data Matrix codes.
  • the brightness component image of the image to be processed may be obtained by performing operations on the image to be processed, or may be obtained by converting the image to be processed from the original color space to the target color space as described above.
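  • As an illustration of the conversion described above, the following Python/OpenCV sketch extracts the Y (brightness) component of a BGR image. The function names and the BT.601 weighting factors (0.299, 0.587, 0.114) are assumptions for illustration; this text only requires k_r, k_g, and k_b to be weighting factors.

```python
import cv2
import numpy as np

def luminance_component(image_bgr: np.ndarray) -> np.ndarray:
    """Map a BGR image into the YCrCb color space and return the Y channel."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV orders the channels Y, Cr, Cb
    return ycrcb[:, :, 0]

def luminance_weighted(image_bgr: np.ndarray,
                       kr: float = 0.299, kg: float = 0.587, kb: float = 0.114) -> np.ndarray:
    """Equivalent explicit weighted sum Y = kr*R + kg*G + kb*B (BT.601 factors assumed)."""
    b, g, r = cv2.split(image_bgr.astype(np.float32))
    return np.clip(kr * r + kg * g + kb * b, 0, 255).astype(np.uint8)
```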
  • Image binarization determines that all pixels whose grayscale is greater than or equal to a certain threshold belong to a specific object and sets their grayscale value to 255, while the remaining pixels are excluded from the object area and set to a grayscale value of 0, representing the background or other objects; that is, it is the process of giving the entire image an obvious black-and-white appearance.
  • the binarization of the image can greatly reduce the amount of data in the image, thereby highlighting the outline of the target.
  • The binarization algorithm may be a global fixed threshold method, a local adaptive threshold method, the OTSU (Otsu) binarization algorithm, or another method.
  • The global fixed threshold method uses the same threshold to binarize the entire image, while the local adaptive threshold method determines the binarization threshold at a pixel position based on the pixel value distribution of that pixel's neighborhood block.
  • The adaptive binarization algorithm can be used to divide the image into a foreground part and a background part according to its grayscale characteristics, thereby obtaining a binarized image and achieving reasonable binarization of local image areas with different brightness, contrast, and texture.
  • Optionally, a local adaptive binarization algorithm is used.
  • The adaptive binarization algorithm can be the Wolf local adaptive binarization algorithm, the Niblack binarization algorithm, or the Sauvola binarization algorithm.
  • the local adaptive binarization algorithm includes the following steps:
  • the grayscale threshold can be determined according to the following formula:
  • T is the grayscale threshold
  • n represents the side length of the binarization processing window
  • v_ij represents the grayscale value of the pixel in the i-th row and j-th column of the binarization processing window
  • C is a constant selected according to actual image processing requirements.
  • the gray threshold T can use a single variable control method, Bayesian optimization or other parameter optimization methods to find the optimal solution.
  • an adaptive binarization algorithm is used to binarize the enhanced image.
  • In this way, the binarization threshold at each pixel position is not fixed but is determined by the distribution of pixels in its surrounding neighborhood, so an image area with higher brightness gets a higher binarization threshold while an image area with lower brightness gets a correspondingly lower one. It can be seen from (c) and (d) in Figure 6 that the contours in the binarized image obtained by local adaptive binarization are clearer than those in the enhanced image.
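  • The sketch below shows one common realization of such a local adaptive binarization, using OpenCV's mean-based adaptive threshold. The exact threshold formula of this embodiment is not reproduced in this text, so treating the threshold as the local mean of the n×n window minus the constant C is an assumption, and the output uses 0/255 rather than 0/1.

```python
import cv2
import numpy as np

def local_adaptive_binarize(gray: np.ndarray, n: int = 31, C: float = 10.0) -> np.ndarray:
    """Binarize a grayscale image with a per-pixel threshold computed from its
    n*n neighborhood (threshold = local mean - C, an assumed concrete form)."""
    if n % 2 == 0:
        n += 1  # the window side length must be odd for cv2.adaptiveThreshold
    return cv2.adaptiveThreshold(gray, 255,
                                 cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY,
                                 blockSize=n,
                                 C=C)
```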
  • Grayscale morphological operations include erosion, dilation, opening operations, and closing operations. Through these operations, parameters such as the grayscale values and spatial dimensions of the original image can be adjusted.
  • the grayscale morphology operation in step 250 may be based on the binarized image.
  • The closing operation includes: selecting a first structural element based on the binarized image; and sequentially performing dilation and erosion on the closing operation area in the binarized image according to the first structural element and preset closing operation rules.
  • The opening operation includes: selecting a second structural element based on the image after the closing operation; and sequentially performing erosion and dilation on the opening operation area in the binarized image according to the second structural element and preset opening operation rules.
  • Structural elements are basic elements in morphological transformation. They are images of specific shape and size designed to detect certain structural information of an image; structural elements can be circles, squares, lines, etc., and can carry information such as shape, size, grayscale, and chroma.
  • In image processing, a structural element can be regarded as a two-dimensional matrix in which the value of each matrix element is "0" or "1".
  • the size of the structural elements is smaller than the size of the image to be processed.
  • the first structural element and the second structural element can be the same structural element or different structural elements, and the size of the structural element can be adjusted according to the actual image processing effect.
  • The closing operation area and the opening operation area can be the entire area of the aforementioned binarized image, or part of it; for example, an area with a blurred outline or heavy noise can be selected as the closing operation area and/or the opening operation area.
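  • A minimal sketch of the two operations with OpenCV is shown below. A 3×3 square structural element is used for both, matching the size reported as clearest in the worked example later in this text; the square shape itself is an assumption, since circles, lines, etc. are equally possible.

```python
import cv2
import numpy as np

def close_then_open(binary: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Closing (dilate then erode) followed by opening (erode then dilate)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fill small holes, bridge gaps
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)   # remove small specks and spikes
    return opened
```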
  • the above image processing method may also include before step 230:
  • Image enhancement refers to improving the visual effects of images for image application situations to meet the needs of certain special analyses.
  • an algorithm based on the spatial domain such as a point operation algorithm or a neighborhood denoising algorithm, can be used for image enhancement; an algorithm based on the frequency domain can also be used for image enhancement.
  • Optionally, a point operation algorithm can be used to perform image enhancement on the brightness component image of the image to be processed.
  • a point operation algorithm is used to enhance the brightness component image of the image to be processed, which can be: contrast stretching of the brightness component image of the image to be processed, or grayscale histogram displacement of the brightness component image of the image to be processed.
  • the contrast stretching method may be a linear stretching method, that is, a linear proportional change is made to the pixel values of the brightness component image of the image to be processed.
  • methods such as global linear stretching, 2% linear stretching, piecewise linear stretching, grayscale window slicing, etc. can be used to linearly stretch the brightness component image of the image to be processed.
  • the contrast stretching method may be a nonlinear stretching method, that is, using a nonlinear function to stretch the image.
  • For example, functions such as exponential functions, logarithmic functions, and Gaussian functions can be used to perform nonlinear stretching of the brightness component image of the image to be processed.
  • the contrast stretching process can be expressed as follows:
  • I (x, y) is the gray value of the pixel in the brightness component image of the image to be processed
  • (x, y) is the coordinate value of the pixel
  • I min is the minimum gray value of the brightness component image of the image to be processed.
  • I max is the maximum gray value of the brightness component image of the image to be processed
  • MIN and MAX are the minimum and maximum gray values of the gray range to be stretched to; optionally, the minimum value can be 0 and the maximum value can be 255.
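  • A sketch of this stretch in Python is given below. It assumes the standard linear form I'(x, y) = (I(x, y) − I_min) / (I_max − I_min) × (MAX − MIN) + MIN implied by the variables above, since the equation itself is not reproduced in this text.

```python
import numpy as np

def contrast_stretch(lum: np.ndarray, out_min: int = 0, out_max: int = 255) -> np.ndarray:
    """Linearly map the range [I_min, I_max] of a brightness image onto [MIN, MAX]."""
    lum = lum.astype(np.float32)
    i_min, i_max = float(lum.min()), float(lum.max())
    if i_max == i_min:                      # flat image: nothing to stretch
        return np.full_like(lum, out_min, dtype=np.uint8)
    stretched = (lum - i_min) / (i_max - i_min) * (out_max - out_min) + out_min
    return np.clip(stretched, out_min, out_max).astype(np.uint8)
```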
  • step 220 may not be performed.
  • step 230 may include step 230a as shown in Figure 3:
  • the above image processing method may also include:
  • edge-preserving filtering is performed on the binarized image to filter out noise as much as possible while retaining more edge details.
  • Optionally, performing edge-preserving filtering on the binarized image includes: converting the binarized image into an RGB image; performing color mean shift on all pixels of the RGB image; and converting the shifted RGB image back into a binarized image.
  • the binary image can be made smoother and the calculation amount of morphological operations can be reduced.
  • Performing color mean shift on all pixels of the RGB image includes: determining the physical space radius and the color space radius used to establish the iteration space; taking any pixel of the RGB image as the initial center point and establishing a spatial sphere based on the physical space radius and the color space radius; using the spatial sphere as the iteration space, calculating the vector sum of the color vectors of all pixels in the iteration space relative to the center point; moving the center point to the end point of the vector sum and recalculating the vector sum until the end point of the vector sum coincides with the center point, and taking that end point as the final center point; and updating the color value of the initial center point to the color value of the final center point.
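  • The sketch below stands in for this procedure using OpenCV's off-the-shelf pyramid mean-shift filter rather than the hand-written iteration described above. The parameters sp and sr correspond to the physical-space and color-space radii mentioned here; their concrete values (10 and 20) are assumptions for illustration.

```python
import cv2
import numpy as np

def edge_preserving_filter(binary: np.ndarray, sp: int = 10, sr: int = 20) -> np.ndarray:
    """Mean-shift based edge-preserving filtering of a binary (0/255, uint8) image."""
    # Single-channel binary image -> 3-channel image so the mean shift can run on it.
    rgb = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
    shifted = cv2.pyrMeanShiftFiltering(rgb, sp=sp, sr=sr)
    # Back to a single-channel binary image.
    gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
    _, out = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    return out
```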
  • When the binarized image obtained in step 230 is of high quality and has little noise, step 240 may be omitted.
  • When there are many noise points in the binarized image obtained in step 230, a filtering operation can be performed on the binarized image in step 240 before step 250; the filtering operation removes the noise that is easy to eliminate, thereby reducing the computational complexity of step 250 and enhancing the denoising effect.
  • step 250 may include step 250a as shown in Figure 3:
  • the above image processing method is applied to the processing of DataMatrix QR code images.
  • In this embodiment, the image to be processed is a color QR code image input by the image acquisition device, and its color space is the RGB color space; the processing steps are as follows:
  • Step 1 Convert the color QR code image from RGB color space to YCbCr color space, separate the Y component, Cb component and Cr component, and extract the QR code image of the Y component, which is the brightness component image of the QR code image to be processed.
  • Figure 4 shows the image of the three components after converting the color space of the Data Matrix QR code image into the YCbCr color space using the method of processing graphic symbols of the present application, where (a) is the image of the Y component, (b) is the image of the Cb component, (c) is the image of the Cr component.
  • Step 2 Use the contrast stretching method for image enhancement on the Y component QR code image to expand the grayscale difference between the foreground and the background to obtain a stretched QR code image, that is, a grayscale image;
  • Figure 5 shows the grayscale histograms of the Data Matrix code image before and after image enhancement, where (a) is the grayscale histogram of the Y-channel component image before contrast stretching and (b) is the grayscale histogram of the enhanced image obtained after contrast stretching. As can be seen from Figure 5, the grayscale of the image before contrast stretching is relatively concentrated, so it is difficult for the image to reflect texture and shape features.
  • Step 3: Perform local adaptive binarization on the stretched QR code image, dividing it into foreground and background parts according to its grayscale characteristics to obtain a binarized QR code image.
  • (d) in Figure 6 is the binarized image obtained after binarization. It can be seen that the image contains a lot of noise, which would interfere with the subsequent recognition process; therefore, based on the characteristics of the Data Matrix code image, edge-preserving filtering is performed to initially filter out noise while improving the clarity of the graphic edges.
  • Step 4: Perform edge-preserving filtering on the binarized QR code image, filtering out as much noise as possible while retaining more edge details, to obtain the filtered QR code image. (e) in Figure 6 is the filtered image; comparing (d) and (e) in Figure 6 shows that the noise in the filtered image obtained after edge-preserving filtering is reduced compared with the binarized image. Although some noise has been filtered out, small particle noise such as tiny spikes and holes remains in the image, so in the next step morphological operations are performed on the QR code to filter out this small particle noise.
  • Step 5: Perform morphological closing and opening operations in sequence on the filtered QR code image to obtain the target QR code image. In this embodiment, when the sizes of the first structural element and the second structural element are both 3×3, the QR code obtained after the morphological operations is clearest.
  • (f) in Figure 6 is the target image obtained after image morphology operation; as can be seen from (e) and (f) in Figure 6, the target image obtained after morphological opening and closing operations has a clear outline. There is almost no noise in the picture that affects the recognition of the QR code. After a series of processing by the above method, the blurred QR code image becomes clear and can be recognized by the code scanning device.
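  • Putting steps 1 to 5 together, a minimal end-to-end sketch could look as follows; it reuses the helper functions sketched earlier in this document, and all parameter values other than the 3×3 structural element are assumptions.

```python
import cv2
import numpy as np

def repair_symbol_image(bgr: np.ndarray) -> np.ndarray:
    """End-to-end sketch of steps 1-5 (helpers are the ones sketched above)."""
    y = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0]       # step 1: extract the Y component
    enhanced = contrast_stretch(y)                             # step 2: contrast stretching
    binary = local_adaptive_binarize(enhanced, n=31, C=10.0)   # step 3: local adaptive binarization
    filtered = edge_preserving_filter(binary, sp=10, sr=20)    # step 4: edge-preserving filtering
    return close_then_open(filtered, ksize=3)                  # step 5: closing then opening
```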
  • Figure 7 shows a schematic block diagram of a device 400 for processing graphic symbols according to an embodiment of the present application, including: an acquisition module 410 for obtaining the brightness component image of the image to be processed; a binarization processing module 430 for performing binarization based on the brightness component image of the image to be processed and outputting a binarized image; and an operation module 450 for performing grayscale morphological operations based on the binarized image and outputting a target image.
  • In this way, a high-quality target image can be obtained; the graphics contained in the target image have clear outlines and little noise, which helps to improve the accuracy and efficiency of subsequent graphic recognition.
  • the acquisition module 410 is configured to: convert the color space of the image to be processed into the target color space according to the mapping relationship between the color space of the image to be processed and the target color space; extract the brightness of the target color space component image.
  • the color space of the image to be processed is RGB color space or BGR color space
  • the target color space is YCbCr color space, YCrCb color space or YUV color space.
  • Y is the brightness value of the pixel in the brightness component image of the target color space
  • R, G, and B are respectively the red chromaticity value, green chromaticity value, and blue chromaticity value of the pixel in the image to be processed
  • the image processing device 400 further includes an image enhancement module 420, configured to perform image enhancement on the brightness component image of the image to be processed, and output the enhanced image.
  • the image enhancement module 420 is configured to use a point operation algorithm to enhance the brightness component image of the image to be processed.
  • the image enhancement module 420 is used to perform contrast stretching on the brightness component image of the image to be processed.
  • The image enhancement module 420 is used to traverse the pixels in the brightness component image of the image to be processed and determine the grayscale value of each pixel; determine the contrast stretching function according to the grayscale range of those grayscale values; and perform grayscale transformation on the pixels in the brightness component image of the image to be processed according to the contrast stretching function.
  • the binarization processing module 430 is configured to use a local adaptive binarization algorithm to binarize the enhanced image.
  • the binarization processing module is used for:
  • the grayscale threshold can be determined according to the following formula:
  • T is the gray threshold
  • n represents the side length of the binarization window
  • v_ij represents the grayscale value of the pixel in the i-th row and j-th column of the binarization processing window
  • C is a constant selected according to the actual image processing requirements.
  • the image processing device 400 further includes a filtering module 440 for filtering the binarized image.
  • the filtering module 440 is used to perform edge-preserving filtering on the binarized image.
  • The operation module 450 is used to sequentially perform morphological closing and opening operations on the binarized image.
  • The operation module 450 includes a closing operation unit for selecting a first structural element based on the binarized image, and sequentially performing dilation and erosion on the closing operation area according to the first structural element and preset closing operation rules.
  • The operation module 450 also includes an opening operation unit for selecting a second structural element based on the image after the closing operation, and sequentially performing erosion and dilation on the opening operation area according to the second structural element and preset opening operation rules.
  • FIG. 8 is a schematic diagram of the hardware structure of a device for processing graphic symbols according to an embodiment of the present application.
  • The device 500 for processing graphic symbols shown in FIG. 8 includes a memory 501, a processor 502, a communication interface 503, and a bus 504.
  • the memory 501, the processor 502, and the communication interface 503 realize communication connections between each other through the bus 504.
  • The memory 501 may be a read-only memory (ROM), a static storage device, or a random access memory (RAM).
  • the memory 501 can store programs. When the program stored in the memory 501 is executed by the processor 502, the processor 502 and the communication interface 503 are used to execute various steps of the method for processing graphic symbols in the embodiment of the present application.
  • The processor 502 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, and is used to execute related programs to implement the functions required of the units in the device for processing graphic symbols in the embodiment of the present application, or to execute the method for processing graphic symbols of the embodiment of the present application.
  • the processor 502 may also be an integrated circuit chip with signal processing capabilities. During the implementation process, each step of the method for processing graphic symbols in the embodiment of the present application can be completed by instructions in the form of hardware integrated logic circuits or software in the processor 502 .
  • The above-mentioned processor 502 can also be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • Each method, step and logical block diagram disclosed in the embodiment of this application can be implemented or executed.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware processor for execution, or can be executed by a combination of hardware and software modules in the processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the memory 501.
  • The processor 502 reads the information in the memory 501 and, in combination with its hardware, completes the functions required of the units included in the device for processing graphic symbols in the embodiment of the present application, or executes the method for processing graphic symbols of the embodiment of the present application.
  • the communication interface 503 uses a transceiver device such as but not limited to a transceiver to implement communication between the device 500 and other devices or communication networks. For example, the traffic data of the unknown device can be obtained through the communication interface 503.
  • Bus 504 may include a path that carries information between various components of device 500 (eg, memory 501, processor 502, communication interface 503).
  • the above device 500 only shows a memory, a processor, a communication interface
  • the device 500 may also include other components necessary for normal operation.
  • the device 500 may also include hardware devices that implement other additional functions.
  • The device 500 may also include only those components necessary to implement the embodiments of the present application, and does not necessarily include all the components shown in FIG. 8.
  • Embodiments of the present application also provide a computer-readable storage medium, which stores program code for device execution.
  • the program code includes instructions for executing the steps in the above method of processing graphical symbols.
  • Embodiments of the present application also provide a computer program product.
  • the computer program product includes a computer program stored on a computer-readable storage medium.
  • The computer program includes program instructions which, when executed by a computer, cause the computer to execute the above method for processing graphic symbols.
  • the above-mentioned computer-readable storage medium may be a transient computer-readable storage medium or a non-transitory computer-readable storage medium.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between apparatuses or units may be in electrical, mechanical or other forms.
  • the described embodiments may be implemented by software, hardware, or a combination of software and hardware.
  • the described embodiments may also be embodied by computer-readable media having computer-readable code stored thereon, the computer-readable code including instructions executable by at least one computing device.
  • the computer-readable medium can be associated with any data storage device capable of storing data readable by a computer system. Examples of computer-readable media include read-only memory, random access memory, compact disc read-only memory (CD-ROM), hard disk drives (HDD), digital video discs (DVD), magnetic tapes, optical data storage devices, and the like.
  • the computer-readable medium can also be distributed among computer systems coupled through a network, so that the computer-readable code can be stored and executed in a distributed manner.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application disclose a method and apparatus for processing graphic symbols, and a computer-readable storage medium. The method includes: acquiring a brightness component image of an image to be processed; performing binarization processing on the basis of the brightness component image of the image to be processed to obtain a binarized image; and performing a gray-scale morphological operation on the basis of the binarized image to obtain a target image. The method and apparatus for processing graphic symbols and the computer-readable storage medium of the embodiments of the present application can improve the efficiency of image restoration.

Description

处理图形符号的方法、装置和计算机可读存储介质
相关申请的交叉引用
本申请要求享有于2022年07月22日提交的名称为“处理图形符号的方法、装置和计算机可读存储介质”的中国专利申请202210869883.4的优先权,该申请的全部内容通过引用并入本文中。
技术领域
本申请涉及图像处理技术领域,更为具体地,涉及一种处理图形符号的方法、装置和计算机可读存储介质。
背景技术
实际环境下拍摄的包含二维码（Quick Response Code，QR）、条形码（Barcode）的图像由于污损、光照等外部环境因素的影响，成像画面可能出现亮度不均、画面有污渍、噪声等现象而难以正确解码。因此常需要对采集到的图像进行修复，以正确识别图像中包含的信息。
现有技术中常使用经过训练的深度学习模型来对图像进行修复,而深度学习模型需要经过大量数据的训练才能应用,训练过程繁琐,对数据的依赖性强,影响图像修复效率。
发明内容
本申请实施例提供一种处理图形符号的方法、装置和计算机可读存储介质,能够提高图像修复的效率。
第一方面,提供了一种处理图形符号的方法,包括:获取待处理图 像的亮度分量图像;基于该待处理图像的亮度分量图像进行二值化处理,得到二值化图像;基于该二值化图像进行灰度形态学运算,得到目标图像;其中,该待处理图像中具有图形符号。
以上实施方式,通过提取图像采集设备采集到的待处理图像亮度分量图像,并基于该亮度分量图像进行二值化处理和形态学运算,能够获得高质量的目标图像,目标图像中包含的图形轮廓清晰且噪点少,有利于提高后续的图形识别准确率和效率。
在第一方面一些可能的实施方式中,获取待处理图像的亮度分量图像,包括:根据待处理图像的颜色空间和目标颜色空间的映射关系,将待处理图像的颜色空间转换为目标颜色空间;提取该目标颜色空间的亮度分量图像,得到待处理图像的亮度分量图像。
通过以上实施方式,可以将原本不包括亮度分量的待处理图像的颜色空间转换至包括亮度分量图像的目标颜色空间,从而实现待处理图像的亮度分量的提取。
在第一方面一些可能的实施方式中,该待处理图像的颜色空间可以为RGB颜色空间或BGR颜色空间;目标颜色空间可以为YCbCr颜色空间、YCrCb颜色空间或YUV颜色空间。
通过以上实施方式,将仅包含颜色分量的图像映射到包含亮度分量的颜色空间中,使图像的映射方式更为简单和鲁棒,有利于图像中感兴趣区域的识别。
在第一方面的一些可能的实施方式中,该待处理图像的颜色空间和目标颜色空间的映射关系如以下公式所示:
Y=krR+kgG+kbB。
其中,Y是目标颜色空间的亮度分量图像中像素点的亮度值,R、G、B分别为原始图像中像素点的红色色度值、绿色色度值和蓝色色度值;kr、 kg、kb为加权因数,且满足如下关系:
kr+kg+kb=1。
通过以上实施方式,能够获得待处理图像中包含纹理和结构特征最为充足的信息的图像分量,有利于后续的处理和识别。
在第一方面的一些可能的实施方式中,在基于该待处理图像的亮度分量图像进行二值化处理之前,该处理图形符号的方法还包括:对待处理图像的亮度分量图像进行图像增强。
通过以上实施方式,能够改善图像的视觉效果,使图像更为清晰,加强图像中图形符号的判读和识别效果。
在第一方面一些可能的实施方式中,对待处理图像的亮度分量图像进行图像增强包括:使用点运算算法对该亮度分量图像进行图像增强。
通过以上实施方式,能够改变图像数据占据的灰度范围,扩展感兴趣特征的对比度。
在第一方面一些可能的实施方式中,使用点运算算法对待处理图像的亮度分量图像进行增强,包括:对待处理图像的亮度分量图像进行对比度拉伸。
通过以上实施方式,能够根据图像的特点进行灰度的调整,使图像的对比度变换到适当的范围内,扩大图像中不同区域的灰度的差异,便于后续的二值化处理。
在第一方面一些可能的实施方式中,对待处理图像的亮度分量图像进行对比度拉伸包括:遍历待处理图像的亮度分量图像中的像素点,确定待处理图像的亮度分量图像中每个像素点的灰度值;根据亮度分量中每个像素点的灰度值所在的灰度范围确定对比度拉伸函数;根据该对比度拉伸函数对该待处理图像的亮度分量图像中的像素点进行灰度变换。
通过以上实施方式,能够扩大前景和背景的差异,使感兴趣的特征 更突出。
在第一方面一些可能的实施方式中,基于待处理图像的亮度分量图像进行二值化处理包括:采用局部自适应二值化算法对待处理图像的亮度分量图像进行二值化处理。
通过以上实施方式,对待处理图像的亮度分量图像进行二值化处理,能够去除灰度图像中包含的明暗信息,将灰度图像转换为黑白图像,有利于图像在后续过程中的处理和识别。
在第一方面一些可能的实施方式中,采用局部自适应二值化算法对待处理图像的亮度分量进行二值化处理包括:确定二值化处理窗口的尺寸;用二值化处理窗口遍历待处理图像的亮度分量图像的每个像素;计算二值化处理窗口覆盖的所有像素的像素值之和;确定该二值化处理窗口覆盖的灰度阈值;在该像素值之和大于或等于灰度阈值的情况下,将二值化处理窗口中心对应的像素的像素值设置为1,否则将二值化处理窗口中心对应的像素的像素值设置为0。其中,该灰度阈值可以根据如下公式确定:
T=(1/n²)·Σ_{i=1}^{n}Σ_{j=1}^{n}v_ij-C。
其中，T为灰度阈值，n表示二值化处理窗口的边长，v_ij表示二值化处理窗口中第i行第j列像素的灰度值，C为常数项，该常数项的取值可以根据实际图像处理需求确定。其中，灰度阈值T可以采用单一变量控制法、贝叶斯优化或其他参数优化方法寻找最优解。
通过以上实施方式,能够使每个像素位置处的二值化阈值由其周围邻域像素的分布决定而不是固定不变的,使亮度较高的区域的二值化阈值增大,而亮度较低的图像区域的二值化阈值相应的减小。从而能够良好地对不同亮度、对比度、纹理的局部图像区域进行合理的二值化。降低实际采集的图像中由于光照不均匀而使得图像中局部区域像素灰度产生的明显 差异的影响。
在第一方面一些可能的实施方式中，在基于该二值化图像进行灰度形态学运算之前，该处理图形符号的方法还包括：对二值化图像进行滤波。
在第一方面一些可能的实施方式中,对二值化图像进行滤波包括:对二值化图像进行边缘保持滤波。
通过以上实施方式,能够滤除图像中的噪声,有利于图像的识别,而边缘保持滤波能够在保留较多边缘细节的情况下尽可能滤除最多噪声。
在第一方面一些可能的实施方式中,对二值化图像进行边缘保持滤波包括:将该二值化图像转换为RGB图像;对该RGB图像上所有像素点进行色彩均值漂移;将经过色彩均值漂移后的RGB图像转换为二值化图像。
通过以上实施方式,能够使二值化图像更为平滑,减少形态学运算的计算量。
在第一方面一些可能的实施方式中,基于二值化图像进行灰度形态学运算,包括:对该二值化图像进行形态学的闭运算和开运算。
通过以上实施方式,能够在滤除图像中的噪声的同时保持图像的面积大小不变。
在第一方面一些可能的实施方式中,闭运算包括:基于二值化图像选取第一结构元素;根据第一结构元素和预设闭运算规则对该二值化图像中的闭运算区域依次进行膨胀处理和腐蚀处理。
通过以上实施方式,可以填充图像中亮色区域中存在的细小空隙。
在第一方面一些可能的实施方式中,开运算包括:基于闭运算后的图像选取第二结构元素;根据第二结构元素和预设的开运算规则对该二值化图像中的开运算区域依次进行腐蚀处理和膨胀处理。
通过以上实施方式,可以使边界平滑,消除细小的尖刺,断开窄小 的连接,继而消除图像中细小的孔洞;还可以进一步滤除边缘保持滤波步骤中过滤不掉的噪声,从而实现图像中大部分噪声的滤除。
在第一方面一些可能的实施方式中,该待处理图像中的图形符号是二维码或者条形码。
通过以上实施方式,可以对包含二维码或条码的图像进行修复,修复后的图像中的二维码或条码的轮廓清晰,从而易于被扫码设备识别。
第二方面，提供了一种处理图形符号的装置，包括：获取模块，用于获取待处理图像的亮度分量图像；二值化处理模块，用于基于待处理图像的亮度分量图像进行二值化处理，输出二值化图像；运算模块，用于基于二值化图像进行灰度形态学运算，输出目标图像，其中，该待处理图像是包含图形符号的图像。在第二方面一些可能的实施方式中，待处理图像中包含的图形符号是二维码或条形码。
在以上技术方案中,通过提取图像采集设备采集到的待处理图像的亮度分量图像,并基于该亮度分量图像进行二值化处理和形态学运算,能够获得高质量的目标图像,目标图像中包含的图形轮廓清晰且噪点少,有利于提高后续的图形识别准确率和效率。
在第二方面一些可能的实施方式中,该获取模块用于:根据待处理图像的颜色空间和目标颜色空间的映射关系,将待处理图像的颜色空间转换为目标颜色空间;提取目标颜色空间的亮度分量图像。
在第二方面一些可能的实施方式中,待处理图像的颜色空间为RGB颜色空间或BGR颜色空间,目标颜色空间为YCbCr颜色空间、YCrCb颜色空间或YUV颜色空间。
在第二方面一些可能的实施方式中,获取模块用于将待处理图像的颜色空间按以下公式所示的映射关系转换到目标颜色空间:
Y=krR+kgG+kbB
其中,Y是目标颜色空间的亮度分量图像中像素点的亮度值,R、G、B分别为待处理图像中像素点的红色色度值、绿色色度值和蓝色色度值;kr、kg、kb为加权因数,且满足如下关系:
kr+kg+kb=1。
在第二方面一些可能的实施方式中,该图像处理装置还包括图像增强模块,用于在二值化处理之前对待处理图像的亮度分量图像进行图像增强。
在第二方面一些可能的实施方式中,图像增强模块用于使用点运算算法对该待处理图像的亮度分量图像进行增强。
在第二方面一些可能的实施方式中,该图像增强模块用于对该待处理图像的亮度分量图像进行对比度拉伸。
在第二方面一些可能的实施方式中,图像增强模块用于遍历该待处理图像的亮度分量图像中的像素点,确定待处理图像的亮度分量图像中每个像素的灰度值;根据该待处理图像的亮度分量图像中每个像素的灰度值所在的灰度范围确定对比度拉伸函数;根据该对比度拉伸函数对该待处理图像的亮度分量图像中的像素点进行灰度变换。
在第二方面一些可能的实施方式中,二值化处理模块用于采用局部自适应二值化算法对该增强后的图像进行二值化处理。
在第二方面一些可能的实施方式中,二值化处理模块用于:确定二值化处理窗口的尺寸;用该二值化处理窗口遍历该待处理图像的亮度分量图像或该增强后的图像的每个像素;计算该二值化处理窗口覆盖的所有像素的像素值之和;当该像素值之和大于或等于预设的阈值时,将窗口中心对应的像素的像素值设置为1,否则将窗口中心对应的像素的像素值设置为0。
在第二方面一些可能的实施方式中,该图像处理装置还包括滤波模 块,用于在形态学运算之前对该二值化图像进行滤波。
在第二方面一些可能的实施方式中,滤波模块用于对二值化图像进行边缘保持滤波。
在第二方面一些可能的实施方式中,该滤波模块用于:将该二值化图像转换为RGB图像;对该RGB图像上所有像素点进行色彩均值漂移;将经过色彩均值漂移后的RGB图像转换为二值化图像。
在第二方面一些可能的实施方式中,运算模块用于对二值化图像依次进行形态学的闭运算和开运算。
在第二方面一些可能的实施方式中,运算模块包括闭运算单元,用于基于二值化图像选取第一结构元素;并根据第一结构元素和预设的闭运算规则对二值化图像中的闭运算区域依次进行膨胀处理和腐蚀处理。
在第二方面一些可能的实施方式中,运算模块还包括开运算单元,用于基于闭运算后的图像选取第二结构元素;并根据该第二结构元素和预设开运算规则对二值化图像中的开运算区域依次进行腐蚀处理和膨胀处理。
第三方面,提供了一种处理图形符号的装置,包括处理器和存储器,该存储器用于存储程序,该处理器用于从存储器中调用并运行上述程序以执行上述第一方面或第一方面的任一可能的实施方式中的处理图形符号的方法。
第四方面,提供了一种计算机可读存储介质,用于存储计算机程序,当该计算机程序在计算机上运行时,使得计算机执行上述第一方面或第一方面的任一可能的实施方式中的处理图形符号的方法。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对本申请实施例中所需要使用的附图作简单地介绍,显而易见地,下面所描述的附图仅 仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据附图获得其他的附图。
图1是提供的系统架构的结构示意图;
图2是本申请一个实施例的处理图形符号的方法的示意性流程图;
图3是本申请另一个实施例的处理图形符号的方法的示意性流程图;
图4是本申请实施例处理图形符号的方法中目标颜色空间各分量的图像;
图5是本申请实施例处理图形符号的方法中对比度拉伸前后的灰度直方图;
图6是本申请实施例处理图形符号的方法的过程图像;
图7是本申请实施例的一种处理图形符号的装置的示意性结构框图;
图8是本申请实施例的一种处理图形符号的装置的示意性结构框图。
具体实施方式
下面结合附图和实施例对本申请的实施方式作进一步详细描述。以下实施例的详细描述和附图用于示例性地说明本申请的原理,但不能用来限制本申请的范围,即本申请不限于所描述的实施例。应理解,本文中的具体的例子只是为了帮助本领域技术人员更好地理解本申请实施例,而非限制本申请实施例的范围。
还应理解,在本申请的各种实施例中,各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
除非另有说明,本申请实施例所使用的所有技术和科学术语与本申请的技术领域的技术人员通常理解的含义相同。本申请中所使用的术语只是为了描述具体的实施例的目的,不是旨在限制本申请的范围。本申请所 使用的术语“和/或”包括一个或多个相关的所列项的任意的和所有的组合。
本申请实施例可适用于包含形状、纹理等特征的图像的处理,以便于后续过程中更容易识读出图像中包含的信息。本申请实施例包括但不限于二维码图像的处理,该图像可以是电荷耦合元件(charge coupled device,CCD)相机拍摄的图像,也可以是其他相机拍摄的图像,也可以是通过屏幕截取等方式获取的图像,本申请公开的实施例对图像的采集方式不作限定。
由于二维码具有包含的信息量大、易识别、成本低等优点,在近些年来得到了快速发展和广泛的应用。通过扫描二维码能够实现信息得到、手机支付、防伪溯源和账号登录等多种功能。常见的二维码有QR(Quick Response Code)码、PDF(Portable Data File)417二维码、数据矩阵(Data matrix)二维码等。其中,Data Matrix二维码是一种面向工业产品的二维码,该种二维码的最小尺寸是目前所有条码中最小的,尤其特别适用于小零件的标识,以及直接印刷在实体上。每个Data matrix二维码由资料区、定位图形(Finder Pattern)、排位图形(Alignment Patterns)和空白区组成。其中,资料区由规则排列的方形模组构成,资料区的四周由定位图形所包围,定位图形的四周则由空白区包围,资料区再以排位图形加以分隔。定位图形是资料区域的边界,其中两条邻边为暗实线,主要用于限定物理尺寸和定位,另两条邻边由交替的深色和浅色模组组成,主要用于限定符号的单元结构,也能辅助确定物理尺寸及失真。由于Data Matrix二维码只需要读取资料的20%即可精确辨读,因此很适合应用在条码容易受损的场所,例如印在暴露于高热、化学清洁剂、机械剥蚀等特殊环境的零件上。当二维码受损时需经过图像修复才能读取其中包含的信息,现有技术通常先对得到的彩色二维码进行灰度化处理和图像增强,然后基于大量数据训练深度学习模型,对二维码图像区域内的污损二维码进行修复,得到修复 后的二维码图像,再对其进行读取。但深度学习模型的训练首先需要收集样本集和训练集,其中训练集为各类受损的二维码图像,样本集为与训练集内受损二维码图像对应的清晰二维码图像,样本集和训练集还需经过人工分类等过程才能用于模型的训练,因此,采用深度学习方法修复二维码图像在收集数据、分析数据、整理数据和训练模型方面都要消耗大量的时间,效率低;且对数据依赖性强,训练出的模型难以泛用。
鉴于此,本申请实施例提供了一种处理图形符号的方法,通过数字图像处理方法对采集到的包含图形符号的图像进行亮度分量提取、图像增强、图像二值化处理、滤波和形态学运算,最终的目标图像上能够显示感兴趣区域的清晰形状特征,使用该方法能够对图像进行快速修复,修复后的图像中的Data Matrix二维码能够被正确识别。
如图1所示,本申请实施例提供了一种系统架构100。在图1中,图像采集设备110用于将待处理图像1101输入到图像处理设备120。该图像采集设备110可以是任意具有图像拍摄或采集功能的设备,例如摄像头、摄像机、相机、扫描仪、手机、平板电脑或扫码器等设备,待处理图像1101是由上述设备拍摄或采集到的图像;也可以是具有数据存储功能的设备,待处理图像1101存储在该设备中。对于图像采集设备110的类型,本申请不作限定。针对本申请实施例的图像的处理方法而言,待处理图像是包含图形符号的图像,可选地,可以是包含二维码或条形码的图像。图像采集设备120用于对待处理图像1101进行处理并输出目标图像1201。目标图像1201是经图像处理设备120处理后能够清晰地体现某些特定的形状、纹理等特征的图像,即能够清晰地体现用户感兴趣的某些特征的图像。当待处理图像1101是包含二维码或条形码的图像时,该目标图像1201可以是经过处理后能够被二维码或条形码识别设备正确识别的图像。该图像处理设备120可以是任意具有图像处理功能的设备,例如计算机、 智能手机、工作站或具有中央处理器的其他设备。对于图像处理设备120的类型,本申请不作限定。
在一些实施方式中,上述图像采集设备110可以与上述图像处理设备120为同一设备。例如,图像采集设备110与图像处理设备120均为智能手机,或均为扫码器。
在另一些实施方式中,上述图像采集设备110可以与图像处理设备120为不同设备。例如,图像采集设备110为终端设备,而图像处理设备120为计算机、工作站等设备,图像采集设备110可以通过任何通信机制/通信标准的通信网络与图像处理设备120进行交互,通信网络可以是广域网、局域网、点对点连接等方式,或它们的任意组合。
该实施例中的待处理图像是由图像采集设备采集到的二维码图像,该图像采集设备可以是二维码扫描设备,如扫码枪等,也可以是相机等设备。本方法也不限于二维码图像的处理,还可以应用于其他图像的处理,对于该方法的应用对象,本申请不作限定。
图2和图3示出了本申请实施例的处理图形符号的方法200的流程示意图,该方法可以利用图1中示出的图像处理设备120实现。
210、获取待处理图像的亮度分量图像。
亮度分量图像是体现图片亮度信息的图像。不同格式的图像所对应的颜色空间不同,而不同的颜色空间包含不同的通道,例如亮度通道和色度通道,亮度通道对应的分量图像是灰度图像,即用灰度标识的图像。亮度通道的采样率高于色度通道,因此,一般而言,亮度分量图像即指灰度图像。灰度图像可以通过浮点法、分量法、最大值法、平均值法、加权平均值法或Gamma校正算法等方式运算获得,或通过多媒体编辑软件中带有的目标色调整工具生成,上述目标色调整工具多是通过对图像进行以上的运算方法生成灰度图的,也可以通过颜色空间转换的方式获得。
可选地,在本申请的一些实施方式中,步骤210中得到待处理图像的亮度分量图像,包括:将待处理图像从原始颜色空间转换至目标颜色空间,提取目标颜色空间的亮度分量图像,得到待处理图像的亮度分量图像。其中,原始颜色空间是待处理图像所处的颜色空间,该原始颜色空间取决于待处理图像的格式,可以为RGB颜色空间、BGR颜色空间等用色彩来描述颜色的颜色空间,也可以是YUV、YCbCr等原本即包括亮度通道的颜色空间,或其他颜色空间。RGB颜色空间和BGR颜色空间两种颜色空间是彩色图像较为常用的颜色空间,其中RGB颜色空间包括三个通道,即R(Red,红色)通道、G(Green,绿色)通道和B(Blue,蓝色)通道。目标颜色空间为YUV、YCbCr等包括亮度通道的颜色空间,其中YCbCr颜色空间包括Y(亮度)通道、Cb(蓝色色度)通道和Cr(红色色度)通道。
当待处理图像的颜色空间不包含亮度信息时,需要先将其转换至目标颜色空间,以提取亮度分量,通过颜色空间的转换,可以将亮度和色度分离开,更直观地看到所需的信息。
可选地,可以根据待处理图像的颜色空间和目标颜色空间之间的映射关系进行颜色空间转换。
当待处理图像的原始颜色空间为RGB颜色空间或BGR颜色空间,目标颜色空间为YCbCr颜色空间时,其映射关系如以下公式所示:
Y=krR+kgG+kbB
其中,Y是目标颜色空间的亮度分量图像中像素点的亮度值,R、G、B分别为原始图像中像素点的红色色度值、绿色色度值和蓝色色度值;kr、kg、kb为加权因数,且满足如下关系:
kr+kg+kb=1
在Data Matrix二维码的识别算法中,YCbCr颜色空间的映射方式 比RGB颜色空间或BGR颜色空间的映射方式更为直观和鲁棒,经过颜色空间转换后,可以分离图像的亮度和色度,使图像中所包含的信息显示的更为直观;不但二维码的轮廓特征能够显示的更清晰,同时也有利于Data Matrix二维码的识别。
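As an illustrative aid only (Python and OpenCV are not part of the original disclosure), the luminance-component extraction described above can be sketched as follows; the BT.601 weighting factors kr=0.299, kg=0.587, kb=0.114 are an assumed choice satisfying kr+kg+kb=1, since the embodiment does not fix them:

```python
import cv2
import numpy as np

def extract_luminance(bgr_image: np.ndarray) -> np.ndarray:
    """Convert a BGR image to OpenCV's YCrCb space and return the Y (brightness) channel."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)  # channels ordered Y, Cr, Cb in OpenCV
    return ycrcb[:, :, 0]

def extract_luminance_manual(bgr_image: np.ndarray,
                             kr: float = 0.299, kg: float = 0.587, kb: float = 0.114) -> np.ndarray:
    """Apply Y = kr*R + kg*G + kb*B directly (weights are assumed BT.601 values)."""
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    y = kr * r + kg * g + kb * b
    return np.clip(y, 0, 255).astype(np.uint8)
```

Either form yields the brightness component image that serves as the basis of the later processing steps.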
230、基于待处理图像的亮度分量图像进行图像二值化处理,得到二值化图像。
其中,待处理图像的亮度分量图像可以是对待处理图像进行运算获得的,也可以是如上述将待处理图像从原始颜色空间转换至目标颜色空间获得的。图像二值化(Image Binarization)就是将图像上所有灰度大于或等于某个阈值的像素点判定为属于特定物体,将其灰度值为255表示,否则这些像素点被排除在物体区域以外,灰度值为0,表示背景或者例外的物体区域。也就是将整个图像呈现出明显的黑白效果的过程,图像的二值化能够使图像中数据量大为减少,从而能凸显出目标的轮廓。
可选地，在一些实施方式中，该二值化处理算法可以是：全局固定阈值法、局部自适应阈值法、OTSU（大津）二值化算法等方法。其中，全局固定阈值法对整幅图像使用相同的阈值进行二值化；而局部自适应阈值法则根据像素的邻域块的像素值分布来确定该像素位置上的二值化阈值。
由于环境光照不均匀等原因,使得采集的原始图像在不同的局部区域的像素灰度可能存在明显差异。使用自适应二值化算法能够按图像的灰度特性将图像二值化处理成前景部分和背景部分,从而得到二值化图像,从而良好地对不同亮度、对比度、纹理的局部图像区域进行合理的二值化。
可选地，在一些实施方式中，该二值化处理算法是局部自适应二值化算法。
可选地,该自适应二值化算法可以为wolf局部自适应二值化算法、 Niblack二值化算法或sauvola二值化算法。
在一些实施方式中,该局部自适应二值化算法包括以下步骤:
确定二值化处理窗口的尺寸;用二值化处理窗口遍历待处理图像的亮度分量图像的每个像素;计算二值化处理窗口覆盖的所有像素的像素值之和;确定该二值化处理窗口覆盖的灰度阈值;当该像素值之和大于或等于灰度阈值时,将二值化处理窗口中心对应的像素的像素值设置为1,否则将二值化处理窗口中心对应的像素的像素值设置为0。
其中,灰度阈值可以根据如下公式确定:
T=(1/n²)·Σ_{i=1}^{n}Σ_{j=1}^{n}v_ij-C。
其中，T为灰度阈值，n表示二值化处理窗口的边长，v_ij表示二值化处理窗口中第i行第j列像素的灰度值，C为根据实际图像处理需求选择的常数。
其中,灰度阈值T可以采用单一变量控制法、贝叶斯优化或其他参数优化方法寻找最优解。
本实施例中采用自适应二值化算法对增强后的图像进行二值化处理,每个像素位置处的二值化阈值不是固定不变的,而是由其周围邻域像素的分布来决定的,能够使亮度较高的图像区域具有较高的二值化阈值,而亮度较低的图像区域的具有相对较低的二值化阈值。从图6中(c)和(d)可以看出,经过局部自适应二值化的二值化图像中的轮廓相比于增强后的图像更为清楚。
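Purely as a sketch (not from the original disclosure), a local adaptive binarization of the kind described above can be written as follows, assuming a window-mean-minus-constant thresholding rule; the window size n=25 and constant C=10 are placeholder values to be tuned, and 255 is used in place of 1 as the foreground value so the result displays as a black-and-white image:

```python
import cv2
import numpy as np

def local_adaptive_binarize(gray: np.ndarray, n: int = 25, C: float = 10.0) -> np.ndarray:
    """Threshold each pixel against the mean of its n x n neighborhood minus a constant C."""
    local_mean = cv2.boxFilter(gray.astype(np.float32), ddepth=-1, ksize=(n, n),
                               borderType=cv2.BORDER_REPLICATE)  # (1/n^2) * sum of window pixels
    threshold = local_mean - C
    return np.where(gray.astype(np.float32) >= threshold, 255, 0).astype(np.uint8)

# OpenCV's built-in mean-based equivalent (blockSize must be an odd number > 1):
def local_adaptive_binarize_cv(gray: np.ndarray, n: int = 25, C: float = 10.0) -> np.ndarray:
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, n, C)
```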
250、基于二值化图像进行灰度形态学运算,得到目标图像。
灰度形态学运算包括腐蚀、膨胀、开运算和闭运算,通过以上运算,可以调整原始图像的灰度值和空间尺寸等参数。
可选地,步骤250中的灰度形态学运算可以是基于二值化图像依 次进行形态学的闭运算和开运算;
其中,闭运算包括:基于二值化图像选取第一结构元素;根据第一结构元素和预设闭运算规则对二值化图像中的闭运算区域依次进行膨胀处理和腐蚀处理。
开运算包括：基于闭运算后的图像选取第二结构元素；根据第二结构元素和预设的开运算规则对二值化图像中的开运算区域依次进行腐蚀处理和膨胀处理。
结构元素是形态学变换中的基本元素,是为了探测图像的某种结构信息而设计特定形状和尺寸的图像,结构元素可以是圆形、方形、线形等,可以携带形态、大小、灰度和色度等信息。在图像的处理过程中,结构元素可以看作一个二维矩阵,在该二维矩阵中,矩阵元素的值为“0”或“1”。一般情况下,结构元素的尺寸小于待处理图像的尺寸。在本申请实施例中,该第一结构元素和第二结构元素可以为相同的结构元素,也可以为不同的结构元素,结构元素的大小可以根据实际的图像处理效果进行调整。闭运算区域和开运算区域可以是前述二值化图像所在的整个区域,也可以是二值化图像中的部分区域,例如,可以选择轮廓较为模糊或噪点较多的区域作为闭运算区域和/或开运算区域。
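For illustration only, a minimal sketch of the close-then-open morphological step; the 3×3 square structuring element matches the example given later in this description, but the element's shape and size remain choices to be adjusted against the actual images:

```python
import cv2
import numpy as np

def morph_close_then_open(binary: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Closing (dilation then erosion) fills small gaps in bright regions;
    opening (erosion then dilation) removes small spikes and speckle noise."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    return cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)
```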
可选地,在本申请的一些实施方式中,上述图像处理方法在步骤230之前还可以包括:
220、对待处理图像的亮度分量图像进行图像增强,得到增强后的待处理图像的亮度分量图像;
图像增强是指针对图像的应用场合改善图像的视觉效果,满足某些特殊分析的需要。可选地,可以采用基于空域的算法,如点运算算法或邻域去噪算法进行图像增强;也可以采用基于频域的算法进行图像增强。
可选地,步骤220中可以采用点运算算法对待处理图像的亮度分 量图像进行增强。
可选地,使用点运算算法对待处理图像的亮度分量图像进行增强,可以是:对待处理图像的亮度分量图像进行对比度拉伸,或者对待处理图像的亮度分量图像进行灰度直方图位移。
可选地,对比度拉伸采用的方法可以是线性拉伸方法,即,对待处理图像的亮度分量图像的像素值进行线性的比例变化。根据本申请的一些实施例,可选地,可以采用全域线性拉伸、2%线性拉伸、分段线性拉伸、灰度窗口切片等方法对待处理图像的亮度分量图像进行线性拉伸。
可选地,对比度拉伸采用的方法可以是非线性拉伸方法,即,使用非线性函数对图像进行拉伸,根据本申请的一些实施例,可选地,可以使用指数函数、对数函数、高斯函数等函数对待处理图像的亮度分量图像进行非线性拉伸。
在本申请的一些实施方式中,对比度拉伸过程可以用公式表达如下:
I'(x,y)=(I(x,y)-Imin)/(Imax-Imin)×(MAX-MIN)+MIN。
其中，I(x,y)是待处理图像的亮度分量图像中像素点的灰度值，I'(x,y)是拉伸后该像素点的灰度值，(x,y)是像素点的坐标值，Imin是待处理图像的亮度分量图像的最小灰度值，Imax是待处理图像的亮度分量图像的最大灰度值，MIN和MAX是要拉伸到的灰度空间的灰度最小值和最大值，可选地，该最小值可以是0，该最大值可以是255。
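A minimal sketch of this linear stretch, assuming the target range [MIN, MAX] is [0, 255] (the values the description lists as optional):

```python
import numpy as np

def contrast_stretch(gray: np.ndarray, out_min: float = 0.0, out_max: float = 255.0) -> np.ndarray:
    """Linearly map the input range [Imin, Imax] onto [out_min, out_max]."""
    g = gray.astype(np.float32)
    i_min, i_max = float(g.min()), float(g.max())
    if i_max == i_min:                      # flat image: nothing to stretch
        return np.full_like(gray, int(out_min))
    stretched = (g - i_min) / (i_max - i_min) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)
```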
可选地,在一些实施方式中,当步骤210获取的待处理图像的亮度分量图像较为清晰或提供的信息较为充足时,可以不进行步骤220。
可选地,在另一些实施方式中,当待处理图像的亮度分量图像质量较低或模糊时,可以在步骤230之前,通过步骤220对待处理图像的亮 度分量图像进行增强,以扩大图像中不同物体特征之间的差别,抑制不感兴趣的特征,从而改善图像质量,加强后续的图像判读和识别效果。此时,步骤230可以包括如图3所示的步骤230a:
230a、对增强后的待处理图像的亮度分量图像进行二值化处理,得到二值化图像。
可选地,在本申请的一些实施方式中,在步骤250之前,上述图像处理方法还可以包括:
240、对二值化图像进行滤波,得到滤波后的二值化图像。
具体地,步骤240中,对二值化图像进行边缘保持滤波,能够在保留较多边缘细节的情况下尽可能滤除噪声。
在一些可能的实施方式中,对二值化图像进行边缘保持滤波包括:将该二值化图像转换为RGB图像;对该RGB图像上所有像素点进行色彩均值漂移;将经过色彩均值漂移后的RGB图像转换为二值化图像。
通过以上实施方式,能够使二值化图像更为平滑,减少形态学运算的计算量。
其中,对该RGB图像上所有像素点进行色彩均值漂移包括:确定用于建立迭代空间的物理空间半径和色彩空间半径;以所述RGB图像上任一像素点为初始中心点,基于所述物理空间半径和所述色彩空间半径建立空间球体;以所述空间球体作为迭代空间,计算所述迭代空间中所有像素点相对于中心点的色彩向量的向量和;移动所述中心点至所述和向量的终点,重新计算所述向量和,直至所述向量和的终点与所述中心点重合,以所述向量和的终点作为最终中心点;将初始中心点的色彩值更新为最终中心点的色彩值。
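As a sketch only, OpenCV's pyrMeanShiftFiltering provides a mean-shift filter parameterized by exactly the two radii mentioned above (a spatial radius sp and a color radius sr); the radius values below are placeholders, and the re-binarization threshold of 127 is an assumption:

```python
import cv2
import numpy as np

def edge_preserving_filter(binary: np.ndarray, sp: float = 10.0, sr: float = 30.0) -> np.ndarray:
    """Mean-shift filter a 3-channel view of the binary image, then convert back to binary."""
    rgb = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)          # single channel -> 3 channels
    shifted = cv2.pyrMeanShiftFiltering(rgb, sp, sr)        # sp: spatial radius, sr: color radius
    gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
    _, out = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    return out
```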
在本申请的一些实施方式中,当在步骤230中获得的二值化图像质量较高、噪点较少时,可以不进行步骤240。
在本申请的一些实施方式中,当步骤230获得的二值化图像中存在较多噪点时,可以在步骤250前通过步骤240对二值化图像进行滤波运算,通过滤波运算可以消除图像中较为容易消除的噪声,从而降低步骤250的运算量,增强去噪效果。此时,步骤250可以包括如图3所示的步骤250a:
250a、对滤波后的二值化图像依次进行形态学的闭运算和开运算。
如图4至图6所示的实施例中,上述处理的图像的方法被应用于DataMatrix二维码图像的处理,待处理图像为图像采集设备输入的彩色二维码图像,颜色空间为RGB颜色空间,处理步骤如下:
步骤1、将彩色二维码图像从RGB颜色空间转换至YCbCr颜色空间，并分离Y分量、Cb分量和Cr分量，提取Y分量的二维码图像，即待处理二维码图像的亮度分量图像；图4示出了采用本申请的处理图形符号的方法将Data Matrix二维码图像的颜色空间转换为YCbCr颜色空间后三个分量的图像，其中(a)是Y分量的图像，(b)是Cb分量的图像，(c)是Cr分量的图像，由图4可以看出，Y分量图像相比于其他两个分量的图像，包含的信息更为充足；图6示出了本申请实施例方法处理Data Matrix二维码图像的过程图像，其中(a)是待处理的图像，(b)是颜色空间转换后提取的Y通道的图像，由图6中(a)和(b)可以看出，Y分量能够提供更为充足的信息，因此本实施例中分离YCbCr颜色空间的三个分量，并提取Y分量图像作为后续操作的基础图像，更有利于后续的图像处理和二维码识别。
步骤2、对Y分量的二维码图像采用对比度拉伸方法进行图像增强,扩大前景和背景的灰度差异,获得拉伸后的二维码图像,即灰度图像;图5示出了对Data Matrix二维码图像进行图像增强前后的灰度直方图,其中(a)是对比度拉伸之前的Y通道分量图像的灰度直方图,(b)是对比度拉伸之后获得的增强后的图像的灰度直方图,通过图5可以看出,经过对比 度拉伸之前的图像灰度较为集中,因此难以体现图像中的纹理和形状特征,而对比度拉伸之后灰度值分散在0-255的区间之内,对比度得到了增强;图6中(c)是经对比度拉伸后获得的增强后的图像;从图6中(b)和(c)可以看出,对比度拉伸前的图像较为模糊,而经过对比度拉伸后的增强后的图像更加清晰,能够更多地体现图像中包含的纹理和形状特征,有利于二维码的识别。
步骤3、对拉伸后的二维码图像进行局部自适应二值化,按图像的灰度特性,将图像分为前景和背景两部分,得到二值化后的二维码图像;图6中(d)是二值化处理后获得的二值化图像,可以看出图中包含较多噪点,对后续的识别过程造成干扰,因此,针对DataMatrix二维码图像的特点,对其进行边缘保持滤波,初步滤除噪声的同时提升图形边缘的清晰度。
步骤4、对二值化后的二维码图像进行边缘保持滤波,在保留较多边缘细节的情况下尽可能滤除噪声,得到滤波后的二维码图像;图6中(e)是滤波后获得的滤波后的图像,通过图6中(d)和(e)可以看出,经过边缘保持滤波后获得的滤波后的图像中的轮廓相比于二值化图像中噪点有所减少。但尽管已经滤除了部分噪声,图像中仍然存在细小的尖刺、空洞等小颗粒噪声,因此在下一步骤中,对该二维码进行形态学的运算,滤除图像中的小颗粒噪声。
步骤5、对滤波后的二维码图像依次进行形态学的闭运算和开运算,得到目标二维码图像;在该实施例中,当第一结构元素和第二结构元素大小均为3*3时,经形态学运算后的二维码最为清晰。
图6中(f)是经过图像形态学运算后获得的目标图像;从图6中(e)和(f)可以看出,经过形态学开运算和闭运算后获得的目标图像,轮廓清晰,且图中几乎没有影响二维码识别的噪点,经过上述方法的一系列处理,模糊的二维码图像变得清晰,从而能够被扫码设备识别。
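Tying the five steps of this worked example together, a hypothetical end-to-end sketch (reusing the helper functions sketched earlier; the file names and all parameter values are illustrative only, not taken from the original disclosure):

```python
import cv2

def repair_datamatrix_image(bgr):
    """Rough pipeline: luminance extraction -> contrast stretch -> local adaptive
    binarization -> edge-preserving (mean-shift) filtering -> closing then opening."""
    y = extract_luminance(bgr)                          # step 1
    enhanced = contrast_stretch(y)                      # step 2
    binary = local_adaptive_binarize(enhanced)          # step 3
    filtered = edge_preserving_filter(binary)           # step 4
    return morph_close_then_open(filtered, ksize=3)     # step 5: 3x3 structuring elements

# Hypothetical usage:
# image = cv2.imread("datamatrix_sample.png")
# target = repair_datamatrix_image(image)
# cv2.imwrite("datamatrix_repaired.png", target)
```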
上文详细地描述了本申请实施例的方法实施例,下面描述本申请实施例的装置实施例,装置实施例与方法实施例相互对应,因此未详细描述的部分可参见前面方法实施例,装置可以实现上述方法中任意可能实现的方式。
图7示出了本申请一个实施例的处理图形符号的装置400的示意性框图,包括:获取模块410,用于得到待处理图像的亮度分量图像;二值化处理模块430,用于基于待处理图像的亮度分量图像进行二值化处理,输出二值化图像;运算模块450,用于基于二值化图像进行灰度形态学运算,输出目标图像。
在以上技术方案中,通过提取图像采集设备采集到的待处理图像的亮度分量图像,并基于该亮度分量图像进行二值化处理和形态学运算,能够获得高质量的目标图像,目标图像中包含的图形轮廓清晰且噪点少,有利于提高后续的图形识别准确率和效率。
可选地,在一些实施例中,获取模块410用于:根据待处理图像的颜色空间和目标颜色空间的映射关系,将待处理图像的颜色空间转换为目标颜色空间;提取目标颜色空间的亮度分量图像。
可选地,在一些实施例中,待处理图像的颜色空间为RGB颜色空间或BGR颜色空间,目标颜色空间为YCbCr颜色空间、YCrCb颜色空间或YUV颜色空间。
可选地,在一些实施例中,获取模块410用于将待处理图像的颜色空间按以下公式所示的映射关系转换到目标颜色空间:
Y=krR+kgG+kbB
其中,Y是目标颜色空间的亮度分量图像中像素点的亮度值,R、G、B分别为待处理图像中像素点的红色色度值、绿色色度值和蓝色色度值;kr、kg、kb为加权因数,且满足如下关系:
kr+kg+kb=1。
可选地,在一些实施例中,该图像处理装置400还包括图像增强模块420,用于对待处理图像的亮度分量图像进行图像增强,输出增强后的图像。
可选地,在一些实施例中,图像增强模块420用于使用点运算算法对待处理图像的亮度分量图像进行增强。
可选地,在一些实施例中,图像增强模块420用于对待处理图像的亮度分量图像进行对比度拉伸。
可选地,在一些实施例中,图像增强模块420用于遍历待处理图像的亮度分量图像中的像素点,确定待处理图像的亮度分量图像中每个像素的灰度值;根据待处理图像的亮度分量图像中每个像素的灰度值所在的灰度范围确定对比度拉伸函数;根据对比度拉伸函数对待处理图像的亮度分量图像中的像素点进行灰度变换。
可选地,在一些实施例中,二值化处理模块430用于采用局部自适应二值化算法对所述增强后的图像进行二值化处理。
可选地,在一些实施例中,二值化处理模块用于:
确定二值化处理窗口的尺寸;用二值化处理窗口遍历待处理图像的亮度分量图像的每个像素;计算二值化处理窗口覆盖的所有像素的像素值之和;确定该二值化处理窗口覆盖的灰度阈值;当该像素值之和大于或等于灰度阈值时,将二值化处理窗口中心对应的像素的像素值设置为1,否则将二值化处理窗口中心对应的像素的像素值设置为0。
其中,灰度阈值可以根据如下公式确定:
T=(1/n²)·Σ_{i=1}^{n}Σ_{j=1}^{n}v_ij-C。
其中，T为灰度阈值，n表示二值化处理窗口的边长，v_ij表示所述二值化处理窗口中第i行第j列像素的灰度值，C为根据实际图像处理需求选择的常数。
可选地,在一些实施例中,该图像处理装置400还包括滤波模块440,用于对二值化图像进行滤波。
可选地,在一些实施例中,滤波模块440用于对二值化图像进行边缘保持滤波。
可选地,在一些实施例中,运算模块450用于对二值化图像或依次进行形态学的闭运算和开运算。
可选地,在一些实施例中,运算模块450包括闭运算单元,用于基于二值化图像选取第一结构元素;并根据第一结构元素和预设闭运算规则对闭运算区域依次进行膨胀处理和腐蚀处理。
可选地,在一些实施例中,运算模块450还包括开运算单元,用于基于闭运算后的图像选取第二结构元素;并根据第二结构元素和预设的开运算规则对开运算区域依次进行腐蚀处理和膨胀处理。
图8是本申请实施例的处理图形符号的装置的硬件结构示意图。图8所示的处理图形符号的装置500包括存储器501、处理器502、通信接口503以及总线504。其中，存储器501、处理器502、通信接口503通过总线504实现彼此之间的通信连接。
存储器501可以是只读存储器(read-only memory,ROM),静态存储设备和随机存取存储器(random access memory,RAM)。存储器501可以存储程序,当存储器501中存储的程序被处理器502执行时,处理器502和通信接口503用于执行本申请实施例的处理图形符号的方法的各个步骤。
处理器502可以采用通用的中央处理器(central processing unit,CPU),微处理器,应用专用集成电路(application specific integrated  circuit,ASIC),图形处理器(graphics processing unit,GPU)或者一个或多个集成电路,用于执行相关程序,以实现本申请实施例的处理图形符号的装置中的单元所需执行的功能,或者执行本申请实施例的处理图形符号的方法。
处理器502还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请实施例的处理图形符号的方法的各个步骤可以通过处理器502中的硬件的集成逻辑电路或者软件形式的指令完成。
上述处理器502还可以是通用处理器、数字信号处理器(digital signal processing,DSP)、ASIC、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器501,处理器502读取存储器501中的信息,结合其硬件完成本申请实施例的处理图形符号的装置中包括的单元所需执行的功能,或者执行本申请实施例的处理图形符号的方法。
通信接口503使用例如但不限于收发器一类的收发装置,来实现装置500与其他设备或通信网络之间的通信。例如,可以通过通信接口503获取未知设备的流量数据。
总线504可包括在装置500各个部件(例如,存储器501、处理器502、通信接口503)之间传送信息的通路。
应注意，尽管上述装置500仅仅示出了存储器、处理器、通信接口，但是在具体实现过程中，本领域的技术人员应当理解，装置500还可以包括实现正常运行所必须的其他器件。同时，根据具体需要，本领域的技术人员应当理解，装置500还可包括实现其他附加功能的硬件器件。此外，本领域的技术人员应当理解，装置500也可仅仅包括实现本申请实施例所必须的器件，而不必包括图8中所示的全部器件。
本申请实施例还提供了一种计算机可读存储介质,该存储介质上存储有用于设备执行的程序代码,程序代码包括用于执行上述处理图形符号的方法中的步骤的指令。
本申请实施例还提供了一种计算机程序产品,该计算机程序产品包括存储在计算机可读存储介质上的计算机程序,该计算机程序包括程序指令,当该程序指令被计算机执行时,使该计算机执行上述处理图形符号的方法。
上述的计算机可读存储介质可以是暂态计算机可读存储介质,也可以是非暂态计算机可读存储介质。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
本申请中使用的用词仅用于描述实施例并且不用于限制权利要求。 如在实施例以及权利要求的描述中使用的,除非上下文清楚地表明,否则单数形式的“一个”和“所述”旨在同样包括复数形式。类似地,如在本申请中所使用的术语“和/或”是指包含一个或一个以上相关联的列出的任何以及所有可能的组合。另外,当用于本申请中时,术语“包括”指陈述的特征、整体、步骤、操作、元素,和/或组件的存在,但不排除一个或一个以上其它特征、整体、步骤、操作、元素、组件和/或这些的分组的存在或添加。
所描述的实施例中的各方面、实施方式、实现或特征能够单独使用或以任意组合的方式使用。所描述的实施例中的各方面可由软件、硬件或软硬件的结合实现。所描述的实施例也可以由存储有计算机可读代码的计算机可读介质体现,该计算机可读代码包括可由至少一个计算装置执行的指令。所述计算机可读介质可与任何能够存储数据的数据存储装置相关联,该数据可由计算机系统读取。用于举例的计算机可读介质可以包括只读存储器、随机存取存储器、紧凑型光盘只读储存器(Compact Disc Read-Only Memory,CD-ROM)、硬盘驱动器(Hard Disk Drive,HDD)、数字视频光盘(Digital Video Disc,DVD)、磁带以及光数据存储装置等。所述计算机可读介质还可以分布于通过网络联接的计算机系统中,这样计算机可读代码就可以分布式存储并执行。
上述技术描述可参照附图,这些附图形成了本申请的一部分,并且通过描述在附图中示出了依照所描述的实施例的实施方式。虽然这些实施例描述的足够详细以使本领域技术人员能够实现这些实施例,但这些实施例是非限制性的;这样就可以使用其它的实施例,并且在不脱离所描述的实施例的范围的情况下还可以做出变化。比如,流程图中所描述的操作顺序是非限制性的,因此在流程图中阐释并且根据流程图描述的两个或两个以上操作的顺序可以根据若干实施例进行改变。作为另一个例子,在若 干实施例中,在流程图中阐释并且根据流程图描述的一个或一个以上操作是可选的,或是可删除的。另外,某些步骤或功能可以添加到所公开的实施例中,或两个以上的步骤顺序被置换。所有这些变化被认为包含在所公开的实施例以及权利要求中。
另外,上述技术描述中使用术语以提供所描述的实施例的透彻理解。然而,并不需要过于详细的细节以实现所描述的实施例。因此,实施例的上述描述是为了阐释和描述而呈现的。上述描述中所呈现的实施例以及根据这些实施例所公开的例子是单独提供的,以添加上下文并有助于理解所描述的实施例。上述说明书不用于做到无遗漏或将所描述的实施例限制到本申请的精确形式。根据上述教导,若干修改、选择适用以及变化是可行的。在某些情况下,没有详细描述为人所熟知的处理步骤以避免不必要地影响所描述的实施例。虽然已经参考优选实施例对本申请进行了描述,但在不脱离本申请的范围的情况下,可以对其进行各种改进并且可以用等效物替换其中的部件。尤其是,只要不存在结构冲突,各个实施例中所提到的各项技术特征均可以任意方式组合起来。本申请并不局限于文中公开的特定实施例,而是包括落入权利要求的范围内的所有技术方案。

Claims (16)

  1. 一种处理图形符号的方法,其特征在于,所述方法包括:
    获取待处理图像的亮度分量图像;
    基于所述待处理图像的亮度分量图像进行二值化处理,得到二值化图像;
    基于所述二值化图像进行灰度形态学运算,得到目标图像,
    其中,所述待处理图像中具有图形符号。
  2. 根据权利要求1所述的方法,其特征在于,所述获取待处理图像的亮度分量图像,包括:
    根据所述待处理图像的颜色空间和目标颜色空间的映射关系,将所述待处理图像的颜色空间转换为所述目标颜色空间;
    提取所述目标颜色空间的亮度分量图像,得到所述待处理图像的亮度分量图像。
  3. 根据权利要求2所述的方法,其特征在于,所述待处理图像的颜色空间为RGB颜色空间或BGR颜色空间,所述目标颜色空间为YCbCr颜色空间、YCrCb颜色空间或YUV颜色空间。
  4. 根据权利要求2或3所述的方法,其特征在于,所述待处理图像的颜色空间和目标颜色空间的映射关系如以下公式所示:
    Y=krR+kgG+kbB;
    其中,Y是所述目标颜色空间的亮度分量图像中像素点的亮度值,R、G、B分别为所述待处理图像中像素点的红色色度值、绿色色度值和蓝色色度值;kr、kg、kb为加权因数,且满足如下关系:
    kr+kg+kb=1。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,在所述基于所述待处理图像的亮度分量图像进行二值化处理之前,所述方法还包括:
    对所述待处理图像的亮度分量图像进行对比度拉伸。
  6. 根据权利要求5所述的方法,其特征在于,所述对所述亮度分量 图像进行对比度拉伸包括:
    遍历所述待处理图像的亮度分量图像中的像素点,确定所述待处理图像的亮度分量图像中每个像素点的灰度值;
    根据所述待处理图像的亮度分量图像中每个像素点的灰度值所在的灰度范围确定对比度拉伸函数;
    根据所述对比度拉伸函数对所述待处理图像的亮度分量图像中的像素点进行灰度变换。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,所述基于所述亮度分量图像进行二值化处理包括:
    采用局部自适应二值化算法对所述待处理图像的亮度分量图像进行二值化处理。
  8. 根据权利要求7所述的方法,其特征在于,所述采用局部自适应二值化算法对所述亮度分量图像进行二值化处理包括:
    确定二值化处理窗口的尺寸;
    用所述二值化处理窗口遍历所述待处理图像的亮度分量图像的每个像素点;
    确定所述二值化处理窗口覆盖的灰度阈值;
    在所述待处理图像的亮度分量图像的每个像素点的像素值之和大于或等于所述灰度阈值的情况下,将所述二值化处理窗口中心对应的像素点的像素值设置为1,否则将所述二值化处理窗口中心对应的像素点的像素值设置为0。
  9. 根据权利要求8所述的方法,其特征在于,所述灰度阈值根据如下公式确定:
    T=(1/n²)·Σ_{i=1}^{n}Σ_{j=1}^{n}v_ij-C；
    其中，T为所述灰度阈值，n为所述二值化处理窗口的边长，v_ij为所述二值化处理窗口中第i行第j列像素的灰度值，C为常数项。
  10. 根据权利要求1至9中任一项所述的方法,其特征在于,在基于所述二值化图像进行灰度形态学运算之前,所述方法还包括:
    对所述二值化图像进行边缘保持滤波。
  11. 根据权利要求1至10中任一项所述的方法,其特征在于,所述基于所述二值化图像进行灰度形态学运算,包括:
    对所述二值化图像依次进行形态学的闭运算和开运算。
  12. 根据权利要求11所述的方法,其特征在于,所述闭运算包括:
    基于所述二值化图像选取第一结构元素;
    根据所述第一结构元素和预设的闭运算规则对所述二值化图像中的闭运算区域依次进行膨胀处理和腐蚀处理;
    所述开运算包括:
    基于所述闭运算后的图像选取第二结构元素;
    根据所述第二结构元素和预设开运算规则对所述二值化图像中的开运算区域依次进行腐蚀处理和膨胀处理。
  13. 根据权利要求1-12中任一项所述的方法,其特征在于,所述待处理图像中的所述图形符号是二维码或者条形码。
  14. 一种处理图形符号的装置,其特征在于,所述装置包括:
    获取模块,用于获取待处理图像的亮度分量图像;
    二值化处理模块,用于基于所述待处理图像的亮度分量图像进行二值化处理,输出二值化图像;
    运算模块,用于基于所述二值化图像进行灰度形态学运算,输出目标图像;
    其中,所述待处理图像具有图形符号。
  15. 一种处理图形符号的装置,其特征在于,包括处理器和存储器,所述存储器用于存储程序,所述处理器用于从所述存储器中调用并运行所述程序以执行权利要求1至13中任一项所述的处理图形符号的方法。
  16. 一种计算机可读存储介质,其特征在于,所述介质中存储有计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行权利要求1至13中任一项所述的处理图形符号的方法。
PCT/CN2023/092278 2022-07-22 2023-05-05 处理图形符号的方法、装置和计算机可读存储介质 WO2024016791A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP23762124.8A EP4332879A1 (en) 2022-07-22 2023-05-05 Method and apparatus for processing graphic symbol, and computer-readable storage medium
US18/517,022 US20240086661A1 (en) 2022-07-22 2023-11-22 Method and apparatus for processing graphic symbol and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210869883.4 2022-07-22
CN202210869883.4A CN115829848A (zh) 2022-07-22 2022-07-22 处理图形符号的方法、装置和计算机可读存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/517,022 Continuation US20240086661A1 (en) 2022-07-22 2023-11-22 Method and apparatus for processing graphic symbol and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2024016791A1 true WO2024016791A1 (zh) 2024-01-25

Family

ID=85522893

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/092278 WO2024016791A1 (zh) 2022-07-22 2023-05-05 处理图形符号的方法、装置和计算机可读存储介质

Country Status (4)

Country Link
US (1) US20240086661A1 (zh)
EP (1) EP4332879A1 (zh)
CN (1) CN115829848A (zh)
WO (1) WO2024016791A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115829848A (zh) * 2022-07-22 2023-03-21 宁德时代新能源科技股份有限公司 处理图形符号的方法、装置和计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070019257A1 (en) * 2005-07-20 2007-01-25 Xerox Corporation Background suppression method and apparatus
CN104463795A (zh) * 2014-11-21 2015-03-25 高韬 一种点阵式dm二维码图像处理方法及装置
CN107545207A (zh) * 2017-09-28 2018-01-05 云南电网有限责任公司电力科学研究院 基于图像处理的dm二维码识别方法及装置
CN108345816A (zh) * 2018-01-29 2018-07-31 广州中大微电子有限公司 一种在光照不均匀下的二维码提取方法及系统
CN113781338A (zh) * 2021-08-31 2021-12-10 咪咕文化科技有限公司 图像增强方法、装置、设备及介质
CN115829848A (zh) * 2022-07-22 2023-03-21 宁德时代新能源科技股份有限公司 处理图形符号的方法、装置和计算机可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070019257A1 (en) * 2005-07-20 2007-01-25 Xerox Corporation Background suppression method and apparatus
CN104463795A (zh) * 2014-11-21 2015-03-25 高韬 一种点阵式dm二维码图像处理方法及装置
CN107545207A (zh) * 2017-09-28 2018-01-05 云南电网有限责任公司电力科学研究院 基于图像处理的dm二维码识别方法及装置
CN108345816A (zh) * 2018-01-29 2018-07-31 广州中大微电子有限公司 一种在光照不均匀下的二维码提取方法及系统
CN113781338A (zh) * 2021-08-31 2021-12-10 咪咕文化科技有限公司 图像增强方法、装置、设备及介质
CN115829848A (zh) * 2022-07-22 2023-03-21 宁德时代新能源科技股份有限公司 处理图形符号的方法、装置和计算机可读存储介质

Also Published As

Publication number Publication date
EP4332879A1 (en) 2024-03-06
CN115829848A (zh) 2023-03-21
US20240086661A1 (en) 2024-03-14

Similar Documents

Publication Publication Date Title
JP6216871B2 (ja) 文書バウンダリ検知方法
WO2017121018A1 (zh) 二维码图像处理的方法和装置、终端、存储介质
CN110008954B (zh) 一种基于多阈值融合的复杂背景文本图像提取方法及系统
US6985631B2 (en) Systems and methods for automatically detecting a corner in a digitally captured image
EP2415015B1 (en) Barcode processing
CN102800094A (zh) 一种快速彩色图像分割方法
KR20130016213A (ko) 광학 문자 인식되는 텍스트 영상의 텍스트 개선
US9401027B2 (en) Method and apparatus for scene segmentation from focal stack images
US10438376B2 (en) Image processing apparatus replacing color of portion in image into single color, image processing method, and storage medium
WO2024016791A1 (zh) 处理图形符号的方法、装置和计算机可读存储介质
CN108965646B (zh) 图像处理装置、图像处理方法
CN108711160B (zh) 一种基于hsi增强性模型的目标分割方法
CN107256539B (zh) 一种基于局部对比度的图像锐化方法
Ramos et al. Single image highlight removal for real-time image processing pipelines
CN110599553B (zh) 一种基于YCbCr的肤色提取及检测方法
CN115272362A (zh) 一种数字病理全场图像有效区域分割方法、装置
CN111414877B (zh) 去除颜色边框的表格裁切方法、图像处理设备和存储介质
CN105809677B (zh) 一种基于双边滤波器的图像边缘检测方法及系统
CN111445402A (zh) 一种图像去噪方法及装置
Fan Enhancement of camera-captured document images with watershed segmentation
RU2383924C2 (ru) Способ адаптивного повышения резкости цифровых фотографий в процессе печати
CN109934215B (zh) 一种身份证识别方法
CN108090950A (zh) 一种优化围棋图像高光污染的方法
CN114219760A (zh) 仪表的读数识别方法、装置及电子设备
CN113902817B (zh) 一种基于灰度值的细胞图片拼接方法

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2023762124

Country of ref document: EP

Effective date: 20230906