CN112184723B - Image processing method and device, electronic equipment and storage medium - Google Patents


Publication number
CN112184723B
Authority
CN
China
Prior art keywords
image
binary
binary image
gray value
calibration
Prior art date
Legal status
Active
Application number
CN202010974907.3A
Other languages
Chinese (zh)
Other versions
CN112184723A (en)
Inventor
何滨
金顺楠
刘华水
周迪斌
陈汉清
Current Assignee
Hangzhou Santan Medical Technology Co Ltd
Original Assignee
Hangzhou Santan Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Santan Medical Technology Co Ltd filed Critical Hangzhou Santan Medical Technology Co Ltd
Priority to CN202010974907.3A priority Critical patent/CN112184723B/en
Publication of CN112184723A publication Critical patent/CN112184723A/en
Application granted granted Critical
Publication of CN112184723B publication Critical patent/CN112184723B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Abstract

The invention discloses an image processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a calibration plate image, and performing binary segmentation on the calibration plate image with a fixed threshold to obtain a first binary image; dividing the calibration plate image into a plurality of sub-images, performing adaptive thresholding on each sub-image, and merging the adaptively thresholded sub-images into a second binary image; fusing the first binary image and the second binary image to obtain a fused image; and detecting the calibration marks in the fused image. The method can filter out the influence that problems possibly present in the original image, such as uneven background light and occlusion, exert on the segmentation result, extract the calibration marks from the image more reliably, and reduce the false-detection rate of calibration mark recognition.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
In applications such as machine vision, image measurement, and three-dimensional reconstruction, a geometric model of camera imaging needs to be established in order to correct lens distortion, to determine the conversion relation between physical sizes and pixels, and to determine the relation between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in the image.
The geometric model of the camera can be obtained by photographing a calibration plate bearing a pattern array with fixed spacing and computing the model with a calibration algorithm, thereby yielding high-precision measurement and reconstruction results. The calibration algorithm presupposes that the positions of the calibration points can be accurately obtained from the image.
Disclosure of Invention
The invention provides an image processing method and apparatus, an electronic device, and a storage medium, so as to accurately acquire the positions of calibration points from an image.
Specifically, the invention is realized by the following technical scheme:
in a first aspect, there is provided an image processing method including:
acquiring a calibration plate image, and performing binary segmentation on the calibration plate image with a fixed threshold to obtain a first binary image;
dividing the calibration plate image into a plurality of sub-images, performing adaptive thresholding on each sub-image, and merging the adaptively thresholded sub-images into a second binary image;
fusing the first binary image and the second binary image to obtain a fused image;
and detecting the calibration marks in the fused image.
Optionally, fusing the first binary image and the second binary image includes:
performing a difference operation between the gray value of each pixel in the first binary image and the gray value of the corresponding pixel in the second binary image;
and taking the result of the difference operation as the gray value of each pixel of the fused image.
Optionally, fusing the first binary image and the second binary image includes:
performing inverse binarization on the second binary image;
performing weighted superposition of the gray value of each pixel in the inversely binarized second binary image with the gray value of the corresponding pixel in the first binary image;
and taking the result of the weighted superposition as the gray value of each pixel of the fused image.
Optionally, detecting the calibration marks in the fused image includes:
determining the position coordinates and/or sizes of regions in the fused image whose gray values are within a preset range;
identifying a region whose position coordinates and/or size meet preset conditions as a region where a calibration mark is located; wherein the preset conditions include: the size is within a preset size range; and the number of other position coordinates whose distance from the region's position coordinates is within a preset distance range is at least 2.
Optionally, the position coordinates and/or the size of a region are determined based on a Hough circle detection algorithm.
Optionally, the number of other position coordinates is determined based on a gradient search algorithm.
Optionally, before determining the position coordinates and/or the size of the area, the method further comprises:
optimizing the boundary of the region based on a morphological algorithm.
In a second aspect, there is provided an image processing apparatus comprising:
a binary segmentation module, configured to acquire a calibration plate image and perform binary segmentation on the calibration plate image with a fixed threshold to obtain a first binary image;
an adaptive segmentation module, configured to divide the calibration plate image into a plurality of sub-images, perform adaptive thresholding on each sub-image, and merge the adaptively thresholded sub-images into a second binary image;
a fusion module, configured to fuse the first binary image and the second binary image to obtain a fused image;
and a detection module, configured to detect the calibration marks in the fused image.
Optionally, the fusion module includes:
an operation unit, configured to perform a difference operation between the gray value of each pixel in the first binary image and the gray value of the corresponding pixel in the second binary image;
and a determining unit, configured to take the result of the difference operation as the gray value of each pixel of the fused image.
Optionally, the fusion module includes:
an inverse processing unit, configured to perform inverse binarization on the second binary image;
an operation unit, configured to perform weighted superposition of the gray value of each pixel in the inversely binarized second binary image with the gray value of the corresponding pixel in the first binary image;
and a determining unit, configured to take the weighted superposition result as the gray value of each pixel of the fused image.
Optionally, the detection module includes:
a determining unit, configured to determine the position coordinates and/or sizes of regions in the fused image whose gray values are within a preset range;
and an identification unit, configured to identify a region whose position coordinates and/or size meet preset conditions as a region where a calibration mark is located; wherein the preset conditions include: the size is within a preset size range; and the number of other position coordinates whose distance from the region's position coordinates is within a preset distance range is at least 2.
Optionally, the determining unit determines the position coordinates and/or the size of a region based on a Hough circle detection algorithm.
Optionally, the identification unit determines the number of other position coordinates based on a gradient search algorithm.
Optionally, the detection module further comprises:
and the optimizing unit is used for optimizing the boundary of the region based on a morphological algorithm.
In a third aspect, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image processing method of any of the preceding claims when executing the computer program.
In a fourth aspect, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method of any of the above.
The technical scheme provided by the embodiment of the invention can comprise the following beneficial effects:
In the embodiments of the invention, the fixed-threshold segmentation result is used as a mask and fused with the segmentation result of the block-wise adaptive thresholding. This filters out the influence that problems possibly present in the original image, such as uneven background light and occlusion, exert on the segmentation result, so that the calibration marks can be extracted from the image more reliably and the false-detection rate of calibration mark recognition is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present invention.
FIG. 2 is a schematic view of a calibration plate according to an exemplary embodiment of the present invention.
Fig. 3a is a schematic diagram of a partial region of a fused image according to an exemplary embodiment of the present invention.
Fig. 3b is a schematic diagram of the result of applying the erosion operation to Fig. 3a according to an exemplary embodiment of the present invention.
Fig. 3c is a schematic diagram of the result of applying the dilation operation to Fig. 3b according to an exemplary embodiment of the present invention.
Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
Fig. 5 is a schematic structural view of an electronic device according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of apparatus and methods consistent with aspects of the invention as recited in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the invention. The word "if" as used herein may, depending on the context, be interpreted as "when", "upon", or "in response to determining".
Accurately acquiring the positions of the calibration points from the image is a precondition for establishing the geometric model used for lens distortion correction. Existing calibration point positioning algorithms often rely on image matching: an image without calibration points is taken as a base image, and during detection the calibration plate image is searched patch by patch. When a searched patch closely matches the shape and feature information of the base image, the two are considered a match; the difference between the two then gives the position of the calibration point.
Such a matching-based detection method is only suitable for calibration plate images with a clean background and sufficient light; for images with a complex background, such as uneven illumination and strong noise, the calibration points in the image are difficult to segment and locate effectively.
In view of this, an embodiment of the invention provides an image processing method that can effectively segment and locate the calibration points in a calibration plate image affected by interference factors such as uneven illumination and occlusion.
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment of the present invention, and the method may include the steps of:
and 101, acquiring a calibration plate image, and performing binary segmentation on the calibration plate image by adopting a fixed threshold value to obtain a first binary image.
The calibration plate image is an image obtained by photographing the calibration plate with the lens to be corrected.
Referring to the calibration plate shown in Fig. 2, the calibration plate includes a plurality of calibration marks arranged in a regular pattern, and the calibration marks contrast strongly in color with the background of the calibration plate. It should be noted that the calibration marks are not limited to the circles shown in Fig. 2 and may also be ellipses, rectangles, and the like; likewise, the arrangement of the calibration marks is not limited to a rectangular array and may be a circular array or the like.
The fixed threshold may be determined according to the gray value of the calibration marks and is generally chosen close to that gray value. For example, if the calibration marks are dark, e.g. black, the fixed threshold may be chosen between 35 and 45. Assuming a fixed threshold of 35, during binary segmentation a pixel of the calibration plate image whose gray value is less than or equal to 35 is set to 0, and a pixel whose gray value is greater than 35 is set to 255. If the calibration marks are light, e.g. white, the fixed threshold may be chosen between 200 and 240. Assuming a fixed threshold of 200, a pixel whose gray value is less than 200 is set to 0, and a pixel whose gray value is greater than or equal to 200 is set to 255.
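The fixed-threshold rule just described can be sketched in a few lines of Python (an illustrative sketch, not the patent's implementation; the function name and the nested-list image representation are assumptions, and a practical implementation would typically use a library routine such as OpenCV's threshold function):

```python
def binarize_fixed(gray, threshold=35):
    """Binary segmentation with a fixed threshold, following the rule for
    dark calibration marks: gray values <= threshold become 0 (mark),
    all other gray values become 255 (background)."""
    return [[0 if p <= threshold else 255 for p in row] for row in gray]

# Dark mark pixels (<= 35) are kept as 0; brighter pixels become 255.
print(binarize_fixed([[30, 35, 36], [120, 40, 10]]))
# prints [[0, 0, 255], [255, 255, 0]]
```

The light-mark case is the same comparison with the threshold moved up to the 200–240 range and the output values swapped.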
Due to the influence of ambient light, the captured calibration plate image may be unevenly lit; performing binary segmentation with a fixed threshold removes overly bright or overly dark lighting information from the calibration plate image.
In one embodiment, before the binary segmentation, Gaussian filtering may be applied to the calibration plate image to remove noise and reduce its influence; in that case, step 101 performs binary segmentation on the Gaussian-filtered calibration plate image.
Step 102: dividing the calibration plate image into a plurality of sub-images, performing adaptive thresholding on each sub-image, and merging the adaptively thresholded sub-images into a second binary image.
When dividing the calibration plate image, the size of each sub-image may be determined according to the size of the calibration marks in the image. Optionally, each sub-image is made slightly larger than a calibration mark, so that a sub-image containing a calibration mark contains it completely.
In this embodiment, adaptive thresholding is not applied to the whole calibration plate image at once; instead, the image is divided into a plurality of sub-images so that the illumination within each sub-image is approximately uniform, and adaptive thresholding is then performed on each sub-image. The local threshold of each sub-image may be determined by, but is not limited to, computing the mean, the median, or a Gaussian-weighted average (Gaussian filtering) of the gray values of the pixels in the sub-image.
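As a sketch of this block-wise step (illustrative Python; the choice of the local mean as the threshold is one of the options listed above, and the block size, names, and nested-list representation are assumptions):

```python
def binarize_adaptive_blocks(gray, block=2):
    """Divide the image into block x block sub-images and threshold each
    sub-image at the mean gray value of its own pixels (a local threshold)."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            pixels = [gray[y][x] for y in ys for x in xs]
            local_t = sum(pixels) / len(pixels)  # local mean as threshold
            for y in ys:
                for x in xs:
                    out[y][x] = 0 if gray[y][x] <= local_t else 255
    return out

# Each block is thresholded at its own mean, so dark marks are separated
# from their local background even when global lighting varies.
print(binarize_adaptive_blocks([[10, 200], [10, 200]], block=2))
# prints [[0, 255], [0, 255]]
```

Merging the thresholded sub-images back into one image happens implicitly here, since each block writes into its own region of the output.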
Step 103: fusing the first binary image and the second binary image to obtain a fused image.
In one embodiment, the images are fused by performing a difference operation between the gray value of each pixel in the first binary image and the gray value of the corresponding pixel in the second binary image, and taking the result of the difference operation as the gray value of the corresponding pixel of the fused image.
In another embodiment, inverse binarization is first performed on the second binary image, i.e. pixels with gray value 0 are converted to 255 and pixels with gray value 255 are converted to 0. The gray value of each pixel in the inversely binarized second binary image is then superposed, with weights, on the gray value of the corresponding pixel in the first binary image, and the weighted superposition result is taken as the gray value of the corresponding pixel of the fused image.
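Both fusion variants can be sketched as follows (illustrative Python; treating the difference as an absolute value is an assumption, since the text does not specify how negative differences are handled, and the equal weights in the second variant are likewise illustrative):

```python
def fuse_difference(first, second):
    """Variant 1: per-pixel gray-value difference between the two binary
    images (taken here as an absolute difference to stay within 0..255)."""
    return [[abs(a - b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(first, second)]

def fuse_weighted(first, second, w1=0.5, w2=0.5):
    """Variant 2: inverse-binarize the second image (0 <-> 255), then
    superpose it on the first image with the given weights."""
    inverted = [[255 - p for p in row] for row in second]
    return [[int(w1 * a + w2 * b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(first, inverted)]

print(fuse_difference([[255, 0, 255]], [[255, 0, 0]]))  # prints [[0, 0, 255]]
print(fuse_weighted([[255, 0]], [[0, 255]]))            # prints [[255, 0]]
```

In both variants, pixels on which the two segmentations agree survive into the fused image, while pixels on which they disagree (typically interference) are suppressed or attenuated.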
In theory, fixed thresholding alone achieves a good segmentation of calibration marks with constant gray value, but on images affected by interference factors such as lighting and occlusion it cannot reliably distinguish interference regions from the regions to be detected. In the embodiments of the invention, the fixed-threshold segmentation result serves as a base image (mask) and is fused with the segmentation result of the block-wise adaptive thresholding. This filters out the influence of uneven background light, occlusion, and similar problems possibly present in the original image, so that the calibration marks can be determined from the image more reliably and the regions where the calibration marks are located are segmented from the background.
Step 104: detecting the calibration marks in the fused image.
Each pixel of the fused image has gray value 0 or 255. If the calibration marks are light, a region whose pixels have value 255 is generally a calibration mark; if the calibration marks are dark, a region whose pixels have value 0 is generally a calibration mark. By checking whether the gray value lies within the preset range, it can be preliminarily judged whether the corresponding region of the fused image is a region where a calibration mark is located.
In a calibration plate image captured in a complex environment, a region with gray value 0 may be a shadow and a region with gray value 255 may be a reflection. To confirm that a region of the fused image whose gray value is within the preset range really is a region where a calibration mark is located, the initial calibration marks need to be examined further so that interference points among them can be eliminated. An initial calibration mark is a calibration mark determined by the preliminary gray-value judgment.
In one embodiment, the size of each initial calibration mark can be examined so as to eliminate those whose size is not qualified. Taking circular calibration marks as an example, the radius of each initial calibration mark can be determined with a Hough circle detection algorithm and used to represent its size; initial calibration marks whose radius is within a preset size range are accepted as final calibration marks, while those whose radius is outside the range are eliminated. The preset size range is determined according to the actual size of the calibration marks on the calibration plate.
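The size check reduces to a simple range filter over detected circles (illustrative Python; the `(x, y, r)` tuple format is an assumption about what a circle detector such as a Hough transform would return):

```python
def filter_by_radius(circles, r_min, r_max):
    """circles: list of (x, y, r) tuples from a circle detector.
    Keep only circles whose radius lies within the preset size range."""
    return [c for c in circles if r_min <= c[2] <= r_max]

# A tiny speck (r=1) and a large reflection (r=40) are eliminated.
print(filter_by_radius([(5, 5, 1), (20, 20, 8), (50, 50, 40)], 5, 12))
# prints [(20, 20, 8)]
```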
In one embodiment, interference points can be eliminated by counting the adjacent initial calibration marks of each initial calibration mark, adjacency being determined from the distance between the position coordinates of the initial calibration marks. Since the calibration marks on a calibration plate are generally arranged in a fixed regular pattern, at least two other calibration marks exist in the vicinity of any calibration mark. Taking circular calibration marks as an example, a Hough circle detection algorithm can (but need not) be used to determine the center coordinates of each initial calibration mark, which represent its position coordinates. The distance between initial calibration marks is computed from the center coordinates, and the other initial calibration marks whose distance from a given mark falls within a preset distance range are its adjacent initial calibration marks. If an initial calibration mark has at least 2 adjacent initial calibration marks, it is determined to be a final calibration mark; if it has only 1 adjacent initial calibration mark, or none, it is eliminated. The preset distance range is determined according to the spacing of the calibration marks on the calibration plate.
When determining adjacency, a gradient search over the center coordinates can be performed mark by mark so as to quickly and accurately count the adjacent initial calibration marks of each initial calibration mark.
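The adjacency condition can be checked with a brute-force pairwise distance count (illustrative Python; the text suggests a gradient search for speed, whereas this sketch uses a simpler O(n²) scan):

```python
import math

def filter_by_neighbors(centers, dist_min, dist_max, min_neighbors=2):
    """Keep only circle centers that have at least `min_neighbors` other
    centers whose distance falls within [dist_min, dist_max]."""
    kept = []
    for i, (x1, y1) in enumerate(centers):
        n = sum(1 for j, (x2, y2) in enumerate(centers)
                if i != j and dist_min <= math.hypot(x2 - x1, y2 - y1) <= dist_max)
        if n >= min_neighbors:
            kept.append((x1, y1))
    return kept

# In a row of marks spaced 10 apart, only the middle one has 2 neighbors
# in range; the isolated outlier at (100, 100) has none and is eliminated.
print(filter_by_neighbors([(0, 0), (10, 0), (20, 0), (100, 100)], 8, 12))
# prints [(10, 0)]
```

On a full grid of marks, interior marks have 4 or more neighbors within the grid spacing, so the threshold of 2 keeps the whole pattern while discarding isolated interference points.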
In another embodiment, the size condition and the adjacency condition may be applied jointly: an initial calibration mark is determined to be a final calibration mark only if its size is within the preset size range and it has at least two adjacent initial calibration marks. With these 2 restrictions, interference points among the initial calibration marks can be filtered out accurately, more accurate calibration mark positions are obtained, and the false-detection rate of calibration mark recognition is reduced.
In another embodiment, before the position coordinates and/or sizes of the initial calibration marks are determined, the regions where they are located may be optimized with a morphological algorithm so as to accurately determine the boundary of each initial calibration mark region.
The morphological processing of the region where an initial calibration mark is located may use a morphological opening or a morphological closing. Opening erodes the image and then dilates it; it can eliminate small objects, separate objects at thin connections, and smooth the boundaries of larger objects without noticeably changing their area. Closing dilates the image and then erodes it; it can eliminate small black holes (black regions).
The following describes a specific implementation of the morphological processing of an initial calibration mark region, taking the morphological opening as an example:
Mathematically, a dilation or erosion operation convolves the initial calibration mark region with a kernel. Taking the partial region of a fused image shown in Fig. 3a as an example, each small square represents a pixel, the diagonally filled squares represent the initial calibration mark region, the white squares represent the background, and the 3×3 dashed square represents the kernel. The kernel may have any shape and size and is not limited to the 3×3 square shown in the figure; it has a single defined reference point, shown at the marked position in the figure.
Erosion is a local-minimum operation: the kernel is slid over the initial calibration mark region (Fig. 3a), and at each position the minimum gray value of the pixels covered by the kernel is computed and assigned to the pixel at the reference point. Fig. 3b shows the result of eroding Fig. 3a; comparing the two figures, the diagonally filled region shrinks after the erosion.
Dilation is a local-maximum operation: the kernel is slid over Fig. 3b, and at each position the maximum gray value of the pixels covered by the kernel is computed and assigned to the pixel at the reference point, so that the diagonally filled region in the image grows. Fig. 3c shows the result of dilating Fig. 3b; the diagonally filled region in it is the optimized initial calibration mark region, and its boundary is the boundary of that region.
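The opening described in this section, a sliding-minimum filter followed by a sliding-maximum filter, can be sketched as follows (illustrative Python; clamping the window at the image border is an implementation choice not specified in the text):

```python
def _window_filter(img, k, pick):
    """Slide a k x k window over the image and assign pick (min or max) of
    the covered pixels to the pixel at the window's reference point."""
    h, w, r = len(img), len(img[0]), k // 2
    return [[pick(img[yy][xx]
                  for yy in range(max(0, y - r), min(h, y + r + 1))
                  for xx in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)]
            for y in range(h)]

def erode(img, k=3):
    return _window_filter(img, k, min)   # local minimum, as in Fig. 3a -> 3b

def dilate(img, k=3):
    return _window_filter(img, k, max)   # local maximum, as in Fig. 3b -> 3c

def morph_open(img, k=3):
    """Opening = erosion then dilation: removes small bright specks while
    larger bright regions keep roughly their original extent."""
    return dilate(erode(img, k), k)
```

On a 5×5 image containing a single bright pixel, `morph_open` returns an all-dark image, whereas a 3×3 bright block survives the opening with its extent restored, which is exactly the small-object-removal behavior described above.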
The present invention also provides an embodiment of an image processing apparatus corresponding to the foregoing embodiment of the image processing method.
Fig. 4 is a schematic structural view of an image processing apparatus according to an exemplary embodiment of the present invention, which may include:
the binary segmentation module 41 is configured to obtain a calibration plate image, and perform binary segmentation on the calibration plate image by using a fixed threshold value to obtain a first binary image;
the adaptive segmentation module 42 is configured to divide the calibration plate image into a plurality of sub-images, perform adaptive thresholding on each sub-image, and combine each sub-image subjected to the adaptive thresholding into a second binary image;
a fusion module 43, configured to fuse the first binary image and the second binary image to obtain a fused image;
the detection module 44 is configured to detect the calibration identifier in the fused image.
Optionally, the fusion module includes:
an operation unit, configured to perform a difference operation between the gray value of each pixel in the first binary image and the gray value of the corresponding pixel in the second binary image;
and a determining unit, configured to take the result of the difference operation as the gray value of each pixel of the fused image.
Optionally, the fusion module includes:
the reverse processing unit is used for performing reverse binarization processing on the second binary image;
the operation unit is used for carrying out weighted superposition on the gray value of each pixel point in the second binary image subjected to inverse binarization processing and the gray value of the corresponding pixel point in the first binary image;
and the determining unit is used for determining the weighted superposition result as the gray value of each pixel point of the fusion image.
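The second fusion variant can be sketched similarly; the equal 0.5/0.5 weights are an illustrative assumption, since the patent does not specify weight values:

```python
import numpy as np

first  = np.array([[0, 255], [0, 255]], dtype=np.uint8)  # fixed-threshold result
second = np.array([[0, 255], [255, 0]], dtype=np.uint8)  # adaptive result

inverted = 255 - second        # inverse binarization of the second image
w1, w2 = 0.5, 0.5              # assumed weights
fused = (w1 * first + w2 * inverted).astype(np.uint8)
print(fused.tolist())  # [[127, 127], [0, 255]]
```

With these weights, pixels where the two segmentations agree land at mid-gray (127), while disagreements are pushed to 0 or 255, so consistent regions are separated from ambiguous ones in a single pass.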
Optionally, the detection module includes:
a determining unit, configured to determine position coordinates and/or a size of an area in the fused image whose gray value is within a preset range;
the identification unit is used for identifying the area whose position coordinates and/or size meet preset conditions as the area where the calibration mark is located; wherein the preset conditions include: the size is within a preset size range; and the number of other position coordinates whose distance from the position coordinates is within a preset distance range is at least 2.
Optionally, the determining unit determines the position coordinates and/or the size of the region based on a hough circle detection algorithm.
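The core of Hough circle detection is a voting accumulator. The sketch below votes at a single known radius, a simplification: real detectors such as OpenCV's HoughCircles also search over radius and exploit edge gradients, and the 64-angle sampling and image size here are choices made for the example:

```python
import numpy as np

def hough_circle(edges, radius):
    """Vote for circle centers at one fixed radius: every edge pixel votes
    along a circle of that radius around itself."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic edge map: a circle of radius 5 centered at (10, 10).
edges = np.zeros((21, 21), dtype=np.uint8)
ts = np.linspace(0, 2 * np.pi, 100, endpoint=False)
edges[np.round(10 + 5 * np.sin(ts)).astype(int),
      np.round(10 + 5 * np.cos(ts)).astype(int)] = 1

acc = hough_circle(edges, radius=5)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cy, cx)  # accumulator peak lies at (or immediately next to) the true center (10, 10)
```

The peak of the accumulator gives the position coordinates; scanning several radii and comparing peak heights additionally recovers the size.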
Optionally, the identification unit determines the number of other location coordinates based on a gradient search algorithm.
Optionally, the detection module further comprises:
and the optimizing unit is used for optimizing the boundary of the region based on a morphological algorithm.
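The neighbor-distance condition checked by the identification unit can be sketched as follows; the candidate coordinates and the preset distance range (a grid pitch of 15 to 25) are assumptions made for the example:

```python
import math

# Candidate mark centers (x, y): four on a regular grid plus one outlier.
centers = [(10, 10), (30, 10), (10, 30), (30, 30), (95, 80)]
d_min, d_max = 15, 25  # assumed preset distance range

def neighbors_in_range(c, others):
    """Count other centers whose distance from c falls in the preset range."""
    return sum(1 for o in others
               if o != c and d_min <= math.dist(c, o) <= d_max)

# Keep only candidates with at least 2 neighbors at grid spacing;
# the isolated point (95, 80) is rejected as a false detection.
marks = [c for c in centers if neighbors_in_range(c, centers) >= 2]
print(marks)  # [(10, 10), (30, 10), (10, 30), (30, 30)]
```

The requirement of at least two neighbors at the expected spacing exploits the fact that calibration marks lie on a regular grid, so an isolated circle-like region (for example a reflection) is filtered out.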
Since the device embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present invention. Those of ordinary skill in the art can understand and implement the present invention without undue burden.
Fig. 5 is a schematic diagram of an exemplary electronic device 50 suitable for implementing embodiments of the present invention. The electronic device 50 shown in fig. 5 is merely an example and should not impose any limitation on the functionality and scope of use of embodiments of the present invention.
As shown in fig. 5, the electronic device 50 may be embodied in the form of a general-purpose computing device, which may be, for example, a server device. Components of the electronic device 50 may include, but are not limited to: at least one processor 51, at least one memory 52, and a bus 53 connecting the different system components (including the memory 52 and the processor 51).
The bus 53 includes a data bus, an address bus, and a control bus.
Memory 52 may include volatile memory such as Random Access Memory (RAM) 521 and/or cache memory 522, and may further include Read Only Memory (ROM) 523.
The memory 52 may also include a program/utility 525 having a set (at least one) of program modules 524. Such program modules 524 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
The processor 51 executes various functional applications and data processing, such as the methods provided in any of the embodiments described above, by running a computer program stored in the memory 52.
The electronic device 50 may also communicate with one or more external devices 54 (e.g., a keyboard, a pointing device, etc.). Such communication may occur through an input/output (I/O) interface 55. The electronic device 50 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 56. As shown, the network adapter 56 communicates with the other modules of the electronic device 50 via the bus 53. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 50, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the present invention, the features and functions of two or more units/modules described above may be embodied in one unit/module. Conversely, the features and functions of one unit/module described above may be further divided and embodied by a plurality of units/modules.
The embodiment of the present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method provided by any of the embodiments described above.
The foregoing description covers only preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (11)

1. An image processing method, comprising:
acquiring a calibration plate image, and performing binary segmentation on the calibration plate image by adopting a fixed threshold value to obtain a first binary image;
dividing the calibration plate image into a plurality of sub-images, carrying out self-adaptive threshold processing on each sub-image, and merging each sub-image subjected to the self-adaptive threshold processing into a second binary image;
fusing the first binary image and the second binary image to obtain a fused image;
detecting a calibration mark in the fusion image;
detecting the calibration mark in the fusion image comprises the following steps:
determining position coordinates and/or sizes of areas with gray values within a preset range in the fusion image;
identifying the area whose position coordinates and/or size meet preset conditions as the area where the calibration mark is located; wherein the preset conditions include: the size is within a preset size range; and the number of other position coordinates whose distance from the position coordinates is within a preset distance range is at least 2.
2. The image processing method according to claim 1, wherein fusing the first binary image and the second binary image includes:
performing difference value operation on the gray value of each pixel point in the first binary image and the gray value of the corresponding pixel point in the second binary image;
and taking the result of the difference operation as the gray value of each pixel point of the fusion image.
3. The image processing method according to claim 1, wherein fusing the first binary image and the second binary image includes:
performing inverse binarization processing on the second binary image;
performing weighted superposition on the gray value of each pixel point in the second binary image subjected to inverse binarization processing and the gray value of the corresponding pixel point in the first binary image;
and taking the weighted superposition result as the gray value of each pixel point of the fusion image.
4. The image processing method according to claim 1, wherein the position coordinates and/or the size of the region are determined based on a hough circle detection algorithm.
5. The image processing method according to claim 1, wherein the number of the other position coordinates is determined based on a gradient search algorithm.
6. The image processing method according to claim 1, characterized in that before determining the position coordinates and/or the size of the area, further comprising:
optimizing the boundary of the region based on a morphological algorithm.
7. An image processing apparatus, comprising:
the binary segmentation module is used for acquiring a calibration plate image and carrying out binary segmentation on the calibration plate image by adopting a fixed threshold value to obtain a first binary image;
the self-adaptive segmentation module is used for dividing the calibration plate image into a plurality of sub-images, carrying out self-adaptive threshold processing on each sub-image, and merging each sub-image subjected to the self-adaptive threshold processing into a second binary image;
the fusion module is used for fusing the first binary image and the second binary image to obtain a fused image;
the detection module is used for detecting the calibration mark in the fusion image;
the detection module comprises:
a determining unit, configured to determine position coordinates and/or a size of an area in the fused image whose gray value is within a preset range;
the identification unit is used for identifying the area whose position coordinates and/or size meet preset conditions as the area where the calibration mark is located; wherein the preset conditions include: the size is within a preset size range; and the number of other position coordinates whose distance from the position coordinates is within a preset distance range is at least 2.
8. The image processing apparatus of claim 7, wherein the fusing module comprises:
the operation unit is used for carrying out difference value operation on the gray value of each pixel point in the first binary image and the gray value of the corresponding pixel point in the second binary image;
and the determining unit is used for determining the result of the difference operation as the gray value of each pixel point of the fusion image.
9. The image processing apparatus of claim 7, wherein the fusing module comprises:
the reverse processing unit is used for performing reverse binarization processing on the second binary image;
the operation unit is used for carrying out weighted superposition on the gray value of each pixel point in the second binary image subjected to inverse binarization processing and the gray value of the corresponding pixel point in the first binary image;
and the determining unit is used for determining the weighted superposition result as the gray value of each pixel point of the fusion image.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the image processing method of any of claims 1 to 6 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the image processing method of any one of claims 1 to 6.
CN202010974907.3A 2020-09-16 2020-09-16 Image processing method and device, electronic equipment and storage medium Active CN112184723B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010974907.3A CN112184723B (en) 2020-09-16 2020-09-16 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010974907.3A CN112184723B (en) 2020-09-16 2020-09-16 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112184723A CN112184723A (en) 2021-01-05
CN112184723B true CN112184723B (en) 2024-03-26

Family

ID=73921351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010974907.3A Active CN112184723B (en) 2020-09-16 2020-09-16 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112184723B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114913236A (en) * 2021-02-09 2022-08-16 深圳市汇顶科技股份有限公司 Camera calibration method and device and electronic equipment
CN113762266B (en) * 2021-09-01 2024-04-26 北京中星天视科技有限公司 Target detection method, device, electronic equipment and computer readable medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013128617A1 (en) * 2012-03-01 2013-09-06 株式会社日本マイクロニクス Display unevenness detection method and device for display device
CN104966302A (en) * 2015-07-09 2015-10-07 深圳中科智酷机器人科技有限公司 Detecting and positioning method for laser cross at any angle
CN105160652A (en) * 2015-07-10 2015-12-16 天津大学 Handset casing testing apparatus and method based on computer vision
CN105719275A (en) * 2015-12-10 2016-06-29 中色科技股份有限公司 Parallel combination image defect segmentation method
CN108036929A (en) * 2017-12-27 2018-05-15 上海玮舟微电子科技有限公司 A kind of detection method of display device row graph parameter, apparatus and system
CN108171756A (en) * 2017-12-27 2018-06-15 苏州多比特软件科技有限公司 Self-adapting calibration method, apparatus and terminal
CN109345597A (en) * 2018-09-27 2019-02-15 四川大学 A kind of camera calibration image-pickup method and device based on augmented reality
CN109559324A (en) * 2018-11-22 2019-04-02 北京理工大学 A kind of objective contour detection method in linear array images
CN109615659A (en) * 2018-11-05 2019-04-12 成都西纬科技有限公司 A kind of the camera parameters preparation method and device of vehicle-mounted multiple-camera viewing system
CN109903272A (en) * 2019-01-30 2019-06-18 西安天伟电子系统工程有限公司 Object detection method, device, equipment, computer equipment and storage medium
KR20200000953A (en) * 2018-06-26 2020-01-06 주식회사 수올리나 Around view monitoring system and calibration method for around view cameras
WO2020010945A1 (en) * 2018-07-11 2020-01-16 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium
CN110879131A (en) * 2018-09-06 2020-03-13 舜宇光学(浙江)研究院有限公司 Imaging quality testing method and imaging quality testing device for visual optical system, and electronic apparatus
CN110895806A (en) * 2019-07-25 2020-03-20 研祥智能科技股份有限公司 Method and system for detecting screen display defects
CN111091571A (en) * 2019-12-12 2020-05-01 珠海圣美生物诊断技术有限公司 Nucleus segmentation method and device, electronic equipment and computer-readable storage medium
CN111340752A (en) * 2019-12-04 2020-06-26 京东方科技集团股份有限公司 Screen detection method and device, electronic equipment and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10818011B2 (en) * 2017-12-29 2020-10-27 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Carpal segmentation and recognition method and system, terminal and readable storage medium


Also Published As

Publication number Publication date
CN112184723A (en) 2021-01-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant