CN112184723A - Image processing method and device, electronic device and storage medium

Info

Publication number: CN112184723A (granted as CN112184723B)
Application number: CN202010974907.3A
Authority: CN (China)
Legal status: Granted; Active
Other languages: Chinese (zh)
Prior art keywords: image, binary, calibration, gray value, binary image
Inventors: 何滨, 金顺楠, 刘华水, 周迪斌, 陈汉清
Assignee (original and current): Hangzhou Santan Medical Technology Co Ltd
Application filed by Hangzhou Santan Medical Technology Co Ltd
Priority to CN202010974907.3A

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/11 Region-based segmentation
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/80 Geometric correction
    • G06T7/136 Segmentation; edge detection involving thresholding
    • G06T7/155 Segmentation; edge detection involving morphological operators
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/20221 Image fusion; image merging
    • G06T2207/30204 Marker


Abstract

The invention discloses an image processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: obtaining a calibration plate image, and performing binary segmentation on the calibration plate image with a fixed threshold to obtain a first binary image; dividing the calibration plate image into a plurality of sub-images, performing adaptive threshold processing on each sub-image, and merging the processed sub-images into a second binary image; fusing the first binary image and the second binary image to obtain a fused image; and detecting calibration marks in the fused image. The method filters out the influence that problems such as uneven background light and occlusion in the original image may have on the segmentation result, extracts the calibration marks from the image more reliably, and reduces the false detection rate of calibration mark recognition.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
In applications such as machine vision, image measurement, and three-dimensional reconstruction, a geometric model of camera imaging must be established in order to correct lens distortion, determine the conversion relationship between physical dimensions and pixels, and determine the correspondence between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in the image.
By photographing a calibration plate that displays a pattern with fixed spacing, the geometric model of the camera can be computed by a calibration algorithm, yielding high-precision measurement and reconstruction results. A prerequisite of the calibration algorithm is that the positions of the calibration points can be accurately obtained from the image.
Disclosure of Invention
The invention provides an image processing method and apparatus, an electronic device, and a storage medium, which are used to accurately acquire the positions of calibration points from an image.
Specifically, the invention is realized by the following technical scheme:
in a first aspect, an image processing method is provided, including:
obtaining a calibration plate image, and performing binary segmentation on the calibration plate image with a fixed threshold to obtain a first binary image;
dividing the calibration plate image into a plurality of sub-images, performing adaptive threshold processing on each sub-image, and merging the processed sub-images into a second binary image;
fusing the first binary image and the second binary image to obtain a fused image;
and detecting a calibration mark in the fused image.
Optionally, fusing the first binary image and the second binary image, including:
performing difference operation on the gray value of each pixel point in the first binary image and the gray value of the corresponding pixel point in the second binary image;
and taking the result of the difference operation as the gray value of each pixel point of the fused image.
Optionally, fusing the first binary image and the second binary image, including:
carrying out reverse binarization processing on the second binary image;
weighting and superposing the gray value of each pixel point in the second binary image subjected to reverse binarization and the gray value of the corresponding pixel point in the first binary image;
and taking the weighted superposition result as the gray value of each pixel point of the fusion image.
Optionally, detecting a calibration mark in the fused image includes:
determining the position coordinates and/or the size of each region in the fused image whose gray value is within a preset range;
identifying a region whose position coordinates and/or size meet preset conditions as the region of a calibration mark; wherein the preset conditions include: the size is within a preset size range; and the number of other position coordinates within a preset distance range of the region's position coordinates is at least 2.
Optionally, the position coordinates and/or size of the region are determined based on a Hough circle detection algorithm.
Optionally, the number of other location coordinates is determined based on a gradient search algorithm.
Optionally, before determining the position coordinates and/or the size of the area, the method further includes:
optimizing the boundaries of the region based on a morphological algorithm.
In a second aspect, there is provided an image processing apparatus comprising:
the binary segmentation module is used for acquiring a calibration plate image and performing binary segmentation on the calibration plate image by adopting a fixed threshold value to obtain a first binary image;
the self-adaptive segmentation module is used for dividing the calibration plate image into a plurality of sub-images, performing self-adaptive threshold processing on each sub-image, and combining each sub-image subjected to the self-adaptive threshold processing into a second binary image;
the fusion module is used for fusing the first binary image and the second binary image to obtain a fused image;
and the detection module is used for detecting the calibration mark in the fused image.
Optionally, the fusion module comprises:
the operation unit is used for performing difference operation on the gray value of each pixel point in the first binary image and the gray value of the corresponding pixel point in the second binary image;
and the determining unit is used for determining the result of the difference operation as the gray value of each pixel point of the fusion image.
Optionally, the fusion module comprises:
a reverse processing unit configured to perform reverse binarization processing on the second binary image;
the operation unit is used for weighting and superposing the gray value of each pixel point in the second binary image subjected to reverse binarization processing and the gray value of the corresponding pixel point in the first binary image;
and the determining unit is used for determining the weighted superposition result as the gray value of each pixel point of the fusion image.
Optionally, the detection module includes:
the determining unit is used for determining the position coordinates and/or the size of a region of which the gray value is within a preset range in the fused image;
the identification unit is used for identifying the area of which the position coordinates and/or the size meet the preset conditions as the area of the calibration mark; wherein the preset conditions include: the size is within a preset size range; the number of other position coordinates which are within a preset distance range from the position coordinates is at least 2.
Optionally, the determining unit determines the position coordinates and/or the size of the region based on a Hough circle detection algorithm.
Optionally, the identification unit determines the number of other location coordinates based on a gradient search algorithm.
Optionally, the detection module further includes:
and the optimization unit is used for optimizing the boundary of the region based on a morphological algorithm.
In a third aspect, an electronic device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the image processing method according to any one of the above items when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the image processing method of any of the above.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
in the embodiment of the invention, the fixed-threshold segmentation result is used as a mask, and the block-wise adaptive threshold segmentation result is fused with the mask, so that the influence on the segmentation result of problems such as uneven background light and occlusion that may exist in the original image can be filtered out, the calibration marks can be better extracted from the image, and the false detection rate of calibration mark recognition is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present invention.
Fig. 2 is a schematic diagram of a calibration plate shown in an exemplary embodiment of the invention.
Fig. 3a is a schematic diagram of a partial region of a fused image according to an exemplary embodiment of the present invention.
Fig. 3b is a graph illustrating the results of the etching operation performed on fig. 3a according to an exemplary embodiment of the present invention.
FIG. 3c is a graphical representation of the results of the expansion operation of FIG. 3b in accordance with an exemplary embodiment of the present invention.
Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
Fig. 5 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
Accurately acquiring the positions of the calibration points from the image is a precondition for establishing the geometric model used for lens distortion correction. Existing calibration point localization algorithms usually rely on image matching, which requires an image without a calibration point as a base image. During detection, the calibration plate image is searched region by region; when a searched region closely coincides with the base image in shape and feature information, the two are determined to be matching images, the difference between them is computed, and the difference result gives the calibration point position.
This calibration point detection method is only suitable for calibration plate images with a clean background and sufficient light; for images with a complex background, such as uneven illumination and strong noise, it is difficult to effectively segment and locate the marker points in the image.
In view of the above, embodiments of the present invention provide an image processing method that can effectively segment and locate marker points in calibration plate images affected by interference factors such as uneven illumination and occlusion.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present invention, which may include the steps of:
Step 101, obtain a calibration plate image, and perform binary segmentation on the calibration plate image with a fixed threshold to obtain a first binary image.
The calibration plate image is an image obtained by photographing the calibration plate with the lens to be corrected.
Referring to the calibration plate shown in fig. 2, the calibration plate contains a plurality of calibration marks arranged according to a certain rule, and the calibration marks differ strongly in color from the background of the calibration plate. It should be noted that the calibration marks are not limited to the circles shown in fig. 2 and may also be ellipses, rectangles, and the like; likewise, their arrangement is not limited to a rectangular array and may be a circular array or the like.
The fixed threshold may be determined according to the gray value of the calibration marks and is generally selected to be close to that gray value. For example, if the calibration marks are dark, e.g. black, the fixed threshold may be selected from the range 35 to 45. Assuming the fixed threshold is 35, during binary segmentation a pixel of the calibration plate image whose gray value is less than or equal to 35 is set to 0, and a pixel whose gray value is greater than 35 is set to 255. If the calibration marks are light, e.g. white, the fixed threshold may be selected from the range 200 to 240. Assuming the fixed threshold is 200, a pixel whose gray value is less than 200 is set to 0, and a pixel whose gray value is greater than or equal to 200 is set to 255.
Because of ambient light, the captured calibration plate image may be unevenly lit; performing binary segmentation with a fixed threshold removes overly bright or overly dark lighting information from the calibration plate image.
In an embodiment, before the binary segmentation, Gaussian filtering may be applied to the calibration plate image to remove noise and reduce its influence; in that case, step 101 performs binary segmentation on the Gaussian-filtered calibration plate image.
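As an illustrative sketch (not part of the patent text), the fixed-threshold segmentation of step 101 can be expressed in NumPy; the threshold value 35 for dark calibration marks is taken from the example above:

```python
import numpy as np

def fixed_threshold_segment(image: np.ndarray, threshold: int = 35) -> np.ndarray:
    """Binary segmentation with a fixed threshold (dark calibration marks).

    Pixels at or below the threshold become 0 (candidate mark);
    all other pixels become 255 (background).
    """
    return np.where(image <= threshold, 0, 255).astype(np.uint8)

# Small synthetic calibration-plate patch: a dark 2x2 mark on a bright background.
patch = np.full((4, 4), 200, dtype=np.uint8)
patch[1:3, 1:3] = 20
first_binary = fixed_threshold_segment(patch, threshold=35)
```

The result contains only the gray values 0 and 255, matching the first binary image described in the text.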
Step 102, divide the calibration plate image into a plurality of sub-images, perform adaptive threshold processing on each sub-image, and merge the processed sub-images into a second binary image.
When dividing the calibration plate image, the size of each sub-image may be determined according to the size of the calibration marks in the image; optionally, each sub-image is slightly larger than a calibration mark, so that a sub-image containing a calibration mark contains the complete mark.
In this embodiment, adaptive threshold segmentation is not applied to the whole calibration plate image at once; instead, the image is divided into sub-images so that the illumination within each sub-image is approximately uniform, and adaptive threshold processing is then applied to each sub-image. The local threshold of each sub-image may be determined by, but is not limited to, the mean, the median, or a Gaussian-weighted average (Gaussian filtering) of the gray values of the pixels in the sub-image region.
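A minimal NumPy sketch of the block-wise adaptive thresholding of step 102, using the local mean of each sub-image as its threshold (the block size, the offset parameter, and the choice of the mean criterion are illustrative assumptions, not mandated by the text):

```python
import numpy as np

def block_adaptive_threshold(image: np.ndarray, block: int = 4, offset: float = 0.0) -> np.ndarray:
    """Divide the image into block x block sub-images, threshold each one
    with its own local mean, and merge the results into one binary image."""
    h, w = image.shape
    out = np.empty_like(image)
    for y in range(0, h, block):
        for x in range(0, w, block):
            sub = image[y:y + block, x:x + block]
            local_t = sub.mean() - offset  # local mean as the adaptive threshold
            out[y:y + block, x:x + block] = np.where(sub <= local_t, 0, 255)
    return out.astype(np.uint8)

# Uneven lighting: the right half is brighter than the left, with one dark
# mark in each half. The mark in the bright half (gray value 150) would be
# missed by the fixed threshold 35, but the local mean still separates it.
img = np.full((4, 8), 100, dtype=np.uint8)
img[:, 4:] = 220           # brighter right half
img[1:3, 1:3] = 40         # mark in the darker half
img[1:3, 5:7] = 150        # mark in the brighter half
second_binary = block_adaptive_threshold(img, block=4)
```

Because each sub-image is thresholded against its own illumination level, marks survive in both halves despite the uneven lighting.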
Step 103, fuse the first binary image and the second binary image to obtain a fused image.
In an embodiment, image fusion may be performed by taking the difference between the gray value of each pixel in the first binary image and the gray value of the corresponding pixel in the second binary image, and using the result of the difference operation as the gray value of each pixel of the fused image.
In another embodiment, reverse binarization may first be applied to the second binary image, i.e. pixels with gray value 0 are converted to 255 and pixels with gray value 255 are converted to 0. The gray value of each pixel in the reverse-binarized second binary image is then superposed, with weights, on the gray value of the corresponding pixel in the first binary image, and the weighted superposition result is used as the gray value of each pixel of the fused image.
Fixed thresholding can in theory segment calibration marks of constant gray value well, but when it is applied to images with interference factors such as lighting and occlusion, interference regions cannot be well distinguished from the regions to be detected. In the embodiment of the invention, the fixed-threshold segmentation result is used as a base image (mask) and the block-wise adaptive threshold segmentation result is fused with it, so that the influence on the segmentation result of problems such as uneven background light and occlusion that may exist in the original image is filtered out, the calibration marks can be better determined from the image, and the regions where the calibration marks are located can be separated from the background.
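Both fusion variants of step 103 can be sketched as follows (an illustrative sketch, not the patent's implementation: the absolute value in the difference variant, and the equal weights 0.5/0.5 in the weighted variant, are assumptions chosen so the result remains a valid gray value):

```python
import numpy as np

def fuse_difference(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Fusion variant 1: per-pixel difference of the two binary images
    (absolute value keeps the result in the 0..255 gray range)."""
    return np.abs(first.astype(np.int16) - second.astype(np.int16)).astype(np.uint8)

def fuse_weighted(first: np.ndarray, second: np.ndarray,
                  w_first: float = 0.5, w_second: float = 0.5) -> np.ndarray:
    """Fusion variant 2: reverse-binarize the second image, then take a
    weighted superposition with the first image (weights are illustrative)."""
    inverted = (255 - second).astype(np.float32)  # reverse binarization: 0 <-> 255
    fused = w_first * first.astype(np.float32) + w_second * inverted
    return np.clip(fused, 0, 255).astype(np.uint8)

first = np.array([[0, 255], [255, 0]], dtype=np.uint8)   # fixed-threshold mask
second = np.array([[0, 255], [0, 0]], dtype=np.uint8)    # adaptive result
diff_fused = fuse_difference(first, second)
weighted_fused = fuse_weighted(first, second)
```

In the difference variant, only pixels where the two segmentations disagree remain bright, which is how the mask suppresses interference regions present in one result but not the other.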
Step 104, detect the calibration marks in the fused image.
The gray value of each pixel in the fused image is 0 or 255. If the calibration marks are light, the regions of pixels with value 255 are generally the calibration marks; if the marks are dark, the regions of pixels with value 0 generally are. Whether a region of the fused image is a region where a calibration mark is located can therefore be judged preliminarily by checking whether its gray value lies within the preset range.
In a calibration plate image captured in a complex environment, however, a region with gray value 0 may be a shadow and a region with gray value 255 may be a reflection. To further confirm that a region whose gray value is within the preset range is indeed a calibration mark region, the initial calibration marks, i.e. those determined by the preliminary gray value judgment, must be examined further and the interference points among them eliminated.
In one embodiment, interference points may be excluded from the initial calibration marks according to their size. Taking circular calibration marks as an example, the radius of each initial calibration mark may be determined with, but not limited to, a Hough circle detection algorithm, and the radius represents the size of the mark. Initial calibration marks whose radius is within a preset size range are accepted as final calibration marks, and those whose radius lies outside that range are excluded. The preset size range is determined from the actual size of the calibration marks on the calibration plate.
In one embodiment, interference points may be excluded according to the number of adjacent initial calibration marks each initial calibration mark has. Whether two initial calibration marks are adjacent can be determined from the distance between their position coordinates. Since the calibration marks on the calibration plate are usually arranged according to a fixed rule, at least two other calibration marks exist in the vicinity of any true calibration mark. Again taking circular marks as an example, the center coordinates of each initial calibration mark may be determined with, but not limited to, a Hough circle detection algorithm; the center coordinates represent the position coordinates, the distances between initial calibration marks are computed from them, and any other initial calibration mark whose distance from a given mark is within a preset distance range is counted as an adjacent mark of that mark. If an initial calibration mark has at least 2 adjacent initial calibration marks, it is accepted as a final calibration mark; if it has only 1 adjacent mark or none, it is excluded. The preset distance range is determined from the spacing of the calibration marks on the calibration plate.
When determining the adjacent initial calibration marks, a gradient search over the center coordinates may be performed one by one in order to quickly and accurately count the adjacent marks of each initial calibration mark.
In another embodiment, the size condition and the adjacency condition may be applied simultaneously: only an initial calibration mark whose size is within the preset size range and which has at least two adjacent initial calibration marks is accepted as a final calibration mark. With these 2 constraints, the interference points among the initial calibration marks can be filtered out accurately, more accurate calibration mark positions are obtained, and the false detection rate of calibration mark detection is reduced.
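The combined size and adjacency filtering can be sketched in plain Python on candidate circles (x, y, r), as might be returned by Hough circle detection. The candidate list, radius range, and distance range below are illustrative assumptions; a naive pairwise distance check stands in for the gradient search mentioned in the text:

```python
import math

def filter_marks(candidates, r_min=3.0, r_max=10.0,
                 d_min=15.0, d_max=25.0, min_neighbors=2):
    """Keep candidates (x, y, r) whose radius lies in [r_min, r_max] and that
    have at least min_neighbors other candidates at a distance in [d_min, d_max]."""
    sized = [c for c in candidates if r_min <= c[2] <= r_max]
    final = []
    for (x, y, r) in sized:
        neighbors = sum(
            1 for (x2, y2, _) in sized
            if (x2, y2) != (x, y)
            and d_min <= math.hypot(x2 - x, y2 - y) <= d_max)
        if neighbors >= min_neighbors:
            final.append((x, y, r))
    return final

# A 2x2 grid of marks with 20 px spacing, plus two interference points:
# one with an implausible radius, one isolated far from the grid.
cands = [(0, 0, 5), (20, 0, 5), (0, 20, 5), (20, 20, 5),
         (10, 10, 40),    # radius outside the preset size range
         (200, 200, 5)]   # no other candidate within the distance range
kept = filter_marks(cands)
```

Only the four grid marks survive: the oversized candidate fails the size condition and the isolated one fails the adjacency condition.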
In another embodiment, before the position coordinates and/or the size of the initial calibration marks are determined, the regions where they are located may be optimized with a morphological algorithm so that the boundary of each region is determined accurately.
The morphological processing of an initial calibration mark region may use a morphological opening or a morphological closing operation. A morphological opening erodes the image and then dilates it; it can remove small objects, separate objects at thin connections, and smooth the boundary of larger objects without noticeably changing their area. A morphological closing dilates the image and then erodes it; it can remove small black holes (black regions).
The following describes the morphological processing of an initial calibration mark region, taking a morphological opening as an example.
Mathematically, the dilation or erosion operation convolves the initial calibration mark region with a kernel. Taking the partial fused-image region shown in fig. 3a as an example, each small square represents a pixel, the squares filled with oblique lines represent the initial calibration mark region, the white squares represent the background, and the 3 × 3 dashed square represents the kernel. The kernel may have any shape and size and is not limited to the 3 × 3 square shown in the figure; it contains a separately defined reference point, marked in the figure.
Erosion is the operation of finding a local minimum: the kernel is moved over the initial calibration mark region (fig. 3a), and at each position the minimum gray value of the pixels covered by the kernel is assigned to the pixel under the reference point. Fig. 3b shows the result of eroding fig. 3a; comparing the two figures shows that the oblique-line region shrinks after erosion.
Dilation is the operation of finding a local maximum: the kernel is moved over fig. 3b, and at each position the maximum gray value of the pixels covered by the kernel is assigned to the pixel under the reference point, so the oblique-line region in the image grows. Fig. 3c shows the result of dilating fig. 3b; the oblique-line region in fig. 3c is the optimized initial calibration mark region, and its boundary is the boundary of the initial calibration mark region.
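The erosion-then-dilation sequence just described can be sketched in NumPy for a binary mask (an illustrative sketch with a 3 × 3 kernel and centered reference point, as in figs. 3a-3c; edge padding at the borders is an assumption):

```python
import numpy as np

def erode(mask: np.ndarray) -> np.ndarray:
    """3x3 erosion: each pixel takes the minimum of its neighborhood."""
    padded = np.pad(mask, 1, mode="edge")
    out = np.empty_like(mask)
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            out[y, x] = padded[y:y + 3, x:x + 3].min()
    return out

def dilate(mask: np.ndarray) -> np.ndarray:
    """3x3 dilation: each pixel takes the maximum of its neighborhood."""
    padded = np.pad(mask, 1, mode="edge")
    out = np.empty_like(mask)
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            out[y, x] = padded[y:y + 3, x:x + 3].max()
    return out

def opening(mask: np.ndarray) -> np.ndarray:
    """Morphological opening: erosion followed by dilation."""
    return dilate(erode(mask))

# A 4x4 foreground block (gray value 255) plus one isolated noise pixel.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 255
mask[0, 7] = 255          # isolated interference point
opened = opening(mask)
```

As the text describes, the opening removes the small isolated object while the larger block is first shrunk by the erosion and then restored by the dilation, leaving its area essentially unchanged.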
Corresponding to the embodiment of the image processing method, the invention also provides an embodiment of the image processing device.
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present invention, which may include:
a binary segmentation module 41, configured to obtain a calibration board image, and perform binary segmentation on the calibration board image by using a fixed threshold to obtain a first binary image;
the adaptive segmentation module 42 is configured to divide the calibration board image into a plurality of sub-images, perform adaptive threshold processing on each sub-image, and merge each sub-image subjected to the adaptive threshold processing into a second binary image;
a fusion module 43, configured to fuse the first binary image and the second binary image to obtain a fused image;
and the detection module 44 is configured to detect the calibration marks in the fused image.
Optionally, the fusion module comprises:
the operation unit is used for performing a difference operation between the gray value of each pixel point in the first binary image and the gray value of the corresponding pixel point in the second binary image;
and the determining unit is used for determining the result of the difference operation as the gray value of each pixel point of the fused image.
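A possible reading of this difference-based fusion is sketched below; it is an assumption-laden illustration, not the patent's code. In particular, taking the absolute value of the difference (so that pixels where the two binary images disagree become 255) is an assumption the patent leaves unspecified.

```python
import numpy as np

def fuse_by_difference(first, second):
    # Subtract corresponding pixel gray values of the two binary images;
    # the (absolute) difference becomes the gray value of the fused image.
    # Cast to a signed type first to avoid uint8 wrap-around.
    diff = first.astype(np.int16) - second.astype(np.int16)
    return np.abs(diff).astype(np.uint8)

first = np.array([[255, 0], [255, 0]], dtype=np.uint8)
second = np.array([[255, 255], [0, 0]], dtype=np.uint8)
fused = fuse_by_difference(first, second)  # pixels where the images disagree become 255
```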
Optionally, the fusion module comprises:
a reverse processing unit configured to perform reverse binarization processing on the second binary image;
the operation unit is used for weighting and superposing the gray value of each pixel point in the second binary image subjected to reverse binarization processing and the gray value of the corresponding pixel point in the first binary image;
and the determining unit is used for determining the weighted superposition result as the gray value of each pixel point of the fused image.
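The second fusion variant — inverting the second binary image and then superposing it on the first with weights — could look like the sketch below. The equal 0.5/0.5 weights are an illustrative assumption; the patent does not fix them.

```python
import numpy as np

def fuse_weighted(first, second, w1=0.5, w2=0.5):
    # Reverse (inverse) binarization of the second binary image: 255 <-> 0.
    inverted = 255 - second
    # Weighted superposition with the first binary image; the weighted
    # sum is taken as the gray value of the fused image.
    fused = w1 * first.astype(np.float64) + w2 * inverted.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)

first = np.array([[255, 0]], dtype=np.uint8)
second = np.array([[0, 255]], dtype=np.uint8)
fused = fuse_weighted(first, second)
```

With equal weights, a pixel keeps gray value 255 only where the first binary image is foreground and the second is background, which matches the intent of combining a global and an adaptive segmentation.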
Optionally, the detection module includes:
the determining unit is used for determining the position coordinates and/or the size of a region of which the gray value is within a preset range in the fused image;
the identification unit is used for identifying the area of which the position coordinates and/or the size meet the preset conditions as the area of the calibration mark; wherein the preset conditions include: the size is within a preset size range; the number of other position coordinates which are within a preset distance range from the position coordinates is at least 2.
Optionally, the determination unit determines the position coordinates and/or the size of the region based on a hough circle detection algorithm.
Optionally, the identification unit determines the number of other location coordinates based on a gradient search algorithm.
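The identification unit's two preset conditions — a size within a preset size range, and at least two other position coordinates within a preset distance range — could be checked as below. The candidates stand in for Hough-circle detections as (x, y, radius) tuples; all numeric ranges are illustrative assumptions, and this plain distance check is a stand-in for the gradient search the patent mentions.

```python
import math

def is_calibration_mark(candidate, candidates,
                        size_range=(4.0, 12.0),
                        distance_range=(10.0, 40.0)):
    # Condition 1: the region's size (here, the circle radius) must lie
    # within the preset size range.
    x, y, r = candidate
    if not (size_range[0] <= r <= size_range[1]):
        return False
    # Condition 2: at least 2 other candidate coordinates must lie within
    # the preset distance range - calibration marks on a board have
    # neighbours at known spacings, while stray blobs usually do not.
    neighbours = sum(
        1 for (ox, oy, _) in candidates
        if (ox, oy) != (x, y)
        and distance_range[0] <= math.hypot(ox - x, oy - y) <= distance_range[1]
    )
    return neighbours >= 2

# Three marks on a 20-pixel grid plus one isolated false detection.
grid = [(0, 0, 6.0), (20, 0, 6.0), (0, 20, 6.0), (100, 100, 6.0)]
```

Under these assumed ranges, the grid members pass both conditions while the isolated detection fails the neighbour count.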
Optionally, the detection module further includes:
and the optimization unit is used for optimizing the boundary of the region based on a morphological algorithm.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the corresponding parts of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of the invention, which one of ordinary skill in the art can understand and implement without inventive effort.
Fig. 5 is a schematic diagram of an electronic device according to an exemplary embodiment of the present invention, showing a block diagram of an exemplary electronic device 50 suitable for implementing embodiments of the invention. The electronic device 50 shown in fig. 5 is only an example and should not limit the functions or the scope of use of the embodiments of the present invention.
As shown in fig. 5, the electronic device 50 may be embodied in the form of a general purpose computing device, which may be, for example, a server device. The components of the electronic device 50 may include, but are not limited to: the at least one processor 51, the at least one memory 52, and a bus 53 connecting the various system components (including the memory 52 and the processor 51).
The bus 53 includes a data bus, an address bus, and a control bus.
The memory 52 may include volatile memory, such as Random Access Memory (RAM) 521 and/or cache memory 522, and may further include Read Only Memory (ROM) 523.
Memory 52 may also include a program tool 525 (or utility) having a set (at least one) of program modules 524, such program modules 524 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The processor 51 executes various functional applications and data processing, such as the methods provided by any of the above embodiments, by running a computer program stored in the memory 52.
The electronic device 50 may also communicate with one or more external devices 54 (e.g., a keyboard, a pointing device, etc.). Such communication may occur through an input/output (I/O) interface 55. Moreover, the electronic device 50 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via a network adapter 56. As shown, the network adapter 56 communicates with the other modules of the electronic device 50 over the bus 53. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 50, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, and data backup storage systems.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more of the units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided among, and embodied by, a plurality of units/modules.
The embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method provided in any of the above embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (13)

1. An image processing method, comprising:
obtaining a calibration board image, and performing binary segmentation on the calibration board image by using a fixed threshold to obtain a first binary image;
dividing the calibration board image into a plurality of sub-images, performing adaptive threshold processing on each sub-image, and merging the sub-images subjected to adaptive threshold processing into a second binary image;
fusing the first binary image and the second binary image to obtain a fused image;
and detecting a calibration mark in the fused image.
2. The image processing method according to claim 1, wherein fusing the first binary image and the second binary image comprises:
performing difference operation on the gray value of each pixel point in the first binary image and the gray value of the corresponding pixel point in the second binary image;
and taking the result of the difference operation as the gray value of each pixel point of the fused image.
3. The image processing method according to claim 1, wherein fusing the first binary image and the second binary image comprises:
carrying out reverse binarization processing on the second binary image;
weighting and superposing the gray value of each pixel point in the second binary image subjected to reverse binarization and the gray value of the corresponding pixel point in the first binary image;
and taking the weighted superposition result as the gray value of each pixel point of the fused image.
4. The image processing method according to claim 1, wherein detecting the calibration mark in the fused image comprises:
determining the position coordinates and/or the size of a region with the gray value within a preset range in the fused image;
identifying the area of which the position coordinates and/or the size meet the preset conditions as the area of the calibration identification; wherein the preset conditions include: the size is within a preset size range; the number of other position coordinates which are within a preset distance range from the position coordinates is at least 2.
5. The image processing method according to claim 4, characterized in that the position coordinates and/or the size of the region are determined based on a Hough circle detection algorithm.
6. The image processing method according to claim 4, characterized in that the number of other location coordinates is determined based on a gradient search algorithm.
7. The image processing method according to claim 4, wherein, before determining the position coordinates and/or the size of the region, the method further comprises:
optimizing the boundaries of the region based on a morphological algorithm.
8. An image processing apparatus characterized by comprising:
the binary segmentation module is used for acquiring a calibration board image and performing binary segmentation on the calibration board image by using a fixed threshold to obtain a first binary image;
the adaptive segmentation module is used for dividing the calibration board image into a plurality of sub-images, performing adaptive threshold processing on each sub-image, and merging the sub-images subjected to adaptive threshold processing into a second binary image;
the fusion module is used for fusing the first binary image and the second binary image to obtain a fused image;
and the detection module is used for detecting the calibration mark in the fused image.
9. The image processing apparatus according to claim 8, wherein the fusion module comprises:
the operation unit is used for performing difference operation on the gray value of each pixel point in the first binary image and the gray value of the corresponding pixel point in the second binary image;
and the determining unit is used for determining the result of the difference operation as the gray value of each pixel point of the fused image.
10. The image processing apparatus according to claim 8, wherein the fusion module comprises:
a reverse processing unit configured to perform reverse binarization processing on the second binary image;
the operation unit is used for weighting and superposing the gray value of each pixel point in the second binary image subjected to reverse binarization processing and the gray value of the corresponding pixel point in the first binary image;
and the determining unit is used for determining the weighted superposition result as the gray value of each pixel point of the fused image.
11. The image processing apparatus according to claim 8, wherein the detection module comprises:
the determining unit is used for determining the position coordinates and/or the size of a region of which the gray value is within a preset range in the fused image;
the identification unit is used for identifying the area of which the position coordinates and/or the size meet the preset conditions as the area of the calibration mark; wherein the preset conditions include: the size is within a preset size range; the number of other position coordinates which are within a preset distance range from the position coordinates is at least 2.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the image processing method of any one of claims 1 to 7 when executing the computer program.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the image processing method of any one of claims 1 to 7.
CN202010974907.3A 2020-09-16 2020-09-16 Image processing method and device, electronic equipment and storage medium Active CN112184723B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010974907.3A CN112184723B (en) 2020-09-16 2020-09-16 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010974907.3A CN112184723B (en) 2020-09-16 2020-09-16 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112184723A true CN112184723A (en) 2021-01-05
CN112184723B CN112184723B (en) 2024-03-26

Family

ID=73921351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010974907.3A Active CN112184723B (en) 2020-09-16 2020-09-16 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112184723B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013128617A1 (en) * 2012-03-01 2013-09-06 株式会社日本マイクロニクス Display unevenness detection method and device for display device
CN104966302A (en) * 2015-07-09 2015-10-07 深圳中科智酷机器人科技有限公司 Detecting and positioning method for laser cross at any angle
CN105160652A (en) * 2015-07-10 2015-12-16 天津大学 Handset casing testing apparatus and method based on computer vision
CN105719275A (en) * 2015-12-10 2016-06-29 中色科技股份有限公司 Parallel combination image defect segmentation method
CN108036929A (en) * 2017-12-27 2018-05-15 上海玮舟微电子科技有限公司 A kind of detection method of display device row graph parameter, apparatus and system
CN108171756A (en) * 2017-12-27 2018-06-15 苏州多比特软件科技有限公司 Self-adapting calibration method, apparatus and terminal
CN109345597A (en) * 2018-09-27 2019-02-15 四川大学 A kind of camera calibration image-pickup method and device based on augmented reality
CN109559324A (en) * 2018-11-22 2019-04-02 北京理工大学 A kind of objective contour detection method in linear array images
CN109615659A (en) * 2018-11-05 2019-04-12 成都西纬科技有限公司 A kind of the camera parameters preparation method and device of vehicle-mounted multiple-camera viewing system
CN109903272A (en) * 2019-01-30 2019-06-18 西安天伟电子系统工程有限公司 Object detection method, device, equipment, computer equipment and storage medium
US20190206052A1 (en) * 2017-12-29 2019-07-04 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Carpal segmentation and recognition method and system, terminal and readable storage medium
KR20200000953A (en) * 2018-06-26 2020-01-06 주식회사 수올리나 Around view monitoring system and calibration method for around view cameras
WO2020010945A1 (en) * 2018-07-11 2020-01-16 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium
CN110879131A (en) * 2018-09-06 2020-03-13 舜宇光学(浙江)研究院有限公司 Imaging quality testing method and imaging quality testing device for visual optical system, and electronic apparatus
CN110895806A (en) * 2019-07-25 2020-03-20 研祥智能科技股份有限公司 Method and system for detecting screen display defects
CN111091571A (en) * 2019-12-12 2020-05-01 珠海圣美生物诊断技术有限公司 Nucleus segmentation method and device, electronic equipment and computer-readable storage medium
CN111340752A (en) * 2019-12-04 2020-06-26 京东方科技集团股份有限公司 Screen detection method and device, electronic equipment and computer readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220254065A1 (en) * 2021-02-09 2022-08-11 Shenzhen GOODIX Technology Co., Ltd. Camera calibration method and apparatus and electronic device
CN113762266A (en) * 2021-09-01 2021-12-07 北京中星天视科技有限公司 Target detection method, device, electronic equipment and computer readable medium
CN113762266B (en) * 2021-09-01 2024-04-26 北京中星天视科技有限公司 Target detection method, device, electronic equipment and computer readable medium

Also Published As

Publication number Publication date
CN112184723B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
CN110060237B (en) Fault detection method, device, equipment and system
CN111612781B (en) Screen defect detection method and device and head-mounted display equipment
CN108369650B (en) Method for identifying possible characteristic points of calibration pattern
CN109859226B (en) Detection method of checkerboard corner sub-pixels for graph segmentation
KR20130030220A (en) Fast obstacle detection
JP6483168B2 (en) System and method for efficiently scoring a probe in an image with a vision system
CN109447117B (en) Double-layer license plate recognition method and device, computer equipment and storage medium
CN112734761B (en) Industrial product image boundary contour extraction method
CN111695373B (en) Zebra stripes positioning method, system, medium and equipment
CN115018846B (en) AI intelligent camera-based multi-target crack defect detection method and device
CN112184723B (en) Image processing method and device, electronic equipment and storage medium
CN107341793A (en) A kind of target surface image processing method and device
CN115587966A (en) Method and system for detecting whether parts are missing or not under condition of uneven illumination
CN117557565B (en) Detection method and device for lithium battery pole piece
CN114674826A (en) Visual detection method and detection system based on cloth
CN114612418A (en) Method, device and system for detecting surface defects of mouse shell and electronic equipment
CN110175999A (en) A kind of position and posture detection method, system and device
CN110288040A (en) A kind of similar evaluation method of image based on validating topology and equipment
CN111178111A (en) Two-dimensional code detection method, electronic device, storage medium and system
CN109741370B (en) Target tracking method and device
CN116740062A (en) Defect detection method and system based on irregular rubber ring
CN116385567A (en) Method, device and medium for obtaining color card ROI coordinate information
CN113723432B (en) Intelligent identification and positioning tracking method and system based on deep learning
CN114882122A (en) Image local automatic calibration method and device and related equipment
CN113128499B (en) Vibration testing method for visual imaging device, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant