CN117710352A - Image processing method and device, electronic equipment and storage medium

Info

Publication number: CN117710352A
Application number: CN202311786617.6A
Inventors: 董飞 (Dong Fei), 董其波 (Dong Qibo)
Assignee: Suzhou Mega Technology Co Ltd
Original language: Chinese (zh)
Legal status: Pending

Abstract

The embodiment of the invention provides an image processing method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring first point cloud data of a target to be detected, wherein the target to be detected comprises at least one target object; adjusting coordinates of each data point in the first point cloud data in a first direction to obtain adjusted first point cloud data, and taking the adjusted first point cloud data as second point cloud data; converting the height information of at least part of data points in the second point cloud data into brightness information so as to obtain a brightness image corresponding to the target to be detected; and determining, based on the brightness image, a target object region in which each of the at least one target object is located. The scheme has high robustness and precision.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technology, and more particularly, to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
As the cornerstone of industrial development in the digital economy era, the semiconductor chip bears on national strategic security and industrial development security and is an important support for achieving self-reliance and self-improvement in science and technology. Semiconductor chips are applied mainly in computers, communications, automobiles, consumer electronics, industry, and other fields. With the development of semiconductor technology, improving chip performance and reliability while reducing chip cost has become an important trend in chip technology. Because the repair and rework cost of a fault discovered at each successive stage of the electronic product manufacturing process grows geometrically, packaging defects need to be discovered as early as possible in chip production to ensure that defective products do not flow into the next process.
A Ball Grid Array (BGA) is a type of chip in which an array of solder balls (pins) is fabricated on the bottom of the package substrate as the input/output (I/O) terminals of the circuit, interconnecting it with a printed circuit board (PCB). The height and coplanarity of the solder balls are the most important indicators of whether a BGA chip is qualified; when they are out of tolerance, they directly cause poor contact between the solder balls and the PCB, resulting in open connections and cold-solder joints. In the prior art, the conventional detection method mainly relies on manual visual inspection of the height and coplanarity of each solder ball under a microscope. This method places extremely high demands on the user and depends heavily on the user's own judgment, so the detection precision is not high. Alternatively, the height and coplanarity of each solder ball can be detected using X-rays, but equipment using X-ray detection algorithms is expensive to manufacture and has a narrow range of application.
Disclosure of Invention
The present invention has been made in view of the above-described problems. The invention provides an image processing method, an image processing apparatus, an electronic device, and a storage medium.
According to an aspect of the present invention, there is provided an image processing method including: acquiring first point cloud data of a target to be detected, wherein the target to be detected comprises at least one target object; adjusting coordinates of each data point in the first point cloud data in a first direction to obtain adjusted first point cloud data, and taking the adjusted first point cloud data as second point cloud data; converting the height information of at least part of data points in the second point cloud data into brightness information so as to obtain a brightness image corresponding to the target to be detected; based on the luminance image, a target object region in which each of the at least one target object is located is determined.
The target to be measured further includes a substrate, the at least one target object is located on the substrate, and before adjusting the coordinates of each data point in the first point cloud data in the first direction to obtain the adjusted first point cloud data as the second point cloud data, the method further includes: fitting the plane in which the substrate lies based on the coordinates of at least part of the data points in the first point cloud data to obtain a first reference plane; wherein the first direction is a direction perpendicular to the first reference plane.
Illustratively, adjusting coordinates of each data point in the first point cloud data in the first direction to obtain adjusted first point cloud data, and taking the adjusted first point cloud data as second point cloud data includes: constructing a first depth image representing the first reference plane based on parameters in a plane equation of the first reference plane; and subtracting the coordinates of each data point in the first depth image in the first direction from the coordinates of each data point in the first point cloud data in the first direction to obtain second point cloud data.
Illustratively, fitting the plane in which the substrate lies based on the coordinates of at least some of the data points in the first point cloud data to obtain the first reference plane includes: extracting, from the first point cloud data, data points located in a preset edge area of the target to be detected, wherein the preset edge area is an annular area, the outer edge of the annular area is the outer edge of the target to be detected, and the inner edge of the annular area is the edge obtained by shrinking the outer edge inward by a preset distance; and fitting the plane in which the substrate lies based on the coordinates of the extracted data points to obtain the first reference plane.
Illustratively, determining a target object region in which each of the at least one target object is located based on the luminance image includes: acquiring a template image of a target to be detected, wherein the template image comprises preset characteristic information, and the preset characteristic information is used for indicating the positions of regions of interest corresponding to at least one target object respectively; at least one target object is identified in the luminance image based on the template image to determine a target image area to which the at least one target object corresponds, respectively.
Illustratively, prior to acquiring the template image of the object under test, the method further comprises: determining a plurality of initial grid areas in the template image according to area reference information; wherein each initial grid area contains at most one target object, and the area reference information comprises: the spacing of the initial grid areas, the number of arrangement rows of the initial grid areas, the number of arrangement columns of the initial grid areas, and object presence information, the object presence information indicating whether the initial grid area at the corresponding position contains a target object; binarizing the pixel values of pixels in the plurality of initial grid areas, and determining a highlight area contained in each of the plurality of initial grid areas based on the binarization result, wherein a highlight area contains pixels with pixel values larger than a preset brightness threshold; and, for each of the plurality of initial grid areas, if the area of the highlight area contained in the initial grid area is larger than a preset area threshold, determining that the initial grid area is a region of interest containing a target object, and otherwise determining that it is not.
Illustratively, determining a target object region in which each of the at least one target object is located based on the luminance image includes: identifying at least one target object in the luminance image to determine an initial object region in which each of the at least one target object is located; for each target object in at least one target object, performing contour fitting on an initial object area corresponding to the target object, determining a circumcircle of the initial object area corresponding to the target object based on a contour fitting result, and determining the radius of the circumcircle as a fitting radius corresponding to the target object; determining a comprehensive fitting radius based on the fitting radius corresponding to each of the at least one target object; and for each target object in at least one target object, adjusting an initial object area corresponding to the target object to obtain a target object area corresponding to the target object, wherein when the initial object area is adjusted, the center of the initial object area is kept unchanged, and the comprehensive fitting radius is determined as a new radius of the initial object area.
Illustratively, after determining a target object region in which each of the at least one target object is located based on the luminance image, the method further comprises: adjusting coordinates of each data point in the second point cloud data in the second direction to obtain adjusted second point cloud data, and taking the adjusted second point cloud data as third point cloud data; for each target object in the at least one target object, determining the height of the target object based on the height information of at least part of data points in the third point cloud data, which are located in the target object area corresponding to the target object.
The target to be measured further includes a substrate, the at least one target object is located on the substrate, and before adjusting the coordinates of each data point in the second point cloud data in the second direction to obtain the adjusted second point cloud data as the third point cloud data, the method further includes: deleting the data points contained in the target object areas corresponding to the at least one target object from the second point cloud data to acquire planar point cloud data; fitting the plane in which the substrate lies based on the coordinates of each data point in the planar point cloud data to obtain a second reference plane; wherein the second direction is a direction perpendicular to the second reference plane.
Illustratively, adjusting coordinates of each data point in the second point cloud data in the second direction to obtain adjusted second point cloud data, and taking the adjusted second point cloud data as third point cloud data, including: constructing a second depth image representing the second reference plane based on parameters in a plane equation of the second reference plane; and subtracting the coordinates of each data point in the second depth image in the second direction from the coordinates of each data point in the second point cloud data in the second direction to obtain third point cloud data.
Illustratively, after determining, for each of the at least one target object, the height of the target object based on height information of at least a portion of the data points in the third point cloud data that are located within the target object region to which the target object corresponds, the method further comprises: a coplanarity of the at least one target object is determined based on the respective heights of the at least one target object, the coplanarity being indicative of a degree of difference between the respective heights of the at least one target object.
For each target object of the at least one target object, determining the height of the target object based on the height information of at least part of the data points in the third point cloud data located in the target object area corresponding to the target object includes: for each target object of the at least one target object, determining object point cloud data corresponding to the target object, wherein the object point cloud data comprises the data points of the third point cloud data that lie in the target object area corresponding to the target object; deleting invalid data points in the object point cloud data corresponding to the target object; and determining the height of the target object based on at least part of the data points in the deleted object point cloud data; wherein the invalid data points comprise the data points positioned after the abrupt data point when the data points are sorted by height information in ascending order, the abrupt data point being the first data point, in that ascending order, whose height information differs from that of the previous data point by more than a preset height threshold.
Illustratively, determining the height of the target object based on at least part of the data points in the deleted object point cloud data includes: averaging the height information of at least part of the data points in the deleted object point cloud data, and determining the averaged result as the height of the target object; wherein said at least part of the data points are the preset number, or preset proportion, of data points with the largest height information in the deleted object point cloud data.
Illustratively, converting the height information of at least part of the data points in the second point cloud data into brightness information to obtain a brightness image corresponding to the object to be measured includes: and carrying out normalization processing on the height information of the data points, the height information of which is in the preset height range, in the second point cloud data so as to obtain a brightness image.
According to another aspect of the present invention, there is also provided an image processing apparatus including: the acquisition module is used for acquiring first point cloud data of a target to be detected, wherein the target to be detected comprises at least one target object; the adjusting module is used for adjusting the coordinates of each data point in the first point cloud data in the first direction to obtain adjusted first point cloud data serving as second point cloud data; the conversion module is used for converting the height information of at least part of data points in the second point cloud data into brightness information so as to obtain a brightness image corresponding to the target to be detected; and the determining module is used for determining a target object area where each target object in the at least one target object is located based on the brightness image.
According to yet another aspect of the present invention, there is also provided an electronic device comprising a processor and a memory, the memory having stored therein computer program instructions which, when executed by the processor, are adapted to carry out the above-described image processing method.
According to still another aspect of the present invention, there is also provided a storage medium storing a computer program/instruction for executing the above-described image processing method when executed.
According to the image processing method, the image processing apparatus, the electronic device and the storage medium provided by the embodiments of the invention, adjusted second point cloud data can be obtained by adjusting the coordinates of each data point in the first point cloud data in the first direction. By converting the height information of at least part of the data points in the second point cloud data into brightness information, a brightness image corresponding to the target to be detected can be obtained. A target object region in which each of the at least one target object is located may then be determined based on the brightness image. In this scheme, adjusting the first point cloud data along the first direction adjusts the height difference between each target object and the plane in which it lies, and thus the contrast between the height of the target object and the height of that plane, so that the target object area corresponding to each target object can be determined in at least part of the image. The scheme therefore has high robustness and precision.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following more particular description of embodiments of the present invention, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the invention, are incorporated in and constitute a part of this specification, and serve, together with the embodiments of the invention, to explain the invention without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 shows a schematic flow chart of an image processing method according to one embodiment of the invention;
FIG. 2 shows a schematic diagram of a luminance image according to one embodiment of the invention;
FIG. 3 illustrates a partial side view of a target under test according to one embodiment of the invention;
FIG. 4 shows a schematic diagram of an initial depth image according to one embodiment of the invention;
FIG. 5 illustrates a schematic diagram of a plurality of initial grid areas in accordance with one embodiment of the invention;
FIG. 6 shows a schematic diagram of the binarization result according to an embodiment of the present invention;
FIG. 7 shows a schematic diagram of a template image according to one embodiment of the invention;
FIG. 8 illustrates a schematic view of coplanarity of at least one target object in accordance with one embodiment of the present invention;
FIG. 9 shows a schematic representation of coplanarity of at least one target object according to another embodiment of the present invention;
FIG. 10 shows a schematic block diagram of an image processing apparatus according to an embodiment of the present invention; and
FIG. 11 shows a schematic block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some, not all, embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. Based on the embodiments of the invention described in the present application, all other embodiments that a person skilled in the art would obtain without inventive effort shall fall within the scope of the invention.
In order to solve the above problems at least in part, an embodiment of the present invention provides an image processing method. Fig. 1 shows a schematic flow chart of an image processing method 100 according to an embodiment of the invention. As shown in fig. 1, the method 100 may include the following steps S110, S120, S130, and S140.
In step S110, first point cloud data of a target to be measured is acquired, where the target to be measured may include at least one target object.
By way of example, the object to be measured may be any type of object, such as a wafer, a chip, an electronic component, etc. The following description mainly takes a chip as the example of the object to be measured. In one embodiment, an initial depth image of the object under test may be acquired using a depth camera (3D camera), and the first point cloud data of the target to be detected may be acquired from that depth image. For example, the surface of the object to be measured may be scanned with a line laser sensor to obtain an initial depth image (point cloud image) of the object to be measured. In another embodiment, a first to-be-measured image of the object may be acquired with any two-dimensional (2D) camera, such as a color camera, and then three-dimensionally reconstructed; the first point cloud data then corresponds to the initial depth image obtained by the three-dimensional reconstruction. In step S110, corresponding first point cloud data is acquired for a target to be detected. For example, all of the point cloud data corresponding to the initial depth image Img1 collected or generated for the target to be measured may be used as the first point cloud data, or part of it may be selected as the first point cloud data. Optionally, after all the point cloud data corresponding to the initial depth image Img1 is acquired, it may be preprocessed, at least part of the preprocessed point cloud data may be used as the first point cloud data, and the subsequent steps (for example, step S120 onward) may be performed on the preprocessed point cloud data. Of course, the preprocessed point cloud data may also be acquired directly in step S110. Such preprocessing may include, but is not limited to: denoising, smoothing, enhancement, etc. The first point cloud data may be acquired in real time by a 3D camera or a 2D camera, or may be retrieved from a storage device of a host computer. The object to be measured may comprise at least one target object. For example, when the object to be tested is a chip, an array of solder balls is fabricated on the bottom of the chip substrate, and each solder ball can serve as a target object.
In step S120, coordinates of each data point in the first point cloud data in the first direction are adjusted to obtain adjusted first point cloud data, and the adjusted first point cloud data is used as second point cloud data.
Illustratively, the first direction may be any direction, e.g., a horizontal direction or a vertical direction. The coordinates of each data point in the first point cloud data may be represented as (x1i, y1i, z1i). Because the plane of the target to be measured is not completely parallel to the plane of the camera, a certain inclination may exist; therefore, the coordinates of each data point in the first point cloud data in the first direction are adjusted, and adjusted first point cloud data can be obtained. The first direction may be the direction of the shooting path of the camera when the image of the object to be measured is acquired. The adjusted first point cloud data may be used as the second point cloud data. The coordinates of each data point in the second point cloud data may be represented as (x2i, y2i, z2i).
Step S130, converting the height information of at least part of data points in the second point cloud data into brightness information to obtain a brightness image corresponding to the target to be detected.
Illustratively, the height information of at least some of the data points may be scaled into the range [0, 255] by means of a linear mapping, such that the converted value can be regarded as the gray value of the data point at the corresponding position in the luminance image.
For example, converting the height information of at least part of the data points in the second point cloud data into brightness information to obtain a brightness image corresponding to the object to be measured may include: and carrying out normalization processing on the height information of the data points, the height information of which is in the preset height range, in the second point cloud data so as to obtain a brightness image.
In one embodiment, the height information of the data points in the second point cloud data whose height information lies within a preset height range [m, n] may be normalized to [0, 255] to obtain the luminance image Img2. m and n may be set according to the respective heights of the at least one target object, which is not limited by the present invention. The height information of a data point in the second point cloud data may be determined by its coordinate z2i in the z direction. Fig. 2 shows a schematic diagram of a luminance image according to an embodiment of the invention. As shown in fig. 2, the larger the coordinate z2i of a data point of the second point cloud data in the z direction, the larger the converted luminance value of that data point, i.e., the brighter the data point.
According to the technical scheme, the brightness image can be obtained by carrying out normalization processing on the height information of the data points, the height information of which is in the preset height range, in the second point cloud data. The method can increase the contrast ratio of at least one target object and the plane where the target object to be measured is positioned by utilizing the height difference of the at least one target object and the plane where the target object to be measured is positioned, so that the target object area where each target object is positioned can be accurately obtained later.
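For illustration only, the linear height-to-luminance mapping described above can be sketched as follows in Python. This is a minimal sketch, not the patented implementation; the function name, the NumPy representation of the point cloud heights as a depth-image array, and the clipping of out-of-range heights are assumptions made for the example.

```python
import numpy as np

def height_to_luminance(z, m, n):
    """Map height values within the preset range [m, n] linearly to [0, 255].

    z: 2D array of per-pixel heights (the z-coordinates of the second
       point cloud data arranged as a depth image).
    Heights outside [m, n] are clipped to the range boundaries here;
    the patent leaves their handling open.
    """
    z_clipped = np.clip(z, m, n)
    luminance = (z_clipped - m) / (n - m) * 255.0  # normalize to [0, 255]
    return luminance.astype(np.uint8)

# Higher data points appear brighter in the resulting luminance image.
heights = np.array([[0.00, 0.10], [0.25, 0.30]])
img2 = height_to_luminance(heights, m=0.0, n=0.3)
print(img2)  # [[  0  85] [212 255]]
```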
Step S140, determining a target object area where each of the at least one target object is located based on the luminance image.
Illustratively, according to the obtained brightness image Img2, target detection may be performed on the at least one target object in the brightness image Img2 by a target detection algorithm to determine the target object region in which each of the at least one target object is located. Any suitable existing or future target detection algorithm may be used. The target object region in which each of the at least one target object is located may also be determined by means of template matching, for example.
According to the above technical scheme, adjusted second point cloud data can be obtained by adjusting the coordinates of each data point in the first point cloud data in the first direction. By converting the height information of at least part of the data points in the second point cloud data into brightness information, a brightness image corresponding to the target to be detected can be obtained. A target object region in which each of the at least one target object is located may then be determined based on the brightness image. In this scheme, adjusting the first point cloud data along the first direction adjusts the height difference between each target object and the plane in which it lies, and thus the contrast between the height of the target object and the height of that plane, so that the target object area corresponding to each target object can be determined in at least part of the image. The scheme therefore has high robustness and precision.
The target to be measured may further include a substrate, the at least one target object is located on the substrate, and before adjusting the coordinates of each data point in the first point cloud data in the first direction to obtain the adjusted first point cloud data as the second point cloud data, the method may further include: fitting the plane in which the substrate lies based on the coordinates of at least part of the data points in the first point cloud data to obtain a first reference plane; wherein the first direction is a direction perpendicular to the first reference plane.
In one embodiment, FIG. 3 illustrates a partial side view of a target under test according to one embodiment of the invention. As shown in fig. 3, the gray spheres indicated by arrow 310 may represent the target objects, and the area indicated by the white rectangular box 320 may represent where the substrate is located. Each target object in the target to be measured is located on the substrate. Based on the coordinates of all or part of the data points in the first point cloud data, the plane in which the substrate lies is fitted to obtain a first reference plane Pa1. The first direction is the direction perpendicular to the first reference plane Pa1. For example, the user may set a region of interest ROI1, and the plane in which the substrate lies is fitted based on the coordinates of the data points of the first point cloud data that lie within the region of interest ROI1. As another example, the plane may be fitted based on the coordinates of all data points in the first point cloud data. In one embodiment of the present invention, the plane in which the substrate lies may be fitted based on the coordinates of the data points of the first point cloud data within the region of interest ROI1, whose coordinates (x11, y11, z11), (x12, y12, z12), ..., (x1j, y1j, z1j) are acquired. Illustratively, the first reference plane Pa1 may be obtained by least-squares fitting based on the coordinates of all data points within the region of interest ROI1. The plane equation of the first reference plane Pa1 can be expressed as: z = Ax + By + C.
According to the technical scheme, the first reference plane can be obtained by fitting the plane where the matrix is located based on the coordinates of at least part of data points in the first point cloud data. Therefore, the first reference plane can be simply, conveniently and rapidly acquired, and the efficiency is high.
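For illustration only, the least-squares fit mentioned above can be sketched as follows, assuming the plane model z = Ax + By + C used in the surrounding description; the function and variable names are illustrative assumptions, not part of the patent.

```python
import numpy as np

def fit_reference_plane(points):
    """Least-squares fit of a plane z = A*x + B*y + C to 3D data points.

    points: (N, 3) array of (x, y, z) coordinates, e.g. the data points
    of the first point cloud lying within the region of interest ROI1.
    Returns the plane parameters (A, B, C).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Solve the overdetermined system [x y 1] @ [A B C]^T = z.
    design = np.column_stack([x, y, np.ones_like(x)])
    (A, B, C), *_ = np.linalg.lstsq(design, z, rcond=None)
    return A, B, C
```

The returned A, B, C then play the role of the parameters in the plane equation of the first reference plane Pa1 used in the next step.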
Illustratively, adjusting coordinates of each data point in the first point cloud data in the first direction to obtain adjusted first point cloud data, and taking the adjusted first point cloud data as second point cloud data may include: constructing a first depth image representing the first reference plane based on parameters in a plane equation of the first reference plane; and subtracting the coordinates of each data point in the first depth image in the first direction from the coordinates of each data point in the first point cloud data in the first direction to obtain second point cloud data.
In one embodiment, based on the parameters A, B, C in the plane equation of the first reference plane Pa1, a first depth image ImgS representing the first reference plane Pa1 can be constructed. For example, the image size of the first depth image ImgS may be the same as the image size of the initial depth image Img1 corresponding to the target to be measured. The construction of the first depth image ImgS may be expressed as: ImgS(r, c) = A(r - Row) + B(c - Column) + C, where Row may represent the abscissa of the center point of the region of interest ROI1, Column may represent the ordinate of the center point of the region of interest ROI1, and (r, c) represents a coordinate point in the first depth image ImgS. The coordinates of each data point in the first depth image in the first direction are subtracted from the coordinates of the corresponding data point in the first point cloud data in the first direction, and the second point cloud data can be obtained: the corrected depth image ImgC = Img1 - ImgS. The corrected depth image ImgC contains the second point cloud data.
In one embodiment, the corrected depth image ImgC is normalized, and the luminance image Img2 may be obtained. For example, if the height information of any data point (r', c') in the corrected depth image ImgC is g, the luminance information of the corresponding data point in the converted luminance image Img2 is g' = g × mult + add, where mult = 255/(n' - m') and add = -mult × m', so that heights within the range [m', n'] are mapped linearly to [0, 255].
According to the above technical solution, the first depth image is constructed based on parameters in a plane equation of the first reference plane. And then subtracting the coordinates of each data point in the first depth image in the first direction from the coordinates of each data point in the first point cloud data in the first direction to obtain second point cloud data. The method can obtain the second point cloud data more accurately and simply.
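For illustration only, a sketch of constructing ImgS and subtracting it from Img1, assuming both are NumPy arrays of the same size; the function and parameter names are assumptions for the example.

```python
import numpy as np

def correct_depth_image(img1, A, B, C, row_c, col_c):
    """Build the first depth image ImgS from the plane parameters and
    subtract it from the initial depth image Img1.

    img1:  (H, W) initial depth image containing the first point cloud.
    A, B, C: parameters of the fitted first reference plane.
    row_c, col_c: row/column coordinates of the ROI1 center point.
    Returns the corrected depth image ImgC = Img1 - ImgS.
    """
    h, w = img1.shape
    r = np.arange(h).reshape(-1, 1)  # row indices
    c = np.arange(w).reshape(1, -1)  # column indices
    img_s = A * (r - row_c) + B * (c - col_c) + C  # ImgS(r, c)
    return img1 - img_s
```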
Illustratively, fitting the plane in which the substrate lies based on the coordinates of at least a portion of the data points in the first point cloud data to obtain the first reference plane may include: extracting, from the first point cloud data, data points located in a preset edge area of the target to be detected, wherein the preset edge area is an annular area, the outer edge of the annular area is the outer edge of the target to be detected, and the inner edge of the annular area is the edge obtained by shrinking the outer edge inward by a preset distance; and fitting the plane in which the substrate lies based on the coordinates of the extracted data points to obtain the first reference plane.
In one embodiment, data points located in the preset edge area of the object to be measured can be extracted from the first point cloud data, and the plane in which the substrate lies is fitted based on the coordinates of the extracted data points. Fig. 4 shows a schematic diagram of an initial depth image according to an embodiment of the invention. As shown in fig. 4, the initial depth image Img1 may include the first point cloud data. The annular region 410 shown in fig. 4 is the preset edge region. The outer edge of the annular region 410 is the outer edge of the object to be measured. The inner edge of the annular region 410 is the edge obtained by shrinking the outer edge of the annular region inward by a preset distance. The preset distance may be set to any value less than or equal to w/2 according to the image width w of the initial depth image Img1, which is not limited by the present invention. It will be appreciated that the region of interest ROI1 described above may be the annular region 410 here. For the implementation of fitting the plane in which the substrate lies based on the coordinates of the extracted data points, reference may be made to the above description of fitting the plane based on the coordinates of at least some data points in the first point cloud data to obtain the first reference plane, which is not repeated here for brevity. The plane in which the substrate lies is fitted to obtain the first reference plane Pa1.
According to the technical scheme, data points located in the preset edge area of the target to be detected are extracted from the first point cloud data, and then the plane where the matrix is located is fitted based on coordinates of the data points in the preset edge area, so that a first reference plane is obtained. The method can avoid taking the point cloud data corresponding to the target object as the data point of plane fitting so as to ensure the accuracy of the obtained first reference plane.
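For illustration only, a sketch of extracting the annular edge region, assuming the depth image covers exactly the object to be measured so that the image border coincides with the object's outer edge; the function name and the point layout are assumptions.

```python
import numpy as np

def edge_region_points(img1, preset_distance):
    """Extract the data points lying in the preset annular edge region.

    The outer edge of the ring is the outer edge of the image of the
    target to be measured; the inner edge is obtained by shrinking the
    outer edge inward by `preset_distance` pixels (at most w / 2).
    Returns an (N, 3) array of (row, col, height) points for plane fitting.
    """
    h, w = img1.shape
    mask = np.ones((h, w), dtype=bool)
    d = preset_distance
    mask[d:h - d, d:w - d] = False  # carve out the interior -> ring remains
    rows, cols = np.nonzero(mask)
    return np.column_stack([rows, cols, img1[rows, cols]])
```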
Illustratively, determining a target object region in which each of the at least one target object is located based on the luminance image may include: acquiring a template image of a target to be detected, wherein the template image comprises preset characteristic information, and the preset characteristic information is used for indicating the positions of regions of interest corresponding to at least one target object respectively; at least one target object is identified in the luminance image based on the template image to determine a target image area to which the at least one target object corresponds, respectively.
In one embodiment, the template image may be an RGB image or a grayscale image. The template image can be a static image or any video frame in a dynamic video. In addition, the template image can be an original image acquired by the image acquisition device, or an image after preprocessing the original image acquired by the image acquisition device, wherein the preprocessing can comprise the operations of digitizing, geometrically transforming, normalizing, filtering and the like on the original image. The template image may include preset feature information. The preset feature information may indicate a location of a region of interest to which the at least one target object corresponds, respectively. For example, a labeling frame of the region of interest corresponding to each target object may be included in the template image, or image coordinates of the region of interest corresponding to each target object may be included in the template image. Template matching of the luminance image Img2 with the template image makes it possible to identify at least one target object from the luminance image Img 2. Based on the at least one object obtained by the recognition, a respective target image area of each object in the luminance image Img2 may be determined.
According to the technical scheme, the template image of the target to be detected is obtained, and then at least one target object is identified in the brightness image based on the template image, so that the target image area corresponding to each at least one target object is determined. The method is simple to operate, easy to implement and high in efficiency.
Illustratively, prior to acquiring the template image of the object to be measured, the method may further include: determining a plurality of initial grid areas in the template image according to area reference information; wherein each initial grid area contains at most one target object, and the area reference information may include: the spacing of the initial grid areas, the number of arrangement rows of the initial grid areas, the number of arrangement columns of the initial grid areas, and object presence information, the object presence information indicating whether the initial grid area at the corresponding position contains a target object; binarizing the pixel values of pixels in the plurality of initial grid areas, and determining a highlight area contained in each of the plurality of initial grid areas based on the binarization result, wherein a highlight area contains pixels with pixel values larger than a preset brightness threshold; and, for each of the plurality of initial grid areas, if the area of the highlight area contained in the initial grid area is larger than a preset area threshold, determining that the initial grid area is a region of interest containing a target object, and otherwise determining that it is not.
In one embodiment, a plurality of initial grid regions may be determined in the template image based on the area reference information. Illustratively, the area reference information may include: the spacing of the initial grid areas, the number of arrangement rows of the initial grid areas, the number of arrangement columns of the initial grid areas, and the object presence information. For example, if the horizontal pitch and the vertical pitch of the initial grid areas are both h, the number of arrangement rows is 20, the number of arrangement columns is 10, and the first column of the first row does not contain a target object, then 199 (20 × 10 - 1) initial grid areas can be generated, each of size h × h. FIG. 5 illustrates a schematic diagram of a plurality of initial grid areas in accordance with one embodiment of the present invention. As shown in fig. 5, the number of arrangement rows of the initial grid regions is 22, and the number of arrangement columns is 14. Based on a preset brightness threshold, the pixel values of pixels within the plurality of initial grid regions may be binarized. The preset brightness threshold may be set empirically, and the present invention is not limited in this regard. Fig. 6 shows a schematic diagram of the binarization result according to an embodiment of the present invention. As shown in fig. 6, the highlight region contained in each of the plurality of initial grid regions, such as the region indicated by arrow 610, may be determined based on the binarization result. A highlight region contains pixels with pixel values greater than the preset brightness threshold. Based on the area of the highlight region contained in each of the plurality of initial grid regions, it is determined whether each initial grid region is a region of interest containing a target object. For example, if the area a of the highlight region contained in an initial grid region is greater than or equal to a preset area threshold Ta, it may be determined that the initial grid region is a region of interest containing a target object. The preset area threshold Ta may be set according to the area of the initial grid region; for example, if the area of the initial grid region is b, the preset area threshold Ta may be equal to 0.8 × b or 0.9 × b, etc., and the present invention is not limited in this regard. FIG. 7 shows a schematic diagram of a template image according to one embodiment of the invention. After determining whether each initial grid region is a region of interest containing a target object, the template image may be obtained. In the obtained template image, the initial grid regions that are not regions of interest containing a target object have been deleted.
According to the above technical scheme, a plurality of initial grid areas can be determined in the template image according to the area reference information. The pixel values of the pixels in the plurality of initial grid regions are binarized, and the highlight regions contained in the initial grid regions are determined based on the binarization result. By comparing the area of the highlight region contained in each initial grid region with the preset area threshold, it can be determined whether that initial grid region is a region of interest containing a target object. The method can accurately and quickly filter out the initial grid areas that do not contain a target object.
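For illustration only, a sketch of the grid-and-binarization ROI selection, assuming the template is a grayscale uint8 image laid out on a regular grid of pitch h; the helper names and the OpenCV thresholding call are illustrative choices, not mandated by the patent.

```python
import numpy as np
import cv2

def select_roi_grid(template, pitch, n_rows, n_cols, present,
                    brightness_thresh, area_thresh):
    """Decide which initial grid cells are regions of interest.

    template:          grayscale (uint8) template image.
    pitch:             horizontal/vertical spacing h of the grid cells.
    n_rows, n_cols:    arrangement rows/columns of the grid.
    present[i][j]:     object presence information for cell (i, j).
    brightness_thresh: preset brightness threshold for binarization.
    area_thresh:       preset area threshold in pixels, e.g. 0.8 * pitch**2.
    Returns a list of (row, col) indices of cells kept as ROIs.
    """
    _, binary = cv2.threshold(template, brightness_thresh, 255,
                              cv2.THRESH_BINARY)
    rois = []
    for i in range(n_rows):
        for j in range(n_cols):
            if not present[i][j]:
                continue  # cell declared empty by the presence information
            cell = binary[i * pitch:(i + 1) * pitch,
                          j * pitch:(j + 1) * pitch]
            highlight_area = int(np.count_nonzero(cell))
            if highlight_area > area_thresh:
                rois.append((i, j))
    return rois
```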
Illustratively, determining a target object region in which each of the at least one target object is located based on the luminance image may include: identifying at least one target object in the luminance image to determine an initial object region in which each of the at least one target object is located; for each target object in at least one target object, performing contour fitting on an initial object area corresponding to the target object, determining a circumcircle of the initial object area corresponding to the target object based on a contour fitting result, and determining the radius of the circumcircle as a fitting radius corresponding to the target object; determining a comprehensive fitting radius based on the fitting radius corresponding to each of the at least one target object; and for each target object in at least one target object, adjusting an initial object area corresponding to the target object to obtain a target object area corresponding to the target object, wherein when the initial object area is adjusted, the center of the initial object area is kept unchanged, and the comprehensive fitting radius is determined as a new radius of the initial object area.
In one embodiment, the obtained template image may be used to perform template matching on the luminance image to identify the at least one target object in the luminance image. The template matching result may include the initial object region in which each of the at least one target object is located. Contour fitting is performed on the initial object region corresponding to each of the at least one target object to obtain the circumcircle of that initial object region. By way of example and not limitation, the initial object region may be contour-fitted using a contour fitting algorithm in the OpenCV library. The radius of the circumcircle obtained by contour fitting can be used as the fitting radius corresponding to the respective target object. The comprehensive fitting radius may be determined based on the respective fitting radii of the at least one target object. For example, the comprehensive fitting radius may be the mean of the fitting radii corresponding to the at least one target object, or may be the maximum or minimum of those fitting radii, which is not limited by the present invention. The initial object region corresponding to each target object is adjusted based on the determined comprehensive fitting radius so that the radius of the adjusted initial object region equals the comprehensive fitting radius. The adjusted initial object region may be used as the target object region corresponding to the target object.
According to the technical scheme, based on the initial object region where each target object is located, contour fitting is performed on the initial object region corresponding to the target object. And then determining the circumcircle of the initial object area corresponding to the target object based on the contour fitting result. And determining the comprehensive fitting radius according to the radius determination of the circumscribing circle of each initial object region, and adjusting the initial object region corresponding to each target object based on the comprehensive fitting radius to obtain the target object region corresponding to the target object. The method does not need complex operation, can simply and quickly determine the target object area corresponding to each target object, and is accurate.
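For illustration only, a sketch of the contour-fitting and radius-unification step, using OpenCV's minEnclosingCircle as the circumcircle; the bounding-box representation of the initial regions, the assumption that each region contains one bright blob, and the choice of the mean as the comprehensive radius are assumptions for the example.

```python
import numpy as np
import cv2

def unify_object_regions(binary, initial_regions):
    """Fit each initial object region's contour, take its circumcircle,
    and replace every region's radius by one comprehensive radius.

    binary:          binarized luminance image (uint8).
    initial_regions: list of (x, y, w, h) boxes from template matching,
                     each assumed to contain one bright blob.
    Returns a list of (cx, cy, r) circles sharing one common radius.
    """
    centers, radii = [], []
    for (x, y, w, h) in initial_regions:
        patch = binary[y:y + h, x:x + w]
        contours, _ = cv2.findContours(patch, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        largest = max(contours, key=cv2.contourArea)
        (cx, cy), r = cv2.minEnclosingCircle(largest)  # circumcircle
        centers.append((x + cx, y + cy))  # back to image coordinates
        radii.append(r)
    r_comprehensive = float(np.mean(radii))  # mean; max or min also possible
    return [(cx, cy, r_comprehensive) for (cx, cy) in centers]
```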
Illustratively, after determining the target object region in which each of the at least one target object is located based on the luminance image, the method may further include: adjusting coordinates of each data point in the second point cloud data in the second direction to obtain adjusted second point cloud data, and taking the adjusted second point cloud data as third point cloud data; for each target object in the at least one target object, determining the height of the target object based on the height information of at least part of data points in the third point cloud data, which are located in the target object area corresponding to the target object.
In an embodiment, the manner of obtaining the third point cloud data is similar to that of obtaining the second point cloud data in the previous embodiment, and for brevity is not repeated here. The coordinates of each data point in the third point cloud data may be expressed as (x3k, y3k, z3k). For each of the at least one target object, the height of the target object may be determined based on the height information of at least part of the data points in the third point cloud data located within the target object region corresponding to that target object. For example, the mean of the z-direction coordinates z3k of all data points within each target object region may be taken as the height of the target object corresponding to that region, or the maximum of the z-direction coordinates z3k of all data points within each target object region may be taken as that height.
According to the technical scheme, the coordinates of each data point in the second point cloud data in the second direction are adjusted, so that the third point cloud data can be obtained. Then, for each target object in the at least one target object, determining the height of the target object based on the height information of at least part of data points in the third point cloud data, which are located in the target object area corresponding to the target object. The method can carry out secondary adjustment on the point cloud data so as to ensure the accuracy of the obtained third point cloud data. Further, the height of the corresponding target object is determined based on the height information of at least part of data points located in each target object area in the third point cloud data, so that the obtained height of the target object can be ensured to be more accurate and reliable.
The target to be measured may further include a substrate, the at least one target object is located on the substrate, and before adjusting the coordinates of each data point in the second point cloud data in the second direction to obtain the adjusted second point cloud data as the third point cloud data, the method may further include: deleting the data points contained in the target object areas corresponding to the at least one target object from the second point cloud data to acquire planar point cloud data; fitting the plane in which the substrate lies based on the coordinates of each data point in the planar point cloud data to obtain a second reference plane; wherein the second direction is a direction perpendicular to the second reference plane.
In one embodiment, planar point cloud data may be acquired by deleting from the second point cloud data the data points contained in the target object area corresponding to each of the at least one target object. The planar point cloud data may represent the point cloud data corresponding to the substrate. The implementation of fitting the plane in which the substrate lies based on the coordinates of each data point in the planar point cloud data to obtain the second reference plane Pa2 is similar to the previous implementation of obtaining the first reference plane Pa1, and for brevity is not repeated here. The second direction is the direction perpendicular to the second reference plane. It will be appreciated that if the first reference plane Pa1 is parallel to the second reference plane Pa2, the first direction and the second direction are the same direction.
According to the technical scheme, data points contained in the target object areas corresponding to at least one target object are deleted from the second point cloud data, and plane point cloud data are obtained. And then fitting the plane where the matrix is based on the coordinates of each data point in the plane point cloud data so as to obtain a second reference plane. The method determines the planar point cloud data based on the adjusted second point cloud data, so that the reliability of a second reference plane obtained by performing plane fitting based on the planar point cloud data can be ensured.
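For illustration only, a sketch of deleting the object-region points and refitting the substrate plane, reusing a plane-fitting routine like the earlier sketch; representing the target object regions as circles (cx, cy, r) is an assumption carried over from the previous example.

```python
import numpy as np

def fit_second_reference_plane(img_c, object_circles, fit_plane):
    """Delete the data points inside every target object region from the
    second point cloud, then refit the substrate plane.

    img_c:          corrected depth image ImgC (second point cloud data).
    object_circles: list of (cx, cy, r) target object regions.
    fit_plane:      plane-fitting routine, e.g. fit_reference_plane above.
    Returns the parameters of the second reference plane.
    """
    h, w = img_c.shape
    rows, cols = np.mgrid[0:h, 0:w]
    keep = np.ones((h, w), dtype=bool)
    for (cx, cy, r) in object_circles:
        keep &= (cols - cx) ** 2 + (rows - cy) ** 2 > r ** 2
    planar = np.column_stack([rows[keep], cols[keep], img_c[keep]])
    return fit_plane(planar)  # planar point cloud data -> second plane
```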
Illustratively, adjusting coordinates of each data point in the second point cloud data in the second direction to obtain adjusted second point cloud data, and taking the adjusted second point cloud data as third point cloud data may include: constructing a second depth image representing the second reference plane based on parameters in a plane equation of the second reference plane; and subtracting the coordinates of each data point in the second depth image in the second direction from the coordinates of each data point in the second point cloud data in the second direction to obtain third point cloud data.
In one embodiment, the implementation and technical effect of constructing the second depth image and obtaining the third point cloud data can be understood from the above description of constructing a first depth image representing the first reference plane based on the parameters in the plane equation of the first reference plane, and subtracting the coordinates of each data point in the first depth image in the first direction from the coordinates of each data point in the first point cloud data in the first direction to obtain the second point cloud data; for brevity, this is not repeated here.
Illustratively, after determining, for each of the at least one target object, the height of the target object based on the height information of at least a portion of the data points in the third point cloud data that are located within the target object region to which the target object corresponds, the method may further include: a coplanarity of the at least one target object is determined based on the respective heights of the at least one target object, the coplanarity being indicative of a degree of difference between the respective heights of the at least one target object.
In one embodiment, the respective heights of the at least one target object may be represented as Z1, Z2, ..., Zn. FIG. 8 illustrates a schematic diagram of coplanarity of at least one target object in accordance with one embodiment of the present invention. Referring to fig. 8, the coplanarity Co1 of the at least one target object may be represented by the difference between the maximum height Zmax and the minimum height Zmin among the respective heights, that is, Co1 = max([Z1, Z2, ..., Zn]) - min([Z1, Z2, ..., Zn]). Fig. 9 shows a schematic representation of coplanarity of at least one target object according to another embodiment of the present invention. Referring to fig. 9, the coplanarity Co2 of the at least one target object may also be represented by the differences between the respective heights and the average height Zmean, that is, Co2 = [Z1, Z2, ..., Zn] - mean([Z1, Z2, ..., Zn]).
According to the above technical solution, the coplanarity of the at least one target object can be determined based on the respective heights of the at least one target object. The method can simply and intuitively determine the degree of difference between the respective heights of the at least one target object.
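For illustration only, the two coplanarity measures translate directly into a short sketch; the function name is an assumption.

```python
import numpy as np

def coplanarity(heights):
    """Two coplanarity measures from the respective target object heights.

    Co1: spread between the highest and lowest solder ball.
    Co2: per-ball deviation from the mean height.
    """
    z = np.asarray(heights, dtype=float)
    co1 = z.max() - z.min()  # Co1 = max(Z) - min(Z)
    co2 = z - z.mean()       # Co2 = Z - mean(Z), one value per ball
    return co1, co2

co1, co2 = coplanarity([0.30, 0.31, 0.29, 0.30])
print(co1)  # 0.02 (approximately, up to floating-point error)
print(co2)  # [ 0.    0.01 -0.01  0.  ] (approximately)
```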
For example, for each target object of the at least one target object, determining the height of the target object based on the height information of at least part of the data points in the third point cloud data located in the target object area corresponding to the target object may include: for each target object of the at least one target object, determining object point cloud data corresponding to the target object, wherein the object point cloud data may comprise the data points of the third point cloud data that lie in the target object area corresponding to the target object; deleting invalid data points in the object point cloud data corresponding to the target object; and determining the height of the target object based on at least part of the data points in the deleted object point cloud data; wherein the invalid data points may comprise the data points positioned after the abrupt data point when the data points are sorted by height information in ascending order, the abrupt data point being the first data point, in that ascending order, whose height information differs from that of the previous data point by more than a preset height threshold.
In one embodiment, for the object point cloud data corresponding to each target object, the invalid data points in the object point cloud data may be deleted first, and the height of the target object is then determined based on at least some of the data points in the deleted object point cloud data. The invalid data points may include the data points that follow the abrupt data point in the order of height information from small to large, where the abrupt data point is the first data point whose height difference from the previous data point exceeds a preset height threshold. For example, suppose the height information of the data points in the object point cloud data corresponding to the current target object is Z11, Z12, ..., Z1n. The heights of these n data points are sorted in ascending or descending order, and if the height difference between any data point and the previous data point is greater than the preset height threshold, that data point can be determined to be an abrupt data point. The preset height threshold may be any value set empirically, which is not limited by the present invention. In another embodiment, the first derivative may be computed over the sorted object point cloud data; when the rate of change at any data point is greater than or equal to a preset rate of change, that data point can be regarded as an abrupt data point. For the object point cloud data corresponding to each target object, the abrupt data points contained therein can be deleted in a similar manner, which, for brevity, is not described here again.
According to the technical scheme, for each target object in the at least one target object, the object point cloud data corresponding to the target object is determined, the invalid data points in the object point cloud data are deleted, and the height of the target object is determined based on at least part of the data points in the deleted object point cloud data. Because the heights of the data points within a target object area vary gradually, the method can effectively delete interfering data points in the object point cloud data and avoid the influence of invalid data points on the determined height of the target object.
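A minimal sketch of the invalid-point deletion, assuming one object's heights are available as an array; `jump_threshold` stands in for the preset height threshold and, like the function name, is an assumption for illustration.

```python
import numpy as np

def delete_invalid_points(heights, jump_threshold):
    """Keep points up to the first abrupt height jump; drop the rest."""
    z = np.sort(np.asarray(heights, dtype=float))  # ascending height order
    gaps = np.diff(z)                              # difference to previous point
    jumps = np.nonzero(gaps > jump_threshold)[0]
    if jumps.size == 0:
        return z                                   # no abrupt data point found
    # z[jumps[0] + 1] is the abrupt data point: the first point whose gap
    # to its predecessor exceeds the threshold. Reading the text literally,
    # the points *after* it are the invalid ones, so it is retained here.
    return z[: jumps[0] + 2]
```

The derivative-based variant mentioned above would apply a preset rate of change to the same `gaps` array instead of a fixed height threshold.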
Illustratively, determining the height of the target object based on at least a portion of the data points in the deleted object point cloud data may include: averaging the height information of at least part of the data points in the deleted object point cloud data, and determining the averaging result as the height of the target object, where the at least part of the data points are the preset number or preset proportion of data points with the largest height information in the deleted object point cloud data.
In one embodiment, suppose the deleted object point cloud data includes 100 data points; the height information of the 20 data points with the largest height information among them may be averaged. For another example, the height information of the data points whose height information falls within the top 10% of the 100 data points may be averaged. The preset number or preset proportion can be set arbitrarily according to the number of data points in the object point cloud data, which is not limited by the present invention. The calculated mean of the at least part of the data points may be taken as the height of the current target object. The height of each target object may be calculated in a similar manner, which, for brevity, is not described here again.
According to the technical scheme, the average value of the height information of at least part of data points in the deleted object point cloud data is calculated, and the result of the average value calculation is determined to be the height of the target object. The height of the target object determined by the method has better stability.
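Combining the two preceding steps, the height of one target object could be estimated as sketched below; `top_fraction` mirrors the "preset proportion" (10% in the example above) and, like the function name, is an illustrative assumption.

```python
import numpy as np

def object_height(valid_heights, top_fraction=0.10):
    """Average the largest heights of the cleaned object point cloud."""
    z = np.sort(np.asarray(valid_heights, dtype=float))[::-1]  # descending
    k = max(1, int(len(z) * top_fraction))                     # at least one point
    return float(np.mean(z[:k]))                               # mean of the top-k heights

# With 100 cleaned data points and top_fraction=0.10, this averages the
# 10 largest height values, matching the "top 10%" example above.
```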
According to still another aspect of the present invention, there is also provided an image processing apparatus. Fig. 10 shows a schematic block diagram of an image processing apparatus 1000 according to an embodiment of the present invention. As shown in fig. 10, the image processing apparatus 1000 may include an acquisition module 1010, an adjustment module 1020, a conversion module 1030, and a determination module 1040.
The obtaining module 1010 is configured to obtain first point cloud data of a target to be measured, where the target to be measured includes at least one target object.
The adjustment module 1020 is configured to adjust coordinates of each data point in the first point cloud data in a first direction, so as to obtain adjusted first point cloud data as second point cloud data.
The conversion module 1030 is configured to convert the height information of at least some data points in the second point cloud data into brightness information, so as to obtain a brightness image corresponding to the target to be detected.
A determining module 1040 is configured to determine, based on the luminance image, a target object area where each of the at least one target object is located.
Those skilled in the art will understand the implementation manner and technical effects of the image processing apparatus by reading the above description about the image processing method 100, and for brevity, the description is omitted here.
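Purely as an illustration of how the four modules of fig. 10 could be wired together, the following Python sketch treats each module as a callable; the class and attribute names are assumptions, not taken from the patent.

```python
class ImageProcessingApparatus:
    """Sketch of apparatus 1000: four cooperating modules."""

    def __init__(self, acquire, adjust, convert, determine):
        self.acquisition_module = acquire      # -> first point cloud data
        self.adjustment_module = adjust        # first -> second point cloud data
        self.conversion_module = convert       # height info -> luminance image
        self.determination_module = determine  # luminance image -> object regions

    def process(self, target_under_test):
        cloud_1 = self.acquisition_module(target_under_test)
        cloud_2 = self.adjustment_module(cloud_1)
        luminance = self.conversion_module(cloud_2)
        return self.determination_module(luminance)
```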
According to still another aspect of the present invention, an electronic device is also provided. Fig. 11 shows a schematic block diagram of an electronic device according to an embodiment of the invention. As shown in fig. 11, the electronic device 1100 includes a processor 1110 and a memory 1120, where the memory 1120 stores computer program instructions which, when executed by the processor 1110, are used to perform the image processing method described above.
According to yet another aspect of the present invention, there is also provided a storage medium storing computer programs/instructions. The storage medium may include, for example, a storage component of a tablet computer, a hard disk of a personal computer, an erasable programmable read-only memory (EPROM), a compact disc read-only memory (CD-ROM), a USB memory, or any combination of the foregoing storage media. The storage medium may be any combination of one or more computer-readable storage media. When run by a processor, the computer programs/instructions are used to perform the image processing method described above.
Those skilled in the art will understand the specific implementation of the electronic device and the storage medium by reading the above description about the image processing method, and for brevity, the description is omitted here.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the invention and aid in understanding one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the invention. However, the method of the present invention should not be construed as reflecting the following intent: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the blocks in an image processing apparatus according to an embodiment of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The foregoing description is merely illustrative of specific embodiments of the present invention, and the scope of the present invention is not limited thereto; any variations or substitutions that a person skilled in the art could readily conceive of within the scope disclosed herein fall within the protection scope of the present invention. The protection scope of the invention is subject to the protection scope of the claims.

Claims (17)

1. An image processing method, comprising:
acquiring first point cloud data of a target to be detected, wherein the target to be detected comprises at least one target object;
adjusting coordinates of each data point in the first point cloud data in a first direction to obtain adjusted first point cloud data, and taking the adjusted first point cloud data as second point cloud data;
converting the height information of at least part of data points in the second point cloud data into brightness information so as to obtain a brightness image corresponding to the target to be detected;
and determining a target object area where each target object in the at least one target object is located based on the brightness image.
2. The method of claim 1, wherein the target to be measured further comprises a substrate on which the at least one target object is located, and wherein before the adjusting the coordinates of each data point in the first point cloud data in the first direction to obtain adjusted first point cloud data, and taking the adjusted first point cloud data as second point cloud data, the method further comprises:
fitting a plane in which the substrate is located based on coordinates of at least part of the data points in the first point cloud data to obtain a first reference plane;
wherein the first direction is a direction perpendicular to the first reference plane.
3. The method of claim 2, wherein adjusting coordinates of each data point in the first point cloud data in a first direction to obtain adjusted first point cloud data and using the adjusted first point cloud data as second point cloud data comprises:
constructing a first depth image representing the first reference plane based on parameters in a plane equation of the first reference plane;
and subtracting the coordinates of each data point in the first depth image in the first direction from the coordinates of each data point in the first point cloud data in the first direction to obtain the second point cloud data.
4. The method of claim 2, wherein fitting the plane in which the substrate is located based on coordinates of at least some of the data points in the first point cloud data to obtain a first reference plane comprises:
extracting data points located in a preset edge area of the object to be detected from the first point cloud data, wherein the preset edge area is an annular area, the outer edge of the annular area is the outer edge of the object to be detected, and the inner edge of the annular area is an edge obtained by shrinking the outer edge of the annular area by a preset distance;
and fitting the plane in which the substrate is located based on the coordinates of the extracted data points to obtain the first reference plane.
5. The method of any of claims 1-4, wherein the determining, based on the luminance image, a target object region in which each of the at least one target object is located comprises:
acquiring a template image of the target to be detected, wherein the template image comprises preset feature information, and the preset feature information is used for indicating the positions of the regions of interest corresponding to the at least one target object respectively;
and identifying the at least one target object in the brightness image based on the template image so as to determine the target object area corresponding to each of the at least one target object.
6. The method of claim 5, wherein prior to the acquiring the template image of the object under test, the method further comprises:
determining a plurality of initial grid areas in the template image according to area reference information; wherein each initial grid area contains at most one target object, and the area reference information comprises: the spacing of the initial grid areas, the number of rows in which the initial grid areas are arranged, the number of columns in which the initial grid areas are arranged, and object presence information; the object presence information is used for indicating whether the initial grid area at the corresponding position contains a target object;
binarizing pixel values of pixels in the plurality of initial grid areas, and determining, based on binarization results, the highlight areas respectively contained in the plurality of initial grid areas, wherein the highlight areas contain pixels with pixel values greater than a preset brightness threshold;
for each initial grid region in the plurality of initial grid regions, if the area of the highlight region contained in the initial grid region is larger than a preset area threshold, determining that the initial grid region is a region of interest containing the target object, and otherwise determining that the initial grid region is not a region of interest containing the target object.
7. The method of any of claims 1-4, wherein the determining, based on the luminance image, a target object region in which each of the at least one target object is located comprises:
identifying the at least one target object in the luminance image to determine an initial object region in which each of the at least one target object is located;
for each target object in the at least one target object, performing contour fitting on an initial object area corresponding to the target object, determining a circumcircle of the initial object area corresponding to the target object based on a contour fitting result, and determining the radius of the circumcircle as a fitting radius corresponding to the target object;
determining a comprehensive fitting radius based on the fitting radius corresponding to each of the at least one target object;
and for each target object in the at least one target object, adjusting an initial object area corresponding to the target object to obtain a target object area corresponding to the target object, wherein when the initial object area is adjusted, the center of the initial object area is kept unchanged, and the comprehensive fitting radius is determined as a new radius of the initial object area.
8. The method of any of claims 1-4, wherein after said determining a target object region in which each of the at least one target object is located based on the luminance image, the method further comprises:
adjusting coordinates of each data point in the second point cloud data in a second direction to obtain adjusted second point cloud data, and taking the adjusted second point cloud data as third point cloud data;
and determining the height of each target object in the at least one target object based on the height information of at least part of data points in the target object area corresponding to the target object in the third point cloud data.
9. The method of claim 8, wherein the target to be measured further comprises a substrate on which the at least one target object is located, and wherein before the adjusting the coordinates of each data point in the second point cloud data in the second direction to obtain adjusted second point cloud data, and taking the adjusted second point cloud data as third point cloud data, the method further comprises:
deleting data points contained in the target object areas corresponding to the at least one target object from the second point cloud data, and obtaining plane point cloud data;
fitting the plane in which the substrate is located based on coordinates of each data point in the plane point cloud data to obtain a second reference plane;
wherein the second direction is a direction perpendicular to the second reference plane.
10. The method of claim 9, wherein adjusting coordinates of each data point in the second point cloud data in the second direction to obtain adjusted second point cloud data and using the adjusted second point cloud data as third point cloud data comprises:
constructing a second depth image representing the second reference plane based on parameters in a plane equation of the second reference plane;
and subtracting the coordinates of each data point in the second depth image in the second direction from the coordinates of each data point in the second point cloud data in the second direction to obtain the third point cloud data.
11. The method of claim 8, wherein after determining the height of each of the at least one target object based on the height information of at least some data points in the third point cloud data that are located within the target object region to which the target object corresponds, the method further comprises:
a coplanarity of the at least one target object is determined based on the respective heights of the at least one target object, the coplanarity being indicative of a degree of difference between the respective heights of the at least one target object.
12. The method of claim 8, wherein for each of the at least one target object, determining the height of the target object based on height information of at least some data points in the third point cloud data that are located within a target object region to which the target object corresponds comprises:
for each of the at least one target object,
determining object point cloud data corresponding to the target object, wherein the object point cloud data comprises data points positioned in a target object area corresponding to the target object in the third point cloud data;
deleting invalid data points in the object point cloud data corresponding to the target object;
determining a height of the target object based on at least some of the data points in the deleted object point cloud data;
wherein the invalid data points comprise data points positioned after the abrupt data point in the order of the height information from small to large, and the abrupt data point is the first data point whose difference in height information from the previous data point is greater than a preset height threshold.
13. The method of claim 12, wherein determining the height of the target object based on at least some of the data points in the deleted object point cloud data comprises:
averaging the height information of at least part of data points in the deleted object point cloud data, and determining the average result as the height of the target object;
wherein the at least some data points in the deleted object point cloud data are the preset number or preset proportion of data points with the largest height information in the deleted object point cloud data.
14. The method of any one of claims 1-4, wherein converting the height information of at least some data points in the second point cloud data into luminance information to obtain a luminance image corresponding to the target to be measured includes:
normalizing the height information of the data points in the second point cloud data whose height information is within a preset height range, so as to obtain the brightness image.
15. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring first point cloud data of a target to be detected, wherein the target to be detected comprises at least one target object;
the adjustment module is used for adjusting the coordinates of each data point in the first point cloud data in the first direction to obtain adjusted first point cloud data serving as second point cloud data;
the conversion module is used for converting the height information of at least part of data points in the second point cloud data into brightness information so as to obtain a brightness image corresponding to the target to be detected;
and the determining module is used for determining a target object area where each target object in the at least one target object is located based on the brightness image.
16. An electronic device comprising a processor and a memory, wherein the memory has stored therein computer program instructions which, when executed by the processor, are adapted to carry out the image processing method of any of claims 1-14.
17. A storage medium storing a computer program/instruction which, when executed, is adapted to carry out the image processing method of any one of claims 1 to 14.
Application CN202311786617.6A — Image processing method and device, electronic equipment and storage medium — filed 2023-12-22 with priority date 2023-12-22; published as CN117710352A on 2024-03-15, status pending. Family ID: 90162133. Country: CN.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination