CN117808754A - Target object detection method, target object detection system, electronic equipment and storage medium - Google Patents

Info

Publication number: CN117808754A
Authority: CN (China)
Prior art keywords: area, determining, target object, point cloud, edge
Legal status: Pending
Application number: CN202311788442.2A
Other languages: Chinese (zh)
Inventors: 田松 (Tian Song), 董其波 (Dong Qibo)
Current Assignee: Suzhou Mega Technology Co Ltd
Original Assignee: Suzhou Mega Technology Co Ltd
Application filed by Suzhou Mega Technology Co Ltd
Priority to CN202311788442.2A
Publication of CN117808754A

Abstract

The embodiment of the application provides a target object detection method, a target object detection system, electronic equipment and a storage medium. The detection method comprises the steps of: determining a depth image of the target object according to the three-dimensional coordinates of each point in the acquired point cloud of the target object; determining a first area in the depth image according to the pixel values of the pixels in the depth image; determining a second area in the depth image according to a preset positional relationship between the portion to be detected and the main body portion, together with the first area; and determining the value of a measurement item according to the three-dimensional coordinates of the points corresponding to at least some of the pixels in the first area and the second area, wherein the measurement item represents the relative positional relationship between the main body portion and the detection area of the portion to be detected. By converting three-dimensional data into two-dimensional image data, the scheme realizes the calculation of the measurement items of the portion to be detected of the target object in a reduced dimension, which significantly improves calculation efficiency and saves calculation resources.

Description

Target object detection method, target object detection system, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of product detection technologies, and in particular, to a target object detection method, a target object detection system, an electronic device, and a storage medium.
Background
In the field of product inspection technology, it is often necessary to capture a point cloud of a product to be inspected with a three-dimensional camera and to measure a target inspection site of a target object, such as various electronic components, by processing the point cloud data, so that whether the target object is defective can be determined based on the detected value of the measurement item.
Take a chip as an example. In general, after QFP chips are produced, it is necessary to ensure that the height and angle of the leads fall within a specific range; values out of limit are regarded as defects. To detect device quality, vision inspection equipment therefore often needs to be configured with image processing algorithms capable of measuring pin height and pin angle. In recent years, image processing technology has advanced greatly, and image processing algorithms for various purposes, such as target recognition, character recognition and object tracking, have emerged in an endless stream; defect detection is likewise an important application scenario for these algorithms.
However, in many production test scenarios there is a high requirement on the rate (throughput) of device inspection: good and defective products must be sorted and unloaded immediately after inspection, which requires the algorithm to complete the measurement of the pin height and angle of the device in one image within a short time, so that the device can be judged good or defective before unloading. Although existing image processing algorithms are capable of measuring pin height and angle, they often need to perform complex calculations in three-dimensional space, detecting through the processing and calculation of point clouds; they therefore suffer from a large calculation amount, high resource occupation, long time consumption or complex parameter tuning, and are not suitable for scenarios in which calculation must be performed in real time and results output within a short time.
Disclosure of Invention
In order to at least partially solve the problems existing in the prior art, according to a first aspect of the present application, there is provided a method for detecting a target object, comprising: determining a depth image of the target object according to the three-dimensional coordinates of each point in the acquired point cloud of the target object, wherein the target object comprises a main body portion and a portion to be detected, the portion to be detected being connected to an edge of the main body portion and extending outward, and each point in the point cloud corresponds to one pixel in the depth image; determining a first region in the depth image according to the pixel values of the pixels in the depth image, wherein each pixel in the first region corresponds to a point in a first point cloud portion, the first point cloud portion being the part of the point cloud related to the main body portion; determining a second region in the depth image according to the preset positional relationship between the portion to be detected and the main body portion, together with the first region, wherein each pixel in the second region corresponds to a point in a second point cloud portion, the second point cloud portion being the part of the point cloud related to the portion to be detected; and determining the value of a measurement item according to the three-dimensional coordinates of the points corresponding to at least some of the pixels in the first region and the second region, wherein the measurement item represents the relative positional relationship between the main body portion and the detection area of the portion to be detected.
Illustratively, the measurement item includes a relative distance between a first face region of the main body portion and the end of the portion to be measured and/or a relative angle between the first face region and a second face region of the portion to be measured, and the at least some pixels include first pixels and/or second pixels. Determining the value of the measurement item includes: determining a first plane where the first face region is located according to the three-dimensional coordinates of the first points corresponding to the pixels in the first region; for the case where the measurement item includes the relative distance, determining the relative distance from the distances between the first plane and the points corresponding to the first pixels in the second region; and for the case where the measurement item includes the relative angle, determining a second plane where the second face region is located according to the three-dimensional coordinates of the second points corresponding to the second pixels in the second region, and taking the included angle between the first plane and the second plane as the relative angle.
Illustratively, determining a first region in the depth image includes: performing target segmentation on the depth image to obtain a target area in the depth image, wherein the target area is an area where a target object is located; performing edge extraction on the target area to determine a first position of an edge of the main body part in the depth image; and determining a first region according to the first position.
Illustratively, edge extraction of the target region includes: determining edge pixels in a plurality of directions, starting from the image center of the depth image, wherein in each direction the gradient of the pixel value of the determined edge pixel relative to the pixel values of its adjacent pixels is greater than a gradient threshold; and determining a fitted edge line representing the edge of the first region according to the positions of the determined edge pixels, and taking the position of the fitted edge line as the first position.
Illustratively, determining the second region in the depth image includes: determining a reference position of the portion to be detected according to the preset positional relationship between the portion to be detected and the main body portion and the position of a first edge of the first region, wherein the first edge is the edge at which the portion to be detected connects to the main body portion, and the reference position is the position of the portion to be detected in the depth image; determining a region of interest according to the reference position and a preset size of the portion to be detected, wherein the region of interest contains the reference position and the size of the region of interest is greater than or equal to the preset size; and determining a region formed by a plurality of fourth pixels in the region of interest as the second region, wherein the fourth pixels are the pixels in the region of interest whose pixel values are greater than a pixel threshold.
Illustratively, determining the reference position of the portion to be tested includes: determining the simulated position of the portion to be tested according to the preset positional relationship between the portion to be tested and the main body portion and the position of the first edge of the first region; determining the position difference between the portion to be tested and the simulated position; and determining the reference position based on the position difference and the simulated position.
Illustratively, the target object includes a first number of first portions to be tested, the first portions to be tested are connected to the first edge and are uniformly arranged along the extending direction of the first edge, and determining the simulated positions of the portions to be tested includes: calculating a first distance between the simulated position of each first portion to be tested and one end of the first edge; and determining the simulated position of each first portion to be tested according to the position of that end of the first edge and the first distance; wherein the first distance is calculated using the formula:
P_i = (W - (n-1)*w)/2 + (i-1)*w
where P_i denotes the first distance between the simulated position of the i-th first portion to be tested and one end of the first edge, W denotes the length of the first edge, n denotes the first number, w denotes the spacing between two adjacent first portions to be tested, 1 ≤ i ≤ n, and n ≥ 2.
Illustratively, the first edge is a straight line and the portion to be tested is perpendicular to the first edge, and the method further comprises: calculating a second included angle between the first edge and the image width direction; and rotating the depth image by the second included angle in the counter-clockwise direction so that the first edge is parallel to the image width direction.
Illustratively, determining the depth image of the target object includes: generating the depth image of the target object according to the three-dimensional coordinates of each point in the target point cloud, the first length and the second length of the target point cloud in the first dimension and the second dimension respectively, and the resolutions of the target point cloud in the three dimensional directions; wherein the three dimensions include a first dimension, a second dimension and a third dimension; the image length and the image width of the depth image are respectively equal to the first length and the second length; the first position and the second position of each first pixel in the depth image are respectively equal to the coordinate components, in the first dimension direction and the second dimension direction, of the point of the target point cloud corresponding to the first pixel, the first position and the second position being the position components of the first pixel in the image length direction and the image width direction respectively; and the pixel value of each first pixel in the depth image is positively correlated with the coordinate component, in the third dimension direction, of the point of the target point cloud corresponding to the first pixel.
Illustratively, the method further comprises: shooting the target object with a three-dimensional camera from a direction perpendicular to the first face region to obtain the target point cloud.
Illustratively, the method further comprises: determining at least some of the pixels in the second region according to the information of the measurement item.
Illustratively, determining at least some of the pixels in the second region includes: for the case where the measurement item includes the relative distance, determining the pixels in the second region that meet a first preset condition as the first pixels, wherein the distance between a first pixel and each side of the second region is greater than or equal to the first offset distance corresponding to that side; and for the case where the measurement item includes the relative angle, determining the pixels in the second region that meet a second preset condition as the second pixels, wherein the distance between a second pixel and each side of the second region is greater than or equal to the second offset distance corresponding to that side.
Illustratively, the target object is a chip, the body portion is a chip body, and the portion to be tested is a pin located around the chip body.
Illustratively, the measurement item includes the height of the chip pins and/or the angle of the chip pins, and the method further comprises: determining whether the chip is a qualified product according to the determined height of the chip pins and/or the determined angle of the chip pins.
According to another aspect of the present application, there is also provided a detection system of a target object, including: the first determining module is used for determining a depth image related to the target object according to the three-dimensional coordinates of each point in the obtained point cloud of the target object, wherein the target object comprises a main body part and a part to be detected, the part to be detected is connected with the edge of the main body part and extends outwards, and each point in the point cloud corresponds to one pixel in the depth image; a second determining module, configured to determine a first region in the depth image according to pixel values of respective pixels in the depth image, where each pixel in the first region corresponds to a point in a first point cloud portion, and the first point cloud portion is a point cloud portion related to the main body portion in the point cloud; a third determining module, configured to determine a second area in the depth image according to a preset positional relationship between the portion to be detected and the main body portion and the first area, where each pixel in the second area corresponds to a point in a second point cloud portion, and the second point cloud portion is a point cloud portion related to the portion to be detected in the point cloud; and a fourth determining module, configured to determine a value of a measurement item according to three-dimensional coordinates of points corresponding to at least some pixels in the first area and the second area, where the measurement item represents a relative positional relationship between the main body portion and a detection area of the portion to be detected.
According to another aspect of the present application, there is also provided an electronic device, including a processor and a memory, wherein the memory stores computer program instructions which, when executed by the processor, are used to perform the above target object detection method.
According to another aspect of the present application, there is also provided a storage medium, on which program instructions are stored, the program instructions being configured to perform the above-described target object detection method when executed.
According to the target object detection method of the embodiment of the application, a depth image is generated from the acquired point cloud of the target object; the first area and the second area, corresponding respectively to the main body portion and the portion to be detected of the target object, are determined in turn from the depth image; and finally the value of the measurement item is calculated according to the correspondence between the pixels in the first area and the second area and the point cloud data. By converting three-dimensional data into two-dimensional image data, the scheme realizes the calculation of the measurement items of the portion to be detected in a reduced dimension, which significantly improves calculation efficiency and saves calculation resources. Both the accuracy and the efficiency of the calculation are thus taken into account, and the method is suitable for scenarios with high throughput requirements.
This summary introduces a series of concepts in a simplified form, which are further described in the detailed description section. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Advantages and features of the present application are described in detail below with reference to the accompanying drawings.
Drawings
The following drawings are included to provide an understanding of the present application and constitute a part of it. The drawings illustrate embodiments of the present application, and together with their description serve to explain the principles of the present application. In the drawings:
FIG. 1 shows a schematic flow chart of a method of detecting a target object according to one embodiment of the present application;
FIG. 2a shows a schematic diagram of a point cloud of a target object according to one embodiment of the present application;
FIG. 2b illustrates a depth image of a target object according to one embodiment of the present application;
FIG. 3a shows a simplified schematic diagram of the principle of measurement of chip pin height;
FIG. 3b shows a simplified schematic diagram of the principle of measurement of the chip pin angle;
FIG. 4 illustrates a schematic diagram of a method of determining an edge of a first region according to one embodiment of the present application;
FIG. 5 shows a simplified schematic diagram of a reference location and a region of interest of a part under test according to one embodiment of the present application;
FIG. 6 illustrates a schematic diagram of simulated positions of a part under test according to one embodiment of the present application;
FIG. 7 illustrates a schematic diagram of a principle of determination of a second pixel according to one embodiment of the present application;
FIG. 8 illustrates a flow chart of a method of detecting a target object according to another embodiment of the present application;
FIG. 9 shows a schematic block diagram of a detection system for a target object according to an embodiment of the present application; and
fig. 10 shows a schematic block diagram of an electronic device according to an embodiment of the application.
Detailed Description
In the following description, numerous details are provided to provide a thorough understanding of the present application. However, it will be understood by those skilled in the art that the following description illustrates preferred embodiments of the present application by way of example only and that the present application may be practiced without one or more of these details. In addition, some technical features that are known in the art have not been described in detail in order to avoid obscuring the present application.
In order to solve the above technical problems at least in part, according to one aspect of the present application, a method for detecting a target object is provided. The method can effectively solve the problems of large calculation amount and large occupied resources when the point cloud data is utilized to detect the target object, can finish detection in a short time and output results, and can realize real-time and accurate detection of the target object.
Fig. 1 shows a schematic flow chart of a method 100 of detecting a target object according to one embodiment of the present application. As shown, the method 100 includes step S120, step S140, step S160 and step S180.
Step S120, determining a depth image about the target object according to the three-dimensional coordinates of each point in the acquired point cloud of the target object. The target object comprises a main body part and a part to be detected, the part to be detected is connected with the edge of the main body part and extends outwards, and each point in the point cloud corresponds to one pixel in the depth image.
According to embodiments of the present application, the target object may be any suitable object, which is not limited by the present application. The detection scheme can be suitable for various scenes needing to detect the target object. Illustratively, the target object may be a chip, the body portion may be a chip body, and the portion to be tested may be respective pins located around the chip body.
Fig. 2a shows a schematic diagram of a point cloud of a target object according to one embodiment of the present application. Fig. 2b shows a depth image of a target object according to one embodiment of the present application. Fig. 2a is an image of the point cloud of a chip to be detected, and Fig. 2b shows the depth image of the chip determined in this step. The target object is a chip, the main body portion corresponds to the chip body, and the portion to be tested corresponds to the chip pins. A point cloud is a collection of the coordinates of a large number of points on an object's surface. The chip point cloud may be generated by one or more three-dimensional cameras photographing the surface of the chip to be detected.
In this step, the depth image of the target object may be determined by a variety of suitable methods.
Illustratively, determining a depth image for the target object may include:
generating the depth image of the target object according to the three-dimensional coordinates of each point in the target point cloud, the first length and the second length of the target point cloud in the first dimension and the second dimension respectively, and the resolutions of the target point cloud in the three dimensional directions. The three dimensions include a first dimension, a second dimension and a third dimension; the image length and the image width of the depth image are respectively equal to the first length and the second length; the first position and the second position of each first pixel in the depth image are respectively equal to the coordinate components, in the first dimension direction and the second dimension direction, of the point of the target point cloud corresponding to the first pixel, the first position and the second position being the position components of the first pixel in the image length direction and the image width direction respectively; and the pixel value of each first pixel in the depth image is positively correlated with the coordinate component, in the third dimension direction, of the point of the target point cloud corresponding to the first pixel.
In some examples, the image length and the image width of the depth image may also have a linear relationship with the first length and the second length, respectively; the first position and the second position of each first pixel in the depth image have a linear relationship with the coordinate components in the first dimension direction and the second dimension direction of the point of the target point cloud corresponding to the first pixel respectively.
As shown in Fig. 2a, most points in the chip point cloud fall on the surface of the chip. The coordinates of each point in the point cloud include the coordinate information of the point in the X, Y and Z directions; the first dimension, the second dimension and the third dimension correspond to the X, Y and Z directions respectively. From the three-dimensional coordinates of the points, the width and length of the whole point cloud (such as the extent of the points in the X and Y directions) can be calculated, and the resolutions of the point cloud in the X/Y/Z directions can be obtained. From this information, a depth image of the chip can be generated. Taking the chip surface as an example, the plane of the chip surface corresponds to the plane formed by the X and Y directions, and protrusions or depressions relative to this plane extend in the Z direction; that is, Z represents the depth information.
As shown in Fig. 2b, the depth image is a two-dimensional image; the width of the depth image may be equal to the length of the point cloud, and the height of the depth image may be equal to the width of the point cloud. For each point in the point cloud, its corresponding first pixel in the depth image may be found from its coordinates in the X and Y directions, and the pixel value of the first pixel may be determined from the Z coordinate of the point; for example, the Z coordinate of a point in the point cloud equals the pixel value of the corresponding first pixel. It will be appreciated that there may also be pixels that do not correspond to any point in the point cloud; these pixels may be assigned a preset pixel value, for example 0. Because the gray value range is [0, 255] and the Z coordinates of the points in the point cloud may fall outside this range, the Z coordinate of each point may be normalized by the following formula:
value = 255 * (Z - Zmin) / (Zmax - Zmin)
where value is the gray value, Z is the depth value corresponding to each pixel position, and Zmin and Zmax are respectively the minimum and maximum depth values in the point cloud data.
The Z coordinate of each point in the chip point cloud can thus be extracted as the pixel value of a pixel in the depth image; the values are assigned to the 2D image row by row, thereby generating the depth image and realizing the conversion from the three-dimensional point cloud to the two-dimensional image.
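As a concrete illustration, the following is a minimal sketch of this conversion in Python with NumPy; all names and parameters (the resolutions rx and ry, the background fill) are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def point_cloud_to_depth_image(points, rx, ry):
    """Project an Nx3 point cloud (x, y, z) onto a 2D depth image.

    rx, ry are the point-cloud resolutions (point spacings) along X and Y.
    Assumes zmax > zmin; background pixels are filled with Zmin so they
    normalize to gray value 0, matching the preset pixel value above.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # The image size follows from the point cloud's extent and resolution.
    cols = int(round((x.max() - x.min()) / rx)) + 1
    rows = int(round((y.max() - y.min()) / ry)) + 1
    zmin, zmax = z.min(), z.max()
    depth = np.full((rows, cols), zmin, dtype=np.float64)

    # A pixel's position is linear in the X/Y coordinates of its point,
    # and its value is the point's Z (depth) coordinate.
    u = np.round((x - x.min()) / rx).astype(int)
    v = np.round((y - y.min()) / ry).astype(int)
    depth[v, u] = z

    # Normalize Z into the gray range [0, 255] as in the formula above.
    gray = np.round(255 * (depth - zmin) / (zmax - zmin)).astype(np.uint8)
    return depth, gray
```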
It can be appreciated that the depth map has a smaller data size and occupies less memory than the point cloud data, and thus the calculation speed is faster.
Step S140, determining a first region in the depth image according to pixel values of pixels in the depth image. Wherein each pixel in the first region corresponds to a point in a first point cloud portion, which is a point cloud portion with respect to the main body portion, of the point clouds.
Step S160, determining a second area in the depth image according to the preset positional relationship between the portion to be detected and the main body portion, together with the first area. Each pixel in the second area corresponds to a point in a second point cloud portion, the second point cloud portion being the part of the point cloud related to the portion to be detected.
Taking the chip point cloud and the depth image of the chip as an example, the first region may correspond to an imaging region of the main body portion of the chip. The first point cloud portion may be a point cloud of a chip body portion of the chip point clouds described above. The second region may correspond to an imaging region of the chip pins. The second point cloud portion may be a point cloud of a pin portion in the chip point cloud described above.
In step S140, the location of the chip body region in the depth image of the chip may be determined using a variety of suitable methods, including, but not limited to, various methods of edge extraction and methods of object segmentation.
The preset positional relationship between each part to be measured and the main body part can be any positional parameter capable of representing the positional relationship between the two parts. For example, it may include the relative distance of the center of the part under test and the center of the main body part in a preset direction (e.g., the chip length and the chip width direction). Alternatively, the relative distance between the center or the left and right boundaries of the portion to be measured and one edge position of the main body portion in the preset direction may be included. According to an embodiment of the present application, the preset direction is perpendicular to a depth direction of the target object represented by the depth image. For example, the preset direction may be a chip length direction and/or a chip width direction.
In step S160, take as an example the case where the preset positional relationship between each portion to be measured and the main body portion is the relative distance between the center of the portion to be measured and the center of the main body portion in the chip length direction and the chip width direction. The coordinates of the center of the chip body region in the depth image, along a first direction corresponding to the chip length direction and a second direction corresponding to the chip width direction, may be determined first; the center position coordinates of each portion to be measured may then be calculated, and the imaging area of each portion to be measured further determined. Of course, the location of the second region in the depth image may also be determined using a variety of other suitable methods.
In step S180, a value of the measurement item is determined according to the three-dimensional coordinates of the points corresponding to at least some pixels in the first region and the second region. The measurement item represents a relative positional relationship between the main body portion and the detection area of the portion to be detected, where the relative positional relationship may include a relative distance, a relative angle, and the like between any suitable position of the main body portion and a target detection position of the portion to be detected.
Illustratively, each measurement item may correspond to one detection zone of the body portion (e.g., referred to as a body portion detection zone) and one detection zone of the portion under test (e.g., referred to as a portion under test detection zone). The main body portion detection area and the portion detection area to be detected corresponding to different measurement items may be different. The relative positional relationship between the main body portion and the detection area of the portion to be detected may be a relative positional relationship along a preset detection surface or detection direction. The detection surface or detection direction may be perpendicular to the predetermined direction. For example, the detection direction may correspond to a depth direction of the target object.
Illustratively, the measurement item may include the height of the chip pins and/or the angle of the chip pins, and the detection method further includes determining whether the chip is a qualified product based on the determined height and/or angle of the chip pins, as sketched below. The height of a chip pin is the height difference between the pin and the chip body, and the angle of a chip pin is its degree of bending. Fig. 3a shows a simple schematic of the principle of measuring the chip pin height. Fig. 3b shows a simple schematic of the principle of measuring the chip pin angle. It will be appreciated that a pin height that is too high or too low may prevent a chip placed on a circuit board from lying flat, or leave pins out of contact with their pads, resulting in cold solder joints and the like; pin angle defects may likewise affect the quality of the soldering and lead to product failure.
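For instance, the qualification decision can be a simple range check. The following sketch assumes hypothetical per-pin measurements and tolerance limits; the ranges shown are placeholders, not values from the patent:

```python
def chip_is_qualified(pin_heights, pin_angles,
                      height_range=(0.1, 0.3), angle_range=(0.0, 8.0)):
    """Judge a chip qualified/defective from per-pin height (mm) and
    angle (degree) measurements; tolerance ranges are illustrative."""
    lo_h, hi_h = height_range
    lo_a, hi_a = angle_range
    heights_ok = all(lo_h <= h <= hi_h for h in pin_heights)
    angles_ok = all(lo_a <= a <= hi_a for a in pin_angles)
    return heights_ok and angles_ok
```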
For example, at least some of the pixels in the second region may be determined according to the information of the measurement item. As shown in Fig. 2a and 2b, whether in the point cloud or in the depth map generated from it, there are more noise points at the edges of the chip body and the chip pins, which may be caused by reflections of light during image acquisition, while areas away from the edges are imaged with better quality. Therefore, a subset of pixels away from the edges of the first area can be selected and the point cloud data corresponding to them obtained; the calculation is likewise carried out on the point cloud data corresponding to pixels away from the edges of the second area. The height and angle information can thereby be obtained, and whether the chip is defective judged.
According to the target object detection method of the embodiment of the application, a depth image is generated from the acquired point cloud of the target object; the first area and the second area, corresponding respectively to the main body portion and the portion to be detected, are determined in turn from the depth image; and finally the value of the measurement item is calculated according to the correspondence between the pixels in the first area and the second area and the point cloud data. By converting three-dimensional data into two-dimensional image data, the scheme realizes the calculation of the measurement items of the portion to be detected in a reduced dimension, which significantly improves calculation efficiency and saves calculation resources. Both the accuracy and the efficiency of the calculation are thus taken into account, and the method is suitable for scenarios with high throughput requirements.
Illustratively, the target object may be a chip, the main body portion being the chip body and the portions to be tested being the pins located around the chip body. Taking a QFP chip as an example, such a packaged chip typically includes a generally rectangular body portion and peripheral leads. When device quality is inspected, the height and angle of the pins must fall within a specific range, and values out of limit are regarded as defects. As described above, in many production test scenarios there is a high requirement on the rate (throughput) of device inspection. The target object detection method of the application can measure the pin height and angle quickly and accurately, greatly improving the detection rate and meeting production requirements.
Illustratively, the detection method includes shooting the target object with a three-dimensional camera from a direction perpendicular to the first face region to obtain the target point cloud, where the first face region is the top plane of the chip body. When the three-dimensional camera is perpendicular to the first face region and the chip is, for example, laid flat, the measured data are the distances from the camera to the top plane of the chip body and to each position of the pin region; there is no included angle in the depth (Z) direction, and the top plane of the chip body, i.e., the first face region, can be regarded as a horizontal plane. The height can therefore be obtained by simple addition or subtraction of the Z-direction data, and the included angle between a pin and the horizontal plane, i.e., between the pin and the first face region of the chip, can be obtained from the Z-direction data at different pixel positions of the pin.
Shooting the target object from a direction perpendicular to the first face region with the three-dimensional camera greatly simplifies the calculation and shortens the detection time, thereby improving detection efficiency; a minimal sketch of the flat-chip case follows.
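The following sketch illustrates this simple-difference case, assuming the chip lies flat so that the pin height reduces to plain Z arithmetic; function and parameter names are illustrative:

```python
import numpy as np

def pin_height_flat(z_body, z_pin_tips):
    """Pin height when the chip lies flat and the camera looks straight
    down: a plain difference of Z data, no plane fitting required.

    z_body:     Z values sampled on the chip-body top plane.
    z_pin_tips: Z values sampled at the pin ends.
    """
    # Medians resist imaging noise better than single samples.
    return float(abs(np.median(np.asarray(z_pin_tips))
                     - np.median(np.asarray(z_body))))
```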
Illustratively, the measurement item includes the relative distance between the first face region of the main body portion and the end of the portion to be measured and/or the relative angle between the first face region and the second face region of the portion to be measured, and the at least some pixels include first pixels and/or second pixels. Determining the value of the measurement item includes: determining the first plane where the first face region is located according to the three-dimensional coordinates of the first points corresponding to the pixels in the first region; for the case where the measurement item includes the relative distance, determining the relative distance from the distances between the first plane and the points corresponding to the first pixels in the second region; and for the case where the measurement item includes the relative angle, determining the second plane where the second face region is located according to the three-dimensional coordinates of the second points corresponding to the second pixels in the second region, and taking the included angle between the first plane and the second plane as the relative angle.
In practical testing, the chip may not lie flat, for example because of stains on its bottom surface or an uneven platform. In that case, if the first face region is still treated as a horizontal plane for detection, the detection result may deviate seriously. Therefore, a first plane can be fitted as the reference plane from the three-dimensional coordinates of the first points of the first region by the least square method, giving the plane equation Ax + By + Cz + D = 0, whose plane normal vector is (A, B, C).
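A minimal sketch of such a plane fit is given below. It solves the total least-squares plane via SVD, which is one common way to realize the least-squares fit named in the text; the patent does not prescribe a particular solver:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 array of points.

    Returns (A, B, C, D) with unit normal (A, B, C) such that
    A*x + B*y + C*z + D = 0 best fits the points.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # The best-fit normal is the right singular vector belonging to the
    # smallest singular value of the centered coordinates.
    _, _, vt = np.linalg.svd(points - centroid)
    a, b, c = vt[-1]
    d = -(a * centroid[0] + b * centroid[1] + c * centroid[2])
    return a, b, c, d
```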
Illustratively, the pin portion includes an end portion and a connection portion connecting the end portion to the chip body; here the second face region used for calculating the relative distance and the relative angle is the end portion. A subset of pixels can be selected from the second face region, and the distance between the pin area and the first plane determined from the point cloud data of those pixels, using the point-to-plane distance formula:
d = |A*x0 + B*y0 + C*z0 + D| / sqrt(A^2 + B^2 + C^2)
where (x0, y0, z0) is the point corresponding to a selected pixel.
The median of all calculated distance values can be taken as the height of the pin; this better resists interference from imaging noise and ensures the stability and accuracy of the measurement.
For the relative angle, the point cloud data corresponding to the second pixels in the second area can be obtained and a pin plane fitted by the least square method, giving the plane equation A1x + B1y + C1z + D1 = 0, whose plane normal vector is (A1, B1, C1). With the plane equation of the second plane thus determined, the included angle between the first plane and the second plane is calculated from the two normal vectors:
cos θ = |A*A1 + B*B1 + C*C1| / (sqrt(A^2 + B^2 + C^2) * sqrt(A1^2 + B1^2 + C1^2))
The angle θ is the angle of the pin.
By calculating the plane equation, the reference plane can be determined, so that the height and angle of the chip pins are calculated adaptively and the influence of an unevenly placed chip on the detection result is avoided. Selecting a subset of pixels and calculating the relative distance from the point cloud data corresponding to them reduces the calculation amount and better resists interference from imaging noise; calculating the plane equation of the second plane from a subset of pixels likewise makes the pin angle calculation more accurate.
Illustratively, determining the first region in the depth image includes: performing target segmentation on the depth image to obtain the target region in the depth image, the target region being the region where the target object is located; performing edge extraction on the target region to determine the first position of the edge of the main body portion in the depth image; and determining the first region according to the first position. The target object may be segmented from the rest of the image, for example by threshold segmentation, so as to avoid interference during edge extraction. Edge extraction may then be performed on the target object using a suitable method such as threshold segmentation or semantic segmentation, thereby obtaining the main body region of the chip.
Therefore, the position and the angle of the main body part can be determined through the edge points and the edge lines, so that the probability of error occurrence in the subsequent recognition and extraction of the part to be detected is reduced.
Illustratively, edge extraction of the target region includes: determining edge pixels in a plurality of directions, starting from the image center of the depth image, wherein in each direction the gradient of the pixel value of the determined edge pixel relative to the pixel values of its adjacent pixels is greater than a gradient threshold; and determining a fitted edge line representing the edge of the first region according to the positions of the determined edge pixels, taking the position of the fitted edge line as the first position. The gradient threshold can be set according to actual requirements.
Fig. 4 shows a schematic diagram of a method for determining an edge of the first area according to an embodiment of the present application. As shown in Fig. 4, there is a gradient change in pixel value where the chip body connects to the pins. Pins are arranged in four directions around the chip body, and edge points can be searched upward, downward, leftward and rightward from the center of the chip body according to the gradient threshold. The chip in the figure is generally rectangular, so 4 edge lines fitted from the edge points are obtained. Each edge line is a straight line; that is, edge points that deviate too far from the line are discarded, and the line is fitted from the larger number of remaining edge points.
Searching outward from the center of the image ensures that most of the found edge points lie within the main body portion, which ensures the accuracy of the edge extraction. Edge extraction with a gradient threshold uses a simple algorithm with a short running time and can meet the requirement on detection speed; a sketch of this search is given below.
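The following is a minimal sketch of the center-outward gradient search and the outlier-discarding line fit, under the assumption of a grayscale depth image stored as a 2D array; all thresholds and names are illustrative:

```python
import numpy as np

def find_edge_pixel(gray, start, step, grad_thresh):
    """Walk from `start` (row, col) along `step` (drow, dcol) and return
    the first pixel whose value jumps by more than grad_thresh relative
    to the previous pixel, or None if no such jump is found."""
    r, c = start
    prev = int(gray[r, c])
    while 0 <= r + step[0] < gray.shape[0] and 0 <= c + step[1] < gray.shape[1]:
        r += step[0]
        c += step[1]
        cur = int(gray[r, c])
        if abs(cur - prev) > grad_thresh:
            return (r, c)
        prev = cur
    return None

def fit_edge_line(points, outlier_px=2.0):
    """Fit row = k*col + b through edge points, discarding points that
    deviate from a first-pass fit by more than outlier_px pixels.
    (For a near-vertical edge, swap the roles of row and col.)"""
    pts = np.asarray(points, dtype=float)
    k, b = np.polyfit(pts[:, 1], pts[:, 0], 1)
    resid = np.abs(pts[:, 0] - (k * pts[:, 1] + b))
    keep = pts[resid <= outlier_px]
    k, b = np.polyfit(keep[:, 1], keep[:, 0], 1)
    return k, b
```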
Illustratively, the first edge is a straight line and the portion to be tested is perpendicular to the first edge, where the first edge is the edge at which the portion to be tested connects to the main body portion. The detection method further includes: calculating the second included angle between the first edge and the image width direction; and rotating the depth image by the second included angle in the counter-clockwise direction so that the first edge is parallel to the image width direction.
As shown in Fig. 4, 4 edges were extracted from the main body area of the chip's depth image in the preceding step. Taking the first edge to be the upper edge, the upper edge may form an included angle with the image width direction; suppose the calculated second included angle is 5°. For ease of calculation, the depth image is rotated by 5° so that the upper edge becomes parallel to the image width direction. The width direction of the chip body then corresponds to the height direction of the image, and the length direction of the chip body to the width direction of the image, so that during the measurement process positions can be indexed directly along the image axes, simplifying the subsequent calculations.
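Such an alignment rotation can be realized with OpenCV, as in the sketch below. Note that OpenCV treats a positive angle as counter-clockwise, so the sign of the measured angle determines the actual direction; names are illustrative:

```python
import cv2

def align_first_edge(gray, angle_deg):
    """Rotate the depth image about its center by angle_deg (positive =
    counter-clockwise in OpenCV) so the first edge becomes horizontal."""
    h, w = gray.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(gray, m, (w, h))
```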
Illustratively, determining the second region in the depth image includes: determining the reference position of the portion to be detected according to the preset positional relationship between the portion to be detected and the main body portion and the position of the first edge of the first region, the reference position being the position of the portion to be detected in the depth image; determining a region of interest according to the reference position and the preset size of the portion to be detected, the region of interest containing the reference position and being greater than or equal to the preset size; and determining a region formed by a plurality of fourth pixels in the region of interest as the second region, the fourth pixels being the pixels in the region of interest whose pixel values are greater than the pixel threshold.
Fig. 5 shows a simple schematic of the reference position of the portion to be tested and the region of interest according to one embodiment of the present application. For example, a user may input the model or specification of the chip, from which the preset positional relationship between the pins and the main body portion can be determined; say, 11 uniformly distributed pins are arranged on one side of the chip. From the determined first edge, the approximate position of each of the 11 pins, i.e., its reference position, can be determined. The size of the pins is also known, so a region of interest larger than the pin size can be defined around the reference position to ensure that the pin is fully contained within it. The pin area within the region of interest can then be further determined according to a preset pixel threshold; that is, the pixels corresponding to the background area can be filtered out, as in the sketch below.
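A minimal sketch of cutting the region of interest and thresholding out the background might look as follows; the half-size parameters and names are illustrative assumptions:

```python
import numpy as np

def pin_pixels_in_roi(gray, ref_rc, half_h, half_w, pixel_thresh):
    """Cut a region of interest around a pin's reference position and
    keep the pixels brighter than the threshold (the 'fourth pixels').

    ref_rc: (row, col) reference position; half_h/half_w are chosen so
    the ROI is larger than the known pin size.
    """
    r, c = ref_rc
    r0, r1 = max(r - half_h, 0), min(r + half_h, gray.shape[0])
    c0, c1 = max(c - half_w, 0), min(c + half_w, gray.shape[1])
    roi = gray[r0:r1, c0:c1]
    rows, cols = np.nonzero(roi > pixel_thresh)  # drop background pixels
    # Return the pin pixels in full-image coordinates.
    return np.column_stack((rows + r0, cols + c0))
```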
The position of each second region can be estimated from the reference position, which avoids confusion among multiple second regions. Defining a region of interest and accurately extracting the portion to be tested within it according to a preset pixel threshold ensures the accuracy and reliability of extracting the position of the portion to be tested.
Illustratively, determining the reference position of the portion to be tested includes: determining the simulated position of the portion to be tested according to the preset positional relationship between the portion to be tested and the main body portion and the position of the first edge of the first region; determining the position difference between the portion to be tested and the simulated position; and determining the reference position based on the position difference and the simulated position.
It can be understood that the calculated simulated positions are the positions the respective portions to be measured would occupy in the image under ideal conditions. Subject to the limitations of various conditions, the actual position of a portion to be measured in the depth image is likely to be offset from it. Fig. 6 shows a schematic diagram of the simulated positions of the portions to be tested according to one embodiment of the present application. As shown in Fig. 6, the simulated position determined for each lead area (the cross center in the figure) is offset from the center of the lead area. In this case, the simulated position may be further corrected, with the corrected position taken as the reference position.
The simulated position may be corrected using a variety of suitable methods. For example, as shown in Fig. 6, the center positions of the pin areas in the depth image may be obtained by a method such as threshold segmentation or semantic segmentation, the position difference between each center position and the corresponding simulated position obtained, and the reference position thereby determined.
The reference position determined by this scheme can accurately represent the position of the portion to be detected, so the detection precision can be improved while the calculation amount remains small.
Illustratively, the target object includes a plurality of portions to be tested. A first number of first portions to be tested are connected to the first edge and are uniformly arranged along the extending direction of the first edge. Determining the simulated positions of the portions to be tested includes: calculating a first distance between the simulated position of each first portion to be tested and one end of the first edge; and determining the simulated position of each first portion to be tested according to the position of that end of the first edge and the first distance; wherein the first distance is calculated using the formula:
P_i = (W - (n-1)*w)/2 + (i-1)*w
where P_i denotes the first distance between the simulated position of the i-th first portion to be tested and one end of the first edge, W denotes the length of the first edge, n denotes the first number, w denotes the spacing between two adjacent first portions to be tested, 1 ≤ i ≤ n, and n ≥ 2.
The simulated position may be, for example, the center position of each first portion to be tested, and the length of the first edge may be obtained, for example, during the edge extraction described above. Take the chip image in Fig. 6 as an example: 11 pins are distributed along each of the 4 edges of the chip. Consider the upper edge of the chip body, which is 15 mm long with 11 pins arranged along it at a spacing of 1 mm, the pin pitch being the distance from one pin center to the next. The position of the first pin is then (15 mm - (11-1) * 1 mm)/2 + 0 * 1 mm = 2.5 mm. The 11 pins span 10 pitches of 1 mm each, so the distance from the center of the first pin to the first end of the edge plus the distance from the center of the last pin to the second end together equal 5 mm; since the chip is axisymmetric, the two distances are equal, and the simulated position of the first portion to be tested lies 2.5 mm from the first end of the first edge. The center of each subsequent pin is obtained by adding one pitch, as the sketch below illustrates.
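A small sketch of this spacing formula, reproducing the worked example from the text; function and parameter names are illustrative:

```python
def simulated_pin_offsets(edge_len, n_pins, pitch):
    """Distance from one end of the first edge to each pin's simulated
    (center) position: P_i = (W - (n-1)*w)/2 + (i-1)*w."""
    first = (edge_len - (n_pins - 1) * pitch) / 2.0
    return [first + i * pitch for i in range(n_pins)]

# Worked example from the text: a 15 mm edge with 11 pins at 1 mm pitch
# places the first pin 2.5 mm from the end of the edge.
assert simulated_pin_offsets(15.0, 11, 1.0)[0] == 2.5
```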
Thus, the reference position of each portion to be tested can be estimated according to the preset positional relationship, and the position of the portion to be tested accurately determined. The simulated position obtained in this way involves simple logic, and it can be checked against the reference position determined in other ways, further increasing the accuracy.
Illustratively, determining at least some of the pixels in the second region includes: for the case where the measurement item includes the relative distance, determining the pixels in the second region that meet a first preset condition as the first pixels, wherein the distance between a first pixel and each side of the second region is greater than or equal to the first offset distance corresponding to that side; and for the case where the measurement item includes the relative angle, determining the pixels in the second region that meet a second preset condition as the second pixels, wherein the distance between a second pixel and each side of the second region is greater than or equal to the second offset distance corresponding to that side.
It can be understood that the region formed by the first pixels and the region formed by the second pixels represent the detection regions of the portion to be measured corresponding to the respective measurement items.
Continuing with the example of the pins along the upper edge of the chip body: as described above, to keep imaging noise from affecting the measurement, a subset of pixels is selected for measurement. The height measurement region where the first pixels are located is kept at a distance greater than or equal to the first offset distance from each side of the second region, so as to avoid the edge area and reduce noise interference. For the angle measurement region, the second pixels should additionally span a certain distance in the up-down direction, so that the calculated second plane is more accurate; the distance between a second pixel and each side of the second region is likewise greater than or equal to the second offset distance corresponding to that side.
Fig. 7 shows a schematic diagram of the principle of determining the second pixels according to one embodiment of the present application. As shown in the figure, the 4 boundaries of the angle measurement region within each lead area may be determined by preset offset distances, and the angle measurement region then determined from the positions of those 4 boundaries; every pixel in the angle measurement region is a second pixel. The positions of the upper, left and right edges of each lead area may be determined first. The upper and lower boundaries of the angle measurement region may then be determined from the upper edge together with preset upper offset distances 1 and 2 respectively; the left boundary from the left edge of the lead area and a preset left offset distance; and the right boundary from the right edge and a preset right offset distance.
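In code, such an offset-based measurement region amounts to shrinking the pin region's bounding box by per-side distances; a minimal sketch, with bounds ordering and names as illustrative assumptions:

```python
def measurement_region(bounds, offsets):
    """Shrink a pin region's bounding box by per-side offset distances.

    bounds:  (top, bottom, left, right) of the pin region, in pixels.
    offsets: (top, bottom, left, right) offsets; every pixel of the
    returned box is at least that far from the matching side.
    """
    top, bottom, left, right = bounds
    off_t, off_b, off_l, off_r = offsets
    return (top + off_t, bottom - off_b, left + off_l, right - off_r)

# Distinct offset tuples yield the height vs. angle measurement regions.
```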
This scheme effectively avoids the influence of imaging noise, and it unifies the standard by which the height measurement regions and the angle measurement regions are generated, so misjudgments caused by differing measurement positions can be avoided.
Fig. 8 shows a flowchart of a method of detecting a target object according to another embodiment of the present application. As shown, the point cloud data of a chip is first acquired and a depth image generated from it; the depth image is normalized to obtain a normalized depth image. The chip body region may then be segmented using any suitable threshold segmentation method. According to a preset gradient threshold, edge points matching the gradient are searched from the center of the main body area outward, and edge straight lines are fitted from those edge points. From the edge lines, the three-dimensional data of the points of the point cloud falling in the chip body area can be determined, and the first plane of the chip body surface represented by a least-squares fit. Next, the reference position of each pin area is determined from the previously fitted first edge of the chip body area and the preset positional relationship between the chip body and the pins; the region of interest of each pin area is determined from the reference position; and the pin area within each region of interest is determined by threshold segmentation. The height measurement area in each lead area may be determined from the edge of the lead area and a preset first offset distance, and the angle measurement area from the edge of the lead area and a preset second offset distance. The height of each pin is obtained by calculating the distance from the three-dimensional data of the height measurement area to the first plane; and a second plane is fitted to the angle measurement area of each pin area by the least square method, with the included angle between the first plane and the second plane calculated to obtain the pin angle.
According to another aspect of the present application, there is also provided a target object detection system. Fig. 9 shows a schematic block diagram of a detection system 900 of a target object according to an embodiment of the present application. As shown, the detection system 900 includes:
a first determining module 910, configured to determine a depth image related to a target object according to three-dimensional coordinates of each point in an acquired point cloud of the target object, where the target object includes a main body portion and a portion to be measured, and the portion to be measured is connected to an edge of the main body portion and extends outwards, and each point in the point cloud corresponds to one pixel in the depth image;
a second determining module 920, configured to determine a first region in the depth image according to pixel values of pixels in the depth image, where each pixel in the first region corresponds to a point in a first point cloud portion, and the first point cloud portion is a point cloud portion related to the main body portion in the point cloud;
a third determining module 930, configured to determine a second region in the depth image according to a preset positional relationship between the portion to be measured and the main body portion and according to the first region, where each pixel in the second region corresponds to a point in a second point cloud portion, and the second point cloud portion is the point cloud portion related to the portion to be measured in the point cloud; and
a fourth determining module 940, configured to determine a value of a measurement item according to three-dimensional coordinates of points corresponding to at least some of the pixels in the first region and the second region, where the measurement item represents a relative positional relationship between the main body portion and the detection area of the portion to be measured.
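For orientation only, the four modules of Fig. 9 can be pictured as the following hypothetical Python skeleton (method bodies elided; none of these names come from the application itself):

import numpy as np

class TargetObjectDetectionSystem:
    # Hypothetical skeleton mirroring modules 910-940 of Fig. 9.

    def determine_depth_image(self, point_cloud: np.ndarray) -> np.ndarray:
        # Module 910: map each 3-D point to one pixel of a depth image.
        ...

    def determine_first_region(self, depth_image: np.ndarray) -> np.ndarray:
        # Module 920: segment the body (first) region from pixel values.
        ...

    def determine_second_region(self, depth_image: np.ndarray,
                                first_region: np.ndarray) -> np.ndarray:
        # Module 930: locate the region of the portion to be measured
        # from the preset positional relationship and the first region.
        ...

    def determine_measurement(self, point_cloud: np.ndarray,
                              first_region: np.ndarray,
                              second_region: np.ndarray) -> dict:
        # Module 940: compute the measurement values (e.g. pin height
        # and angle) from the 3-D points behind selected pixels.
        ...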
According to another aspect of the present application, there is also provided an electronic device. Fig. 10 shows a schematic block diagram of an electronic device 1000 according to an embodiment of the present application. As shown, the electronic device 1000 includes a processor 1010 and a memory 1020. The memory 1020 stores computer program instructions which, when executed by the processor 1010, perform the target object detection method 100 described above.
According to another aspect of the present application, there is also provided a storage medium. The storage medium stores program instructions for performing the above-described target object detection method 100 when run. The storage medium may include, for example, an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the foregoing storage media. The storage medium may be any combination of one or more computer-readable storage media.
Those skilled in the art can understand the specific implementations and beneficial effects of the target object detection system, the electronic device, and the storage medium by reading the above description of the target object detection method; for brevity, details are not repeated here.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that, in order to streamline the application and aid the understanding of one or more of the various inventive aspects, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the application. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the present application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules in the target object detection system according to embodiments of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application may also be embodied as device programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The foregoing is merely illustrative of specific embodiments of the present application and the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are intended to be covered by the scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A method of detecting a target object, comprising:
determining a depth image related to the target object according to three-dimensional coordinates of each point in an acquired point cloud of the target object, wherein the target object includes a main body portion and a portion to be measured, the portion to be measured is connected to an edge of the main body portion and extends outward, and each point in the point cloud corresponds to one pixel in the depth image;
determining a first region in the depth image according to pixel values of pixels in the depth image, wherein each pixel in the first region corresponds to a point in a first point cloud portion, and the first point cloud portion is the point cloud portion of the point cloud related to the main body portion;
determining a second region in the depth image according to a preset positional relationship between the portion to be measured and the main body portion and according to the first region, wherein each pixel in the second region corresponds to a point in a second point cloud portion, and the second point cloud portion is the point cloud portion of the point cloud related to the portion to be measured; and
determining a value of a measurement item according to three-dimensional coordinates of points corresponding to at least some pixels in the first region and the second region, wherein the measurement item represents a relative positional relationship between the main body portion and a detection area of the portion to be measured.
2. The method of detecting a target object according to claim 1, wherein the measurement item includes a relative distance between a first area of the main body portion and an end of the portion to be measured and/or a relative angle between the first area and a second area of the portion to be measured, and the at least some pixels include first pixels and/or second pixels,
the determining the value of the measurement item includes:
determining a first plane where the first area is located according to three-dimensional coordinates of a first point corresponding to each pixel in the first region;
for the case that the measurement item includes the relative distance, determining the relative distance according to a distance between the first plane and a third point corresponding to each first pixel in the second region;
and for the case that the measurement item includes the relative angle, determining a second plane where the second area is located according to the three-dimensional coordinates of a second point corresponding to each second pixel in the second region, and taking the included angle between the first plane and the second plane as the relative angle.
3. The method of detecting a target object according to claim 2, wherein the determining the first region in the depth image includes:
performing target segmentation on the depth image to obtain a target region in the depth image, wherein the target region is a region where the target object is located;
performing edge extraction on the target region to determine a first position of an edge of the main body portion in the depth image; and
determining the first region according to the first position.
4. The method of detecting a target object according to claim 3, wherein the performing edge extraction on the target region includes:
determining edge pixels in each direction starting from the image center of the depth image, wherein the gradient of the pixel value of each determined edge pixel relative to the pixel values of its neighboring pixels is greater than a gradient threshold; and
determining a fitted edge line representing the edge of the first region according to the positions of the determined edge pixels, and taking the position of the fitted edge line as the first position.
5. The method of claim 1, wherein the determining the second region in the depth image comprises:
determining a reference position of the portion to be measured according to a preset positional relationship between the portion to be measured and the main body portion and the position of a first edge of the first region, wherein the first edge is the edge where the connection position of the portion to be measured and the main body portion is located, and the reference position is the position of the portion to be measured in the depth image;
determining a region of interest according to the reference position and a preset size of the portion to be measured, wherein the region of interest includes the reference position, and the size of the region of interest is greater than or equal to the preset size; and
determining a region formed by a plurality of fourth pixels in the region of interest as the second region, wherein the fourth pixels are pixels in the region of interest whose pixel values are greater than a pixel threshold.
6. The method of detecting a target object according to claim 5, wherein the determining the reference position of the portion to be measured includes:
determining a simulated position of the portion to be measured according to the preset positional relationship between the portion to be measured and the main body portion and the position of the first edge of the first region;
determining a position difference between the portion to be measured and the simulated position; and
determining the reference position according to the position difference and the simulated position.
7. The method of detecting a target object according to claim 6, wherein the target object includes a plurality of first portions to be measured, the first portions to be measured are connected to the first edge and are uniformly arranged along an extending direction of the first edge, and the determining the simulated position of the portion to be measured includes:
calculating a first distance between the simulated position of each first portion to be measured and one end of the first edge; and
determining the simulated position of each first portion to be measured according to the position of the one end of the first edge and the first distance;
wherein the first distance is calculated using the formula:
P_i = (W - (n - 1) * w) / 2 + (i - 1) * w
wherein P_i represents the first distance between the simulated position of the i-th first portion to be measured and the one end of the first edge, W represents the length of the first edge, n represents the number of the first portions to be measured, w represents the interval between two adjacent first portions to be measured, and 1 ≤ i ≤ n, n ≥ 2 (a worked numeric example of this formula follows the claims).
8. The method of detecting a target object according to claim 5, wherein the first edge is a straight line and the portion to be measured is perpendicular to the first edge,
the method further comprising:
calculating a second included angle between the first edge and the image width direction; and
rotating the depth image counterclockwise by the second included angle such that the first edge is parallel to the image width direction.
9. The method of detecting a target object according to any one of claims 1 to 8, wherein the determining a depth image with respect to the target object includes:
generating the depth image about the target object according to the three-dimensional coordinates of each point in a target point cloud, a first length and a second length of the target point cloud in a first dimension and a second dimension respectively, and resolutions of the target point cloud in the three dimension directions;
wherein the three dimensions include the first dimension, the second dimension, and a third dimension; the image length and the image width of the depth image are equal to the first length and the second length, respectively; the first position and the second position of each first pixel in the depth image are equal to the coordinate components, in the first dimension direction and the second dimension direction respectively, of the point of the target point cloud corresponding to that first pixel, the first position and the second position being the position components of the first pixel in the image length direction and the image width direction respectively; and the pixel value of each first pixel in the depth image is positively correlated with the coordinate component, in the third dimension direction, of the point of the target point cloud corresponding to that first pixel.
10. A method of detecting a target object as claimed in claim 9 when dependent on claim 2, the method further comprising:
shooting the target object from a direction perpendicular to the first area by using a three-dimensional camera, so as to obtain the target point cloud.
11. The method of detecting a target object according to any one of claims 1 to 7, further comprising:
determining the at least some pixels in the second region according to information of the measurement item.
12. The method of detecting a target object as claimed in claim 11 when dependent on claim 2, wherein the determining the at least some pixels in the second region comprises:
for the case that the measurement item includes the relative distance, determining pixels meeting a first preset condition in the second region as the first pixels, wherein the distance between each first pixel and each side of the second region is greater than or equal to a first offset distance corresponding to that side; and
for the case that the measurement item includes the relative angle, determining pixels meeting a second preset condition in the second region as the second pixels, wherein the distance between each second pixel and each side of the second region is greater than or equal to a second offset distance corresponding to that side.
13. The method according to any one of claims 1 to 8, wherein the target object is a chip, the main body portion is a chip main body, and the portion to be measured is a pin located around the chip main body.
14. The method of claim 13, wherein the measurement item includes a height of a chip pin and/or an angle of a chip pin, the method further comprising:
determining whether the chip is a qualified product according to the determined height of the chip pins and/or the determined angle of the chip pins.
15. A system for detecting a target object, comprising:
a first determining module, configured to determine a depth image related to the target object according to three-dimensional coordinates of each point in the obtained point cloud of the target object, where the target object includes a main body portion and a portion to be measured, the portion to be measured is connected to an edge of the main body portion and extends outward, and each point in the point cloud corresponds to one pixel in the depth image;
a second determining module, configured to determine a first region in the depth image according to pixel values of respective pixels in the depth image, where each pixel in the first region corresponds to a point in a first point cloud portion, the first point cloud portion being a point cloud portion of the point cloud with respect to the main body portion;
a third determining module, configured to determine a second region in the depth image according to a preset positional relationship between the portion to be measured and the main body portion and according to the first region, where each pixel in the second region corresponds to a point in a second point cloud portion, and the second point cloud portion is the point cloud portion related to the portion to be measured in the point cloud; and
a fourth determining module, configured to determine a value of a measurement item according to three-dimensional coordinates of points corresponding to at least some pixels in the first region and the second region, where the measurement item represents a relative positional relationship between the main body portion and the detection area of the portion to be measured.
16. An electronic device comprising a processor and a memory, wherein the memory has stored therein computer program instructions which, when executed by the processor, are adapted to carry out the method of detecting a target object according to any one of claims 1 to 14.
17. A storage medium having stored thereon program instructions which, when run, perform the method of detecting a target object according to any one of claims 1 to 14.
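As an illustrative note outside the claims, the spacing formula of claim 7 can be checked numerically; the values below are hypothetical and the units arbitrary:

def first_distance(i: int, W: float, n: int, w: float) -> float:
    # P_i = (W - (n - 1) * w) / 2 + (i - 1) * w: distance from one end
    # of the first edge to the simulated position of the i-th pin.
    assert 1 <= i <= n and n >= 2
    return (W - (n - 1) * w) / 2.0 + (i - 1) * w

# Example: edge length W = 100, n = 5 pins, pitch w = 20.
positions = [first_distance(i, 100.0, 5, 20.0) for i in range(1, 6)]
assert positions == [10.0, 30.0, 50.0, 70.0, 90.0]  # centered on the edge

The simulated positions land symmetrically about the midpoint of the first edge (here 50), which is exactly the uniform arrangement the claim describes.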