CN117455984B - Method and device for determining acquisition point of arm-following camera - Google Patents

Method and device for determining acquisition point of arm-following camera

Info

Publication number
CN117455984B
CN117455984B CN202311808181.6A
Authority
CN
China
Prior art keywords
point
dimensional array
vector
arm
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311808181.6A
Other languages
Chinese (zh)
Other versions
CN117455984A (en)
Inventor
胡亘谦
于洋
赵佳南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Original Assignee
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xinrun Fulian Digital Technology Co Ltd filed Critical Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority to CN202311808181.6A priority Critical patent/CN117455984B/en
Publication of CN117455984A publication Critical patent/CN117455984A/en
Application granted granted Critical
Publication of CN117455984B publication Critical patent/CN117455984B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G06T 2207/10021 - Stereoscopic video; Stereoscopic image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30164 - Workpiece; Machine component
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30244 - Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method and a device for determining acquisition points of an arm-following camera. The method comprises: for each point cloud in a preset 3D digital model corresponding to target equipment, traversing the elements with a value of 1 in a two-dimensional array; determining a vector included angle based on the normal vector of the point cloud corresponding to an element with a value of 1 and the attitude vector corresponding to that element, where the attitude vector is the vector formed, in the mechanical arm coordinate system, by the principal point coordinate obtained through intrinsic calibration and the camera optical center origin when the arm-following camera is at the point position; and determining, based on a comparison of the vector included angle with a preset angle, whether the point cloud corresponding to the point cloud serial number has a point position at which the arm-following camera can acquire an image. The application thereby solves the prior-art problem of low acquired image quality caused by an excessively large acquisition angle.

Description

Method and device for determining acquisition point of arm-following camera
Technical Field
The present disclosure relates to the field of determination of acquisition points of an arm-following camera, and in particular, to a method and an apparatus for determining acquisition points of an arm-following camera.
Background
In industrial production, various surface defects appear sporadically owing to limitations of the manufacturing process. A solution that replaces manual visual inspection is gradually being adopted, in which a mechanical arm carries a 2D industrial camera to capture images of the workpiece surface and AI detection is performed; this solution gives stable detection results and higher efficiency. For each workpiece, a unique mechanical arm point position sequence must be planned; in actual operation the mechanical arm moves through the point position sequence in order and performs 2D acquisition and AI recognition at each point position. However, AI recognition places high demands on image quality, and an unsuitable acquisition angle strongly affects the missed-detection rate and the over-detection rate. At present the process of planning the mechanical arm point position sequence for a workpiece (i.e. teaching) is generally completed manually; during manual operation the selected acquisition angle may be unsuitable for the current acquisition area, so the acquired image may be unusable because the included angle is too large. The points shown in fig. 1 represent positions where defects appear; the shooting angle on the left of fig. 1 is too large and the acquired image is certainly poor, while the acquisition angle on the right of fig. 1 is suitable.
There is currently no effective solution to the above-described problems in the related art.
Disclosure of Invention
The application provides a method and a device for determining acquisition points of an arm-following camera, which are used to solve the prior-art problem of low acquired image quality caused by an excessively large acquisition angle.
In a first aspect, the present application provides a method for determining an acquisition point of an arm-following camera, including: for each point cloud in a preset 3D digital model corresponding to target equipment, traversing the elements with a value of 1 in a two-dimensional array, wherein the first dimension of the two-dimensional array is the point cloud serial number in the 3D digital model and the second dimension is the point position sequence, and a value of 1 indicates that a two-dimensional image has been acquired by the arm-following camera at the point position corresponding to the point cloud serial number; determining a vector included angle based on the normal vector of the point cloud corresponding to an element with a value of 1 and the attitude vector corresponding to that element, wherein the attitude vector is the vector formed, in the mechanical arm coordinate system, by the principal point coordinate obtained through intrinsic calibration and the camera optical center origin when the arm-following camera is at the point position; and determining, based on a comparison result of the vector included angle and a preset angle, whether the point cloud corresponding to the point cloud serial number has a point position at which the arm-following camera acquires an image.
In a second aspect, the present application provides a device for determining an acquisition point of an arm-following camera, including: a first processing module, configured to traverse, for each point cloud in a preset 3D digital model corresponding to target equipment, the elements with a value of 1 in a two-dimensional array, wherein the first dimension of the two-dimensional array is the point cloud serial number in the 3D digital model and the second dimension is the point position sequence, and a value of 1 indicates that a two-dimensional image has been acquired by the arm-following camera at the point position corresponding to the point cloud serial number; a second processing module, configured to determine a vector included angle based on the normal vector of the point cloud corresponding to an element with a value of 1 and the attitude vector corresponding to that element, wherein the attitude vector is the vector formed, in the mechanical arm coordinate system, by the principal point coordinate obtained through intrinsic calibration and the camera optical center origin when the arm-following camera is at the point position; and a first determining module, configured to determine, based on a comparison result of the vector included angle and a preset angle, whether the point cloud corresponding to the point cloud serial number has a point position at which the arm-following camera acquires an image.
In a third aspect, the present application provides an electronic device, including: at least one communication interface; at least one bus connected to the at least one communication interface; at least one processor coupled to the at least one bus; and at least one memory coupled to the at least one bus, wherein the processor is configured to perform the method for determining an acquisition point of an arm-following camera of the first aspect of the present application.
In a fourth aspect, the present application further provides a computer storage medium storing computer executable instructions for performing the method for determining an acquisition point of an arm-following camera according to the first aspect of the present application.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantages: by traversing the elements with a value of 1 in the two-dimensional array, determining the included angle between the normal vector of the corresponding point cloud and the attitude vector corresponding to that element, and comparing the included angle with a preset angle, the method determines whether the point cloud corresponding to the point cloud serial number has a point position at which the arm-following camera can acquire an image. A point position with a smaller included angle can thus be selected for image acquisition, which improves the quality of the acquired image and avoids the prior-art problem that, although a corresponding image can be acquired, its quality is low because the acquisition angle is too large.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which the figures of the drawings are not to be taken in a limiting sense, unless otherwise indicated.
FIG. 1 is a schematic diagram of a prior art 2D camera-based image acquisition;
fig. 2 is a flowchart of a method for determining an acquisition point of an arm-following camera according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a 3D digital model according to an embodiment of the present application;
fig. 4 is a schematic diagram of a device for determining an acquisition point of an arm-following camera according to an embodiment of the present application;
fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
The following disclosure provides many different embodiments, or examples, for implementing different structures of the invention. In order to simplify the present disclosure, components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
Fig. 2 is a flowchart of a method for determining an acquisition point of an arm-following camera according to an embodiment of the present application, where, as shown in fig. 2, the method includes the steps of:
step 201, traversing a two-dimensional array with a value of 1 in the two-dimensional array for each point cloud in a preset 3D digital-analog corresponding to the target equipment; the first dimension in the two-dimensional array is a point cloud sequence number in the 3D digital-analog, and the second dimension in the two-dimensional array is a point position sequence; the value of the two-dimensional array is 1, and the two-dimensional image is collected by the arm-following camera at the point position corresponding to the point cloud serial number;
it should be noted that, the arm-following type camera in the embodiment of the present application refers to a 2D industrial camera installed at the tail end of the mechanical arm, and the arm-following type camera and the mechanical arm are converted into a unified coordinate system of the mechanical arm through hand-eye calibration. The target device refers to a device required in an industrial production process in a specific example, and a 3D digital-analog schematic diagram of the target device is shown in fig. 3.
Furthermore, in a specific example of the present application, E[305][4] is an element of the two-dimensional array that represents whether the point with serial number 305 in the 3D digital model point cloud can be mapped into the image pic4 acquired at mechanical arm point position r4. If E[305][4] is 0, the point cannot be mapped, i.e. the 2D image did not capture the point; if E[305][4] is 1, the point was captured. That is, in the embodiments of the present application, an image has been acquired in advance by the arm-following camera at each point position.
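As a minimal illustrative sketch, the two-dimensional array E described above can be represented as follows; the point-cloud size, the number of taught point positions and the variable names are assumptions made here for illustration only:

```python
import numpy as np

# Hypothetical sizes: a point cloud with num_points points and a taught
# sequence of num_poses mechanical-arm point positions.
num_points = 100000   # total points in the 3D digital model point cloud T
num_poses = 20        # taught arm point positions r0..r19

# E[k][j] == 1 means: point k of the point cloud is visible in image pic_j,
# i.e. the 2D image acquired at arm point position r_j captured that point.
E = np.zeros((num_points, num_poses), dtype=np.uint8)

# Example: mark that point 305 was captured in the image taken at r4.
E[305, 4] = 1

# Traversing the elements with value 1 for a given point cloud serial number i:
i = 305
candidate_poses = np.flatnonzero(E[i])   # all point positions that photographed point i
print(candidate_poses)                   # -> [4]
```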
Step 202, determining a vector included angle based on the normal vector of the point cloud corresponding to the element with a value of 1 and the attitude vector corresponding to that element; the attitude vector is the vector formed, in the mechanical arm coordinate system, by the principal point coordinate obtained through intrinsic calibration and the camera optical center origin when the arm-following camera is at the point position;
in a specific example, for each manipulator sequence point location, the normal direction of the current camera is calculated, and since the camera has already made an internal reference calibration in advance, the principal point coordinate obtained by directly passing through the internal reference calibration and the camera optical center origin form a vector, which is generally (0, 1) in the camera coordinate system, but since in the embodiment of the application, the vector is in the coordinate system of the manipulator, the vector also needs to be converted into the manipulator coordinate system by the hand-eye matrix, and when the manipulator is in a different sequence point location, the value of the vector in the manipulator coordinate system is also different.
Step 203, determining, based on a comparison result of the vector included angle and the preset angle, whether the point cloud corresponding to the point cloud serial number has a point position at which the arm-following camera acquires an image.
It can be seen that, through steps 201 to 203, by traversing the elements with a value of 1 in the two-dimensional array, determining the included angle between the normal vector of the corresponding point cloud and the attitude vector corresponding to that element, and comparing it with the preset angle, the embodiments of the present application determine whether the point cloud corresponding to the point cloud serial number has a point position at which the arm-following camera can acquire an image. A point position with a smaller included angle can then be used for acquisition, which improves the quality of the acquired image and avoids the problem that, although a corresponding image can be acquired, its quality is low because the acquisition angle is too large.
In an optional implementation of the embodiments of the present application, for each point cloud in the preset 3D digital model corresponding to the target equipment, traversing the elements with a value of 1 in the two-dimensional array may further include:
step 11, establishing a two-dimensional array based on the preset 3D digital model, wherein the first dimension of the two-dimensional array is the point cloud serial number in the 3D digital model and the second dimension is the point position sequence; when a two-dimensional image is acquired by the arm-following camera at the point position corresponding to the point cloud serial number, setting the value of the corresponding element to 1;
and step 21, traversing the elements with a value of 1 in the two-dimensional array.
For each point cloud in the 3D digital model, and for each point position and its corresponding 2D image, the serial number k of the corresponding three-dimensional point in the 3D digital model can be obtained for each image pixel, and E[k][j] in the two-dimensional array E is set to 1, indicating that the three-dimensional point is captured in the 2D image acquired at that point position; whether this acquisition point position is suitable still needs to be judged later. If the value of an element of the two-dimensional array is 0, the three-dimensional point has no corresponding 2D image at that point position.
In an optional implementation manner of the embodiment of the present application, before traversing the elements with a value of 1 in the two-dimensional array, the method of the embodiment of the present application may further include:
step 301, determining a point position sequence set of the arm-following camera, wherein the point position sequence set comprises a plurality of point positions, and the point positions are acquisition point positions at which the arm-following camera acquires images of the target equipment;
step 302, acquiring a two-dimensional image with the arm-following camera at each point position in the point position sequence set, and storing the acquired two-dimensional images in an image sequence;
step 303, when the arm-following camera is at different point positions in the point position sequence set, acquiring the value of the target vector in the mechanical arm coordinate system and putting the value into a vector set.
For steps 301 to 303, a specific example may be as follows. The mechanical arm carrying the 2D camera is manually taught a mechanical arm point position sequence set r{r0, r1, ..., rn}; each point position is in the form of 6D coordinates (x, y, z, rx, ry, rz) and therefore contains angle information. After teaching is completed and the complete mechanical arm sequence point positions r are obtained, the arm visits each point position once starting from r0; on each arrival the 2D camera is called to acquire an image, giving an image sequence pic{pic0, pic1, ..., picn}, and the corresponding 2D image data acquired at each point position is stored. A vector set L{} is then established. For each mechanical arm sequence point position ri, the normal direction of the current camera acquisition is calculated: since the camera has been intrinsically calibrated in advance, the principal point coordinate obtained from the intrinsic calibration and the camera optical center origin directly form a vector (the target vector), which is generally (0, 0, 1) in the camera coordinate system. Because the problem here is posed in the mechanical arm coordinate system, this vector must also be converted into the mechanical arm coordinate system through the hand-eye matrix; when the mechanical arm is at different sequence point positions ri, the value of the vector in the mechanical arm coordinate system differs. Each value is put in turn into the set L{l0, l1, ..., ln}, whose number of elements equals the number of mechanical arm point positions.
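Under the same assumptions as before (taught 6D poses, a known hand-eye matrix, and the optical_axis_in_base helper sketched earlier), the vector set L could be built as follows; the function name is hypothetical:

```python
import numpy as np

def build_pose_vector_set(r, T_flange_cam):
    """Return L: one optical-axis direction (in the arm base frame) per taught point position.

    r            : iterable of taught point positions, each (x, y, z, rx, ry, rz).
    T_flange_cam : 4x4 hand-eye matrix (camera frame -> flange frame).
    """
    L = []
    for pose in r:
        l_i = optical_axis_in_base(T_flange_cam, pose)   # helper from the earlier sketch
        L.append(l_i / np.linalg.norm(l_i))              # keep unit length for the angle test
    return np.asarray(L)                                 # shape: (number of poses, 3)
```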
In an optional implementation manner of this embodiment of the present application, determining the vector included angle in step 202 based on the normal vector of the point cloud corresponding to the element with a value of 1 and the attitude vector corresponding to that element may further include:
step 21, determining the point cloud serial number of the two-dimensional image in the preset 3D digital model based on the point positions in the point position sequence set and the two-dimensional images in the image sequence;
step 22, determining the normal vector corresponding to each point cloud in the preset 3D digital model, and storing the normal vectors in a point set;
and step 23, determining the vector included angle based on the cosine value between the normal vector of the point cloud and the corresponding vector in the vector set.
For steps 21 to 23, a specific example may be as follows. First an empty set N{} is established, and then the normal vector of each point in the point cloud T is calculated; an overall normal vector is fitted within a certain radius around the point and taken as the normal vector of that point, and each normal vector obtained is stored into the point set N in the corresponding order. For each point in the 3D digital model point cloud T, the dynamic two-dimensional array E[i][] is first traversed; whenever E[i][j] is 1, the included angle θ between the normal vector N[i] of the point and the attitude vector L[j] is calculated from their cosine. For θ it must be judged whether it is smaller than a maximum acquisition angle α preset by the user; α can be chosen according to the scene and experience. In some cases, besides judging whether θ is smaller than α, it is also judged whether θ is larger than a specified angle, because for strongly reflective objects a perpendicular shot is not necessarily good. Typically α is 22.5°.
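The angle test described above can be sketched as follows, assuming the array E, the normal set N and the vector set L introduced in the earlier sketches, unit-length vectors, and a consistent orientation convention for the normals; the function name and the optional lower bound are illustrative:

```python
import numpy as np

def has_suitable_view(i, E, N, L, alpha_deg=22.5, min_deg=None):
    """Return (True, j) if some taught pose j that photographed point i gives an
    included angle theta between normal N[i] and pose vector L[j] below alpha."""
    n = N[i] / np.linalg.norm(N[i])
    for j in np.flatnonzero(E[i]):                      # only poses that photographed point i
        cos_theta = np.clip(np.dot(n, L[j]), -1.0, 1.0)
        theta = np.degrees(np.arccos(cos_theta))        # vector included angle in degrees
        ok = theta < alpha_deg
        if min_deg is not None:                         # optional lower bound for reflective parts
            ok = ok and theta > min_deg
        if ok:
            return True, j
    return False, None
```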
In an optional implementation manner of this embodiment of the present application, determining in step 203, based on the comparison result of the vector included angle and the preset angle, whether the point cloud corresponding to the point cloud serial number is an acquisition point of the arm-following camera may further include: determining, when the vector included angle is smaller than the preset angle, that the point cloud corresponding to the vector included angle has a corresponding point position in the point position sequence set.
It can be seen that in the embodiments of the present application, although an image may be obtained at a point position corresponding to a point cloud in the 3D digital model, the image quality may be poor because of the shooting angle. Even if an image can be obtained at that position, it is therefore not necessarily a good position, and another point position must be found from which the defect can be captured at a more suitable shooting angle, thereby improving the shooting quality of the image.
In addition, in the embodiments of the present application, if the vector included angle is greater than or equal to the preset angle, the image shot at that point position is of poor quality and the defective part may not actually be detected, so a suitable position must be sought among other point positions; that is, it is determined whether the point cloud in the preset 3D digital model corresponding to the next element with a value of 1 in the two-dimensional array has a corresponding point position in the point position sequence set.
The following explains the present application in conjunction with a specific implementation manner of an embodiment of the present application, which provides a method for intelligently determining whether the acquisition attitude sequence of an arm-following camera meets the standard. The steps of the method include:
step 31, obtaining a standard 3D digital model T of the workpiece to be detected;
step 32, manually teaching the mechanical arm carrying the 2D camera a mechanical arm point position sequence set r{r0, r1, ..., rn}, where each point position is in the form of 6D coordinates (x, y, z, rx, ry, rz) and contains angle information;
step 33, after teaching is completed and the complete mechanical arm sequence point positions r are obtained, visiting each point position once starting from r0, calling the 2D camera to acquire an image on each arrival to obtain an image sequence pic{pic0, pic1, ..., picn}, and storing the corresponding 2D image data acquired at each point position.
Step 34, establishing a vector set L{}; for each mechanical arm sequence point position ri, calculating the normal direction of the current camera acquisition: the camera has been intrinsically calibrated in advance, so the principal point coordinate obtained from the intrinsic calibration and the camera optical center origin directly form a vector, which is converted into the mechanical arm coordinate system through the hand-eye matrix; when the mechanical arm is at different sequence point positions ri the value of the vector differs, and each value is put in turn into the set L{l0, l1, ..., ln}, whose number of elements equals the number of mechanical arm point positions.
Step 35, for the 3D digital model T, establishing a dynamic two-dimensional array E[][] with all values 0; the first dimension is the total number of points of T and the second dimension is the number of mechanical arm point positions in the sequence. E[305][4] represents whether the point with serial number 305 in the digital model point cloud can be mapped into the image pic4 acquired at mechanical arm sequence point position r4; 0 means it cannot be mapped, i.e. the 2D image did not capture the point, and 1 means it was captured.
Step 36, since the 3D digital model T of the workpiece is available, the mechanical arm point positions are known, and hand-eye calibration has been performed, the region of the object surface actually captured by each 2D image can be obtained.
Step 37, in the 3D digital model point cloud T, based on the method of mapping each 2D image to its corresponding 3D digital model region, for each point position and its corresponding 2D image the serial number k of the three-dimensional point in the 3D digital model T corresponding to each image pixel can be obtained; E[k][j] in the two-dimensional array E is then set to 1, indicating that the three-dimensional point is captured in the 2D image acquired at that point position, and whether this acquisition point position is suitable still needs to be judged later.
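The mapping of step 37 can be sketched as follows, under simplifying assumptions: a pinhole projection with a known intrinsic matrix K (zero skew), one base-to-camera transform per taught point position composed from the arm pose and the hand-eye matrix, and no occlusion handling; all names are hypothetical:

```python
import numpy as np

def fill_visibility(E, points, K, cam_poses, img_w, img_h):
    """Set E[k][j] = 1 when model point k projects inside image j (occlusion ignored).

    points    : (num_points, 3) point cloud T in the mechanical arm base frame.
    K         : 3x3 intrinsic matrix from the intrinsic calibration.
    cam_poses : one 4x4 base->camera transform per taught point position.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coordinates
    for j, T_cam_base in enumerate(cam_poses):
        cam_pts = (T_cam_base @ pts_h.T)[:3]                  # points in the camera frame
        z = cam_pts[2]
        in_front = z > 1e-6                                   # keep only points before the camera
        z_safe = np.where(in_front, z, 1.0)                   # avoid division by zero
        u = K[0, 0] * cam_pts[0] / z_safe + K[0, 2]
        v = K[1, 1] * cam_pts[1] / z_safe + K[1, 2]
        visible = in_front & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
        E[np.flatnonzero(visible), j] = 1
    return E
```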
Step 38, the three-dimensional point information contained in each image acquired at each mechanical arm sequence point position has now been recorded in E, and screening begins to determine whether a suitable 2D acquisition attitude exists for each point and its surrounding area.
Step 39, first establishing an empty set N{} and then computing the normal vector of each point in the point cloud T: an overall normal vector is typically fitted within a certain radius around the point and taken as the normal vector of that point, and each computed normal vector is stored into the point set N in the corresponding order.
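The radius-based normal fitting of step 39 can be sketched as follows; the radius value and the plane fit via SVD are illustrative choices, not prescribed by the method:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, radius=5.0):
    """Fit a plane to each point's radius neighborhood and return unit normals (the set N)."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:                       # not enough neighbors to fit a plane
            normals[i] = (0.0, 0.0, 1.0)
            continue
        nbrs = points[idx] - points[idx].mean(axis=0)
        # The plane normal is the direction of smallest spread of the neighborhood.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    return normals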
Step 40, for each point in the digital model point cloud T, first traversing the dynamic two-dimensional array E[i][]; whenever E[i][j] is 1, calculating the cosine between the normal vector N[i] of the point and the attitude vector L[j] to obtain the vector included angle θ, and judging whether θ is smaller than the maximum acquisition angle α preset by the user;
if the condition is not satisfied, step 41, go on to traverse E [ i ], if the result is not proper after the traversing, this indicates that the point lacks a proper acquisition angle, and the user can be prompted on software or otherwise to prompt the user that the point does not have a proper acquisition angle, so as to guide the user to acquire again.
It should be noted that after the user acquires again, information such as the new acquisition point positions must be updated immediately, including the set r and the set L, because the new acquisition attitude may provide a suitable acquisition angle for additional three-dimensional points t that previously lacked one; this avoids the user repeating acquisition attitudes extremely similar to the newly added one.
Step 42, if the condition is satisfied, a suitable acquisition point position exists among the acquisition sequence point positions, and the above steps are repeated starting from the next acquisition point.
Step 43, after the above steps are completed, whether a suitable acquisition attitude exists has been checked for every point of the 3D digital model T, and the mechanical arm acquisition point position sequence r has been updated; this sequence r is the final suitable fully automatic acquisition trajectory sequence.
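Putting the above steps together, a minimal sketch of the screening loop of steps 39 to 43 might look as follows, assuming the array E, the normal set N, the vector set L and the has_suitable_view helper from the earlier sketches; the reporting of unsuitable points is an illustrative placeholder:

```python
def screen_acquisition_poses(E, N, L, alpha_deg=22.5):
    """Return indices of model points that still lack a suitable acquisition pose."""
    points_without_view = []
    for i in range(E.shape[0]):
        ok, _ = has_suitable_view(i, E, N, L, alpha_deg=alpha_deg)
        if not ok:
            points_without_view.append(i)
    # The caller can prompt the user to teach extra poses for these points and
    # then update r, L and E before re-running the screening.
    return points_without_view
```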
Corresponding to fig. 2, the embodiment of the present application further provides a device for determining an acquisition point of an arm-following camera, as shown in fig. 4, where the device includes:
a first processing module 402, configured to traverse, for each point cloud in a preset 3D digital model corresponding to the target equipment, the elements with a value of 1 in a two-dimensional array; the first dimension of the two-dimensional array is the point cloud serial number in the 3D digital model, and the second dimension is the point position sequence; a value of 1 indicates that a two-dimensional image has been acquired by the arm-following camera at the point position corresponding to the point cloud serial number;
a second processing module 404, configured to determine a vector included angle based on the normal vector of the point cloud corresponding to the element with a value of 1 and the attitude vector corresponding to that element; the attitude vector is the vector formed, in the mechanical arm coordinate system, by the principal point coordinate obtained through intrinsic calibration and the camera optical center origin when the arm-following camera is at the point position;
a first determining module 406, configured to determine, based on a comparison result of the vector included angle and a preset angle, whether the point cloud corresponding to the point cloud serial number has a point position at which the arm-following camera acquires an image.
In an alternative implementation manner of the embodiment of the present application, the first processing module 402 may further include: an establishing unit, used for establishing a two-dimensional array based on the preset 3D digital model, wherein the first dimension of the two-dimensional array is the point cloud serial number in the 3D digital model and the second dimension is the point position sequence, and for setting the value of the corresponding element to 1 when a two-dimensional image is acquired by the arm-following camera at the point position corresponding to the point cloud serial number; and a traversing unit, used for traversing the elements with a value of 1 in the two-dimensional array.
In an optional implementation manner of the embodiment of the present application, before traversing the elements with a value of 1 in the two-dimensional array, the apparatus of the embodiment of the present application may further include: a second determining module, used for determining a point position sequence set of the arm-following camera, wherein the point position sequence set includes a plurality of point positions, and the point positions are acquisition point positions at which the arm-following camera acquires images of the target equipment; a third processing module, used for acquiring a two-dimensional image with the arm-following camera at each point position in the point position sequence set and storing the acquired two-dimensional images in an image sequence; and a fourth processing module, used for acquiring the value of the target vector in the mechanical arm coordinate system when the arm-following camera is at different point positions in the point position sequence set, and putting the value into a vector set.
In an alternative implementation of the embodiment of the present application, the second processing module 404 may further include: a first determining unit, used for determining the point cloud serial number of the two-dimensional image in the preset 3D digital model based on the point positions in the point position sequence set and the two-dimensional images in the image sequence; a second determining unit, used for determining the normal vector corresponding to each point cloud in the preset 3D digital model and storing the normal vectors in a point set; and a third determining unit, used for determining the vector included angle based on the cosine value between the normal vector of the point cloud and the corresponding vector in the vector set.
In an optional implementation manner of the embodiment of the present application, the first determining module may further include: a fourth determining unit, used for determining, when the vector included angle is smaller than the preset angle, that the point cloud corresponding to the vector included angle has a corresponding point position in the point position sequence set.
In an alternative implementation manner of the embodiment of the present application, the apparatus may further include: a fifth processing module, used for determining, when the vector included angle is greater than or equal to the preset angle, whether the point cloud in the preset 3D digital model corresponding to the next element with a value of 1 in the two-dimensional array has a corresponding point position in the point position sequence set.
As shown in fig. 5, the embodiment of the present application provides an electronic device, which includes a processor 511, a communication interface 512, a memory 513, and a communication bus 514, wherein the processor 511, the communication interface 512, and the memory 513 perform communication with each other through the communication bus 514,
a memory 513 for storing a computer program;
in one embodiment of the present application, the processor 511 is configured to implement the method for determining the acquisition point of the arm-following camera according to any one of the foregoing method embodiments when executing the program stored in the memory 513, and the function of the method is similar, and will not be described herein.
The present application further provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for determining an acquisition point of an arm-following camera provided in any one of the method embodiments described above.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Based on such understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the related art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the method described in the respective embodiments or some parts of the embodiments.
It is to be understood that the terminology used herein is for the purpose of describing particular example embodiments only, and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," "includes," "including," and "having" are inclusive and therefore specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order described or illustrated, unless an order of performance is explicitly stated. It should also be appreciated that additional or alternative steps may be used.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. The method for determining the acquisition point of the arm-following camera is characterized by comprising the following steps of:
traversing a two-dimensional array element with a value of 1 in the two-dimensional array for each point cloud in a preset 3D digital model corresponding to target equipment; the first dimension in the two-dimensional array is a point cloud serial number in the 3D digital model, and the second dimension in the two-dimensional array is a point position sequence; a value of 1 for the two-dimensional array element represents that a two-dimensional image is acquired by the arm-following camera at a point position corresponding to the point cloud serial number;
determining a vector included angle based on a normal vector of a point cloud corresponding to a two-dimensional array element with a value of 1 and an attitude vector corresponding to the two-dimensional array element with the value of 1; the attitude vector is the vector, in the mechanical arm coordinate system, formed by a principal point coordinate obtained by intrinsic calibration and the camera optical center origin when the arm-following camera is at the point position;
determining, based on a comparison result of the vector included angle and a preset angle, whether the point cloud corresponding to the point cloud serial number has a point position at which the arm-following camera acquires an image;
for each point cloud in the preset 3D digital model corresponding to the target equipment, traversing the two-dimensional array element with the value of 1 in the two-dimensional array comprises: establishing the two-dimensional array based on the preset 3D digital model, wherein the first dimension in the two-dimensional array is the point cloud serial number in the 3D digital model, and the second dimension is the point position sequence; setting the value of the corresponding two-dimensional array element to 1 when the two-dimensional image is acquired by the arm-following camera at the point position corresponding to the point cloud serial number; and traversing the two-dimensional array element with the value of 1 in the two-dimensional array.
2. The method of claim 1, wherein prior to traversing the two-dimensional array elements having a value of 1 in the two-dimensional array, the method further comprises:
determining a point location sequence set of the arm-following camera, wherein the point location sequence set comprises a plurality of point locations, and the point locations are acquisition point locations corresponding to the image of the target equipment acquired by the arm-following camera;
acquiring a two-dimensional image with the arm-following camera at each point position in the point position sequence set, and storing the acquired two-dimensional images in an image sequence;
and when the arm-following camera is at different point positions in the point position sequence set, acquiring a value of a target vector in the mechanical arm coordinate system and putting the value into a vector set, wherein the target vector is the vector formed by the principal point coordinate obtained through intrinsic calibration and the camera optical center origin when the arm-following camera is at the point position.
3. The method of claim 2, wherein determining a vector angle based on a normal vector of a point cloud corresponding to a 1 valued two-dimensional array element and a pose vector corresponding to the 1 valued two-dimensional array element comprises:
determining a point cloud serial number of the two-dimensional image in the preset 3D digital model based on the point positions in the point position sequence set and the two-dimensional images in the image sequence;
determining a normal vector corresponding to each point cloud in the preset 3D digital model, and storing the normal vector into a point set;
and determining a vector included angle based on a cosine value between a normal vector of the point cloud and a corresponding vector in the vector set.
4. The method of claim 3, wherein determining, based on the comparison result of the vector included angle and the preset angle, whether the point cloud corresponding to the point cloud serial number is the acquisition point of the arm-following camera comprises:
and under the condition that the vector included angle is smaller than a preset angle, determining that a point cloud corresponding to the vector included angle has a corresponding point in the point sequence set.
5. The method according to claim 4, wherein the method further comprises:
and under the condition that the vector included angle is larger than or equal to the preset angle, determining whether a point cloud in a preset 3D digital model corresponding to a next two-dimensional array element with a value of 1 in the two-dimensional array has a corresponding point in the point sequence set.
6. A device for determining an acquisition point of an arm-following camera, comprising:
the first processing module is used for traversing, for each point cloud in the preset 3D digital model corresponding to the target equipment, the two-dimensional array element with the value of 1 in the two-dimensional array; the first dimension in the two-dimensional array is a point cloud serial number in the 3D digital model, and the second dimension in the two-dimensional array is a point position sequence; a value of 1 for the two-dimensional array element represents that a two-dimensional image is acquired by the arm-following camera at a point position corresponding to the point cloud serial number;
the second processing module is used for determining a vector included angle based on a normal vector of the point cloud corresponding to the two-dimensional array element with the value of 1 and an attitude vector corresponding to the two-dimensional array element with the value of 1; the attitude vector is the vector, in the mechanical arm coordinate system, formed by a principal point coordinate obtained by intrinsic calibration and the camera optical center origin when the arm-following camera is at the point position;
the first determining module is used for determining, based on a comparison result of the vector included angle and a preset angle, whether the point cloud corresponding to the point cloud serial number has a point position at which the arm-following camera acquires an image;
the first processing module includes: an establishing unit, used for establishing the two-dimensional array based on the preset 3D digital model, wherein the first dimension in the two-dimensional array is the point cloud serial number in the 3D digital model, and the second dimension is the point position sequence, and for setting the value of the corresponding two-dimensional array element to 1 when the two-dimensional image is acquired by the arm-following camera at the point position corresponding to the point cloud serial number; and a traversing unit, used for traversing the two-dimensional array element with the value of 1 in the two-dimensional array.
7. An electronic device, comprising: at least one communication interface; at least one bus connected to the at least one communication interface; at least one processor coupled to the at least one bus; at least one memory connected to the at least one bus, wherein the processor is configured to perform the method of determining an acquisition point of an arm-following camera of any one of the preceding claims 1 to 5.
8. A computer storage medium storing computer executable instructions for performing the method of determining an acquisition point of an arm-following camera according to any one of claims 1 to 5.
CN202311808181.6A 2023-12-26 2023-12-26 Method and device for determining acquisition point of arm-following camera Active CN117455984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311808181.6A CN117455984B (en) 2023-12-26 2023-12-26 Method and device for determining acquisition point of arm-following camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311808181.6A CN117455984B (en) 2023-12-26 2023-12-26 Method and device for determining acquisition point of arm-following camera

Publications (2)

Publication Number Publication Date
CN117455984A CN117455984A (en) 2024-01-26
CN117455984B (en) 2024-03-26

Family

ID=89589683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311808181.6A Active CN117455984B (en) 2023-12-26 2023-12-26 Method and device for determining acquisition point of arm-following camera

Country Status (1)

Country Link
CN (1) CN117455984B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107218930A (en) * 2017-05-05 2017-09-29 山东大学 The sextuple position and attitude active measuring method of space circle based on monocular hand-eye system
CN114660579A (en) * 2022-03-17 2022-06-24 深圳市千乘机器人有限公司 Full-automatic laser radar and camera calibration method
CN115273071A (en) * 2022-08-12 2022-11-01 上海节卡机器人科技有限公司 Object identification method and device, electronic equipment and storage medium
CN116766194A (en) * 2023-07-07 2023-09-19 宝鸡文理学院 Binocular vision-based disc workpiece positioning and grabbing system and method
CN116958146A (en) * 2023-09-20 2023-10-27 深圳市信润富联数字科技有限公司 Acquisition method and device of 3D point cloud and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230044371A1 (en) * 2021-08-06 2023-02-09 Faro Technologies, Inc. Defect detection in a point cloud

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107218930A (en) * 2017-05-05 2017-09-29 山东大学 The sextuple position and attitude active measuring method of space circle based on monocular hand-eye system
CN114660579A (en) * 2022-03-17 2022-06-24 深圳市千乘机器人有限公司 Full-automatic laser radar and camera calibration method
CN115273071A (en) * 2022-08-12 2022-11-01 上海节卡机器人科技有限公司 Object identification method and device, electronic equipment and storage medium
CN116766194A (en) * 2023-07-07 2023-09-19 宝鸡文理学院 Binocular vision-based disc workpiece positioning and grabbing system and method
CN116958146A (en) * 2023-09-20 2023-10-27 深圳市信润富联数字科技有限公司 Acquisition method and device of 3D point cloud and electronic device

Also Published As

Publication number Publication date
CN117455984A (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN110555889B (en) CALTag and point cloud information-based depth camera hand-eye calibration method
KR100914211B1 (en) Distorted image correction apparatus and method
CN112907676A (en) Calibration method, device and system of sensor, vehicle, equipment and storage medium
CN112949478B (en) Target detection method based on tripod head camera
CN111627072A (en) Method and device for calibrating multiple sensors and storage medium
JP2022515225A (en) Sensor calibration methods and equipment, storage media, calibration systems and program products
CN110009687A (en) Color three dimension imaging system and its scaling method based on three cameras
CN112907675B (en) Calibration method, device, system, equipment and storage medium of image acquisition equipment
CN109443200B (en) Mapping method and device for global visual coordinate system and mechanical arm coordinate system
CN109658497B (en) Three-dimensional model reconstruction method and device
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN112381847A (en) Pipeline end head space pose measuring method and system
CN112686950A (en) Pose estimation method and device, terminal equipment and computer readable storage medium
CN112929626A (en) Three-dimensional information extraction method based on smartphone image
CN116958146A (en) Acquisition method and device of 3D point cloud and electronic device
CN114140429A (en) Real-time parking space detection method and device for vehicle end
CN110706288A (en) Target detection method, device, equipment and readable storage medium
JPH1079029A (en) Stereoscopic information detecting method and device therefor
CN112446926B (en) Relative position calibration method and device for laser radar and multi-eye fish-eye camera
CN111476062A (en) Lane line detection method and device, electronic equipment and driving system
CN111724432B (en) Object three-dimensional detection method and device
CN117455984B (en) Method and device for determining acquisition point of arm-following camera
CN115880424A (en) Three-dimensional reconstruction method and device, electronic equipment and machine-readable storage medium
CN116524109A (en) WebGL-based three-dimensional bridge visualization method and related equipment
CN115375762A (en) Three-dimensional reconstruction method for power line based on trinocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant