CN116668814A - Rotation correction photographing method, device and medium for environmental test - Google Patents

Rotation correction photographing method, device and medium for environmental test

Info

Publication number
CN116668814A
Authority
CN
China
Prior art keywords
sample
rectangular frame
shooting
mechanical arm
shot
Prior art date
Legal status
Pending
Application number
CN202310645194.XA
Other languages
Chinese (zh)
Inventor
杨小奎
吴护林
苏晓杰
吴�灿
陈康
孙少欣
敖文刚
马铁东
黄伦
Current Assignee
Chongqing University
Chongqing Technology and Business University
Southwest Institute of Technology and Engineering of China South Industries Group
Original Assignee
Chongqing University
Chongqing Technology and Business University
Southwest Institute of Technology and Engineering of China South Industries Group
Priority date
Filing date
Publication date
Application filed by Chongqing University, Chongqing Technology and Business University, Southwest Institute of Technology and Engineering of China South Industries Group filed Critical Chongqing University
Priority to CN202310645194.XA
Publication of CN116668814A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Testing Resistance To Weather, Investigating Materials By Mechanical Methods (AREA)

Abstract

The application relates to the technical field of environmental tests, and in particular to a rotation correction photographing method, device and medium for environmental tests. In a long-term environmental test, whenever the loaded samples need to be photographed, a mechanical arm is automatically controlled to move to a preset position and take a positioning photo; by analyzing the positioning photo, the new position and deflection angle of any loaded sample that has shifted or deflected can be determined. Each loaded sample is then photographed precisely according to its new position and deflection angle, so that no sample is missed and every sample appears, without deflection, at the center of its photo, significantly improving the shooting efficiency and quality of the environmental test. The application predicts the object class through the single-stage rotating-object detection network S²A-Net and frames the object position with a rotating frame; a binocular depth camera acquires an object depth map, from which three-dimensional coordinates are calculated; and the posture of the sample relative to the mechanical arm base coordinate system is obtained from the rotating rectangular frame.

Description

Rotation correction photographing method, device and medium for environmental test
Technical Field
The application relates to the technical field of environmental tests, in particular to a rotation correction photographing method, a rotation correction photographing device and a rotation correction photographing medium for an environmental test.
Background
In a natural environment test, the sample to be tested is fixed on a test rack and exposed to the natural environment; by observing and comparing the sample over days, months or years, information on how it changes with time is obtained.
At present, natural atmospheric environment tests in China rely mainly on manual inspection and periodic sampling. Periodic sampling generally uses manual photographing, and industrial products must be taken off the test rack to be photographed individually.
The defects are: 1. manual inspection and photographing are inefficient, each sample must be photographed separately, and samples can be missed or photographed by mistake; 2. photo quality depends on the photographer's skill, so the sample's image is prone to offset, tilt and similar errors that require further processing later.
Disclosure of Invention
The application discloses a rotation correction photographing method, a rotation correction photographing device and a rotation correction photographing medium for an environment test.
In order to achieve the above object, in one aspect, the present application provides a rotation correction photographing method for environmental tests, the method comprising:
1) starting an environmental test, and completing the loading of a plurality of samples;
2) acquiring sample position information for each sample to be photographed;
3) moving the mechanical arm carrying the camera to a preset position and taking a positioning photo of all the loaded samples in a preset posture;
4) calculating, from the positioning photo and the sample position information, the movement trajectory along which the mechanical arm photographs all the samples to be photographed, and the shooting posture of each sample relative to the end of the mechanical arm;
5) the mechanical arm photographing each sample according to the calculated movement trajectory and shooting postures, then returning to its initialization position;
6) judging whether a new shooting task exists, and if so, returning to step 2).
The advantage of this embodiment is that, in a long-term environmental test, whenever the loaded samples need to be photographed, the mechanical arm is automatically controlled to move to a preset position and take a positioning photo; by analyzing the positioning photo, the new position and deflection angle of any loaded sample that has shifted or deflected can be determined. Each sample is then photographed precisely according to its new position and deflection angle, so that no sample is missed and every sample appears, without deflection, at the center of its photo, significantly improving the shooting efficiency and quality of the environmental test.
Further, the specific method for acquiring the sample position information of each sample to be photographed in step 2) is as follows:
numbering each loaded sample, and generating a retrieval table that maps each sample's number to its row and column position;
inputting or receiving the numbers of the samples to be photographed, and obtaining the position of each such sample from the retrieval table.
Specifically, the preset position in step 3) is obtained as follows:
a first coordinate system is established with the center of the mechanical arm base as its origin; the loading-area information of the loaded samples is acquired, and the minimum rectangular frame enclosing the loading area is generated from it; the first center coordinate of this minimum rectangular frame is marked in the first coordinate system; the lowest height from which the complete minimum rectangular frame can be photographed is calculated from the shooting wide angle of the camera on the mechanical arm; and the preset position is formed from the first center coordinate and that lowest height.
Specifically, the preset posture in step 3) is:
the angle to which the mechanical arm must rotate so that, when the positioning photo is taken, each shooting boundary is parallel to the corresponding boundary of the minimum rectangular frame.
Further, the specific method in step 4) for calculating the movement trajectory along which the mechanical arm photographs all the samples and the shooting posture of each sample is as follows:
4-1) inputting the positioning photo into the single-stage rotating-object detection network S²A-Net, predicting the class of every loaded sample, and generating a rotating rectangular frame for each one;
4-2) constructing a second coordinate system with the camera's position when it took the positioning photo as the origin; acquiring, at that moment, a depth image of each loaded sample through the binocular depth camera, and calculating the second coordinate of each rotating rectangular frame in the second coordinate system;
4-3) combining the preset position at which the camera took the positioning photo with the second coordinates to calculate the first coordinate information of each rotating rectangular frame in the first coordinate system and its posture information relative to the end of the mechanical arm;
4-4) publishing the first coordinate information and the posture information of the rotating rectangular frames through a ROS topic.
The advantage of this embodiment is that the single-stage rotating-object detection network generates a rectangular frame for each loaded sample, so the sample's class can be identified while samples of different shapes are handled uniformly, which simplifies the coordinate and deflection-posture calculations; converting the loaded samples' second-coordinate values into first-coordinate values allows the mechanical arm's displacement trajectory to be computed accurately; and using the rotation angle of the rotating rectangular frame as the shooting posture information ensures that every loaded sample appears without tilt in its photo.
Specifically, the specific method of step 4-1) is as follows:
4-1-1) the S²A-Net network performs convolution and pooling operations on the positioning photo to generate a deep feature map;
4-1-2) the deep feature map is sent into the Feature Alignment Module (FAM) for feature alignment and anchor generation;
the FAM first refines the initial anchors with the Anchor Refinement Network, then applies the Alignment Convolution layer to the feature map to make it rotation-invariant;
4-1-3) the FAM sends the aligned feature maps to the Oriented Detection Module (ODM) for target detection; the ODM encodes the target's orientation information with an Active Rotating Filter (ARF) and uses it in the classification and regression operations to detect the industrial products in the image; the ARF is a k × k × N filter that actively rotates N − 1 times during convolution, generating a feature map with N orientation channels;
4-1-4) the rotating rectangular frame of each loaded sample is drawn on the original image according to the target detection result output by the ODM.
Specifically, the specific method of step 4-2) is as follows:
4-2-1) the S²A-Net network predicts the pixel coordinates of the four vertices of the rotating rectangular frame, D_1(x_1, y_1), D_2(x_2, y_2), D_3(x_3, y_3), D_4(x_4, y_4); the center pixel (i, j) of the rotating rectangular frame is their mean, and its depth value is D(i, j):
(i, j) = ((x_1 + x_2 + x_3 + x_4) / 4, (y_1 + y_2 + y_3 + y_4) / 4)
4-2-2) combining the center pixel of the rotating rectangular frame with the camera intrinsics (f_x, f_y, u_0, v_0) gives the center point's three-dimensional coordinates (X_c, Y_c, Z_c) relative to the camera via the pinhole model:
X_c = (i − u_0) · D(i, j) / f_x,  Y_c = (j − v_0) · D(i, j) / f_y,  Z_c = D(i, j)
where f_x is the physical focal length f expressed in pixel units along the x direction, f_y is f expressed in pixel units along the y direction, u_0 is half the image width in pixels, and v_0 is half the image height in pixels.
Specifically, the specific method of step 4-3) is as follows:
4-3-1) letting the camera extrinsics be the rotation matrix R and the translation vector T, the coordinates (X_w, Y_w, Z_w) of the rotating rectangular frame's center point in the second coordinate system follow from the conversion formula:
[X_w, Y_w, Z_w]^T = R · [X_c, Y_c, Z_c]^T + T
4-3-2) for the camera to shoot perpendicular to the test rack, the posture of the end of the mechanical arm relative to the first coordinate system is expressed as:
(x_es, y_es, z_es, w_es) = (sin(β/2), 0, 0, cos(β/2))
where β ∈ (−π, π] is the rotation angle of the coordinate axis around the y axis, and (x_es, y_es, z_es, w_es) is the quaternion representation of the posture;
4-3-3) calculating the rotation angle θ of the rotating rectangular frame relative to the horizontal of the first coordinate system;
4-3-4) the shooting posture of the rotating rectangular frame relative to the end of the mechanical arm is expressed as:
(x_ce, y_ce, z_ce, w_ce) = (0, sin(θ/2), 0, cos(θ/2))
4-3-5) the shooting posture of the rotating rectangular frame relative to the first coordinate system is their quaternion product:
(x_cs, y_cs, z_cs, w_cs) = (x_es, y_es, z_es, w_es) ⊗ (x_ce, y_ce, z_ce, w_ce)
4-3-6) the rotating rectangular frame's coordinates (X_w, Y_w, Z_w) relative to the second coordinate system are converted into coordinates (x, y, z) relative to the first coordinate system;
4-3-7) the movement trajectory of the mechanical arm and the shooting posture of its end are calculated from the rotating rectangular frame's coordinates and shooting posture relative to the first coordinate system.
The advantage of this embodiment is that, through repeated experiments and derivation, conversion formulas have been established for mapping photo coordinates into second-coordinate-system coordinates and then into first-coordinate-system coordinates, making the method applicable to any test rack and any sample; a formula for calculating the shooting posture from the rotating rectangular frame has likewise been established, making the method applicable to loaded samples at any deflection angle.
In order to achieve the above object, in another aspect, the present application provides a rotation correction photographing apparatus for environmental tests, comprising an information acquisition module, a positioning photo shooting module, a trajectory and posture calculation module, and a sample shooting module;
the information acquisition module acquires the sample position information of each sample to be photographed;
the positioning photo shooting module controls the mechanical arm carrying the camera to move to a preset position and take a positioning photo of all the loaded samples in a preset posture;
the trajectory and posture calculation module calculates, from the positioning photo and the sample position information, the movement trajectory along which the mechanical arm photographs all the samples and the shooting posture of each sample relative to the end of the mechanical arm;
and the sample shooting module controls the mechanical arm to photograph each sample according to the calculated movement trajectory and shooting postures.
To achieve the above object, in another aspect, the present application provides a storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the above method.
Additional advantages, objects, and features of the application will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims hereof.
Drawings
The drawings of the present application are described below.
FIG. 1 is a schematic overall flow chart of example 1.
Fig. 2 is a schematic flow chart of step 4 of example 1.
Fig. 3 is a schematic structural diagram of embodiment 2.
Fig. 4 is a graph showing the effect of the rotating-frame marking in example 1.
Fig. 5 is a graph showing the photographing effect of a single sample in example 1.
Detailed Description
The application is further described below with reference to the drawings and examples.
Example 1:
As shown in Fig. 1 and Fig. 2, the rotation correction photographing method for environmental tests specifically comprises the following steps:
s1, starting an environmental test, and finishing loading a plurality of loading samples.
In this embodiment, the loading process may be manual loading, or may be automated mechanical arm loading.
S2, acquiring sample position information of each sample to be shot.
In this embodiment, the positions and postures of the loaded samples are not actively adjusted during the environmental test once loading is complete; however, a loaded sample may shift or deflect under natural vibration or accidental collision, and positions and postures may also be adjusted actively when required, in which case the numbers and position information of the loaded samples must be updated after the adjustment.
Specifically, each loaded sample is numbered, and a retrieval table is generated that maps each number to the sample's row and column position; the numbers of the samples to be photographed are input or received, and the position of each such sample is obtained from the retrieval table.
In this embodiment, obtaining the sample positions through a retrieval table is only one option; the position information may instead be encoded in the sample number itself, so that the position can be read directly from the number.
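As an illustration of the retrieval-table option, here is a minimal sketch in Python; the sample numbers and (row, column) layout are assumptions for illustration, not taken from the patent:

```python
# A hypothetical retrieval table mapping each loaded sample's number to its
# row/column position on the test rack.
retrieval_table = {
    "S-001": (1, 1),  # sample number -> (row, column)
    "S-002": (1, 2),
    "S-003": (2, 1),
}

def lookup_positions(sample_numbers):
    """Return the rack position of each sample number to be photographed."""
    return {num: retrieval_table[num] for num in sample_numbers}

print(lookup_positions(["S-002", "S-003"]))  # {'S-002': (1, 2), 'S-003': (2, 1)}
```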
S3, the mechanical arm carrying the camera moves to a preset position, and a positioning photo of all the loaded samples is taken in a preset posture.
In this embodiment, an overall positioning photo is taken before the samples are photographed individually, in order to locate the position and posture of each loaded sample. The positioning photo may cover the whole test rack, but this strategy can be adjusted as needed so that only the area containing all the samples to be photographed is covered.
Specifically, a first coordinate system is established with the center of the mechanical arm base as its origin; the loading-area information of the loaded samples is acquired, and the minimum rectangular frame enclosing the loading area is generated from it; the first center coordinate of this minimum rectangular frame is marked in the first coordinate system; the lowest height from which the complete minimum rectangular frame can be photographed is calculated from the shooting wide angle of the camera on the mechanical arm; and the preset position is formed from the first center coordinate and that lowest height, as sketched below.
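One way to compute that lowest height, assuming a downward-looking pinhole camera centered over the minimum rectangular frame with known horizontal and vertical fields of view (the frame size and FOV values below are illustrative):

```python
import math

def lowest_full_view_height(rect_w, rect_h, hfov_deg, vfov_deg):
    """Lowest camera height above the rack plane at which the whole minimum
    rectangular frame fits in the image, for a camera looking straight down
    at the frame's center: each half-extent must fit inside half the FOV."""
    need_w = (rect_w / 2) / math.tan(math.radians(hfov_deg) / 2)
    need_h = (rect_h / 2) / math.tan(math.radians(vfov_deg) / 2)
    return max(need_w, need_h)

# e.g. a 2.0 m x 1.2 m minimum rectangular frame and a 70 deg x 50 deg camera
print(round(lowest_full_view_height(2.0, 1.2, 70.0, 50.0), 3))  # ~1.428 m
```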
Specifically, the preset posture is the angle to which the mechanical arm must rotate so that, when the positioning photo is taken, each shooting boundary is parallel to the corresponding boundary of the minimum rectangular frame.
S4, calculating, from the positioning photo and the sample position information, the movement trajectory along which the mechanical arm photographs all the samples to be photographed and the shooting posture of each sample relative to the end of the mechanical arm.
S41, inputting the positioning photo into the single-stage rotating-object detection network S²A-Net, predicting the class of every loaded sample, and generating a rotating rectangular frame for each one;
S411, the S²A-Net network performs convolution and pooling operations on the positioning photo to generate a deep feature map;
S412, the deep feature map is sent into the Feature Alignment Module (FAM) for feature alignment and anchor generation; the FAM first refines the initial anchors with the Anchor Refinement Network, then applies the Alignment Convolution layer to the feature map to make it rotation-invariant;
S413, the FAM sends the aligned feature maps to the Oriented Detection Module (ODM) for target detection; the ODM encodes the target's orientation information with an Active Rotating Filter (ARF) and uses it in the classification and regression operations to detect the industrial products in the image; the ARF is a k × k × N filter that actively rotates N − 1 times during convolution, generating a feature map with N orientation channels;
S414, the rotating rectangular frame of each loaded sample is drawn on the original image according to the target detection result output by the ODM.
S42, constructing a second coordinate system with the camera's position when it took the positioning photo as the origin; acquiring, at that moment, a depth image of each loaded sample through the binocular depth camera, and calculating the second coordinate of each rotating rectangular frame in the second coordinate system;
S421, the S²A-Net network predicts the pixel coordinates of the four vertices of the rotating rectangular frame, D_1(x_1, y_1), D_2(x_2, y_2), D_3(x_3, y_3), D_4(x_4, y_4); the center pixel (i, j) of the rotating rectangular frame is their mean, and its depth value is D(i, j):
(i, j) = ((x_1 + x_2 + x_3 + x_4) / 4, (y_1 + y_2 + y_3 + y_4) / 4)
S422, combining the center pixel of the rotating rectangular frame with the camera intrinsics (f_x, f_y, u_0, v_0) gives the center point's three-dimensional coordinates (X_c, Y_c, Z_c) relative to the camera via the pinhole model:
X_c = (i − u_0) · D(i, j) / f_x,  Y_c = (j − v_0) · D(i, j) / f_y,  Z_c = D(i, j)
where f_x is the physical focal length f expressed in pixel units along the x direction, f_y is f expressed in pixel units along the y direction, u_0 is half the image width in pixels, and v_0 is half the image height in pixels. A code sketch of these two steps follows.
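Steps S421 and S422 written out as a sketch; the vertex pixels, depth value and intrinsics below are made-up illustration values, not calibration data from the patent:

```python
def rect_center(vertices):
    """Center pixel (i, j) of the rotating rectangular frame as the mean of
    its four predicted vertices D1..D4."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    return sum(xs) / 4.0, sum(ys) / 4.0

def pixel_to_camera(i, j, depth, fx, fy, u0, v0):
    """Back-project the center pixel into the camera (second) coordinate
    system with the pinhole model: X=(i-u0)Z/fx, Y=(j-v0)Z/fy, Z=D(i,j)."""
    return (i - u0) * depth / fx, (j - v0) * depth / fy, depth

i, j = rect_center([(812, 430), (950, 455), (925, 590), (787, 565)])
print(pixel_to_camera(i, j, depth=0.85, fx=915.0, fy=915.0, u0=640.0, v0=360.0))
```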
S43, combining the preset position at which the camera took the positioning photo with the second coordinates to calculate the first coordinate information of each rotating rectangular frame in the first coordinate system and its posture information relative to the end of the mechanical arm;
S431, letting the camera extrinsics be the rotation matrix R and the translation vector T, the coordinates (X_w, Y_w, Z_w) of the rotating rectangular frame's center point in the second coordinate system follow from the conversion formula:
[X_w, Y_w, Z_w]^T = R · [X_c, Y_c, Z_c]^T + T
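A sketch of the S431 conversion; the rotation matrix and translation vector below are illustrative stand-ins, not real extrinsic calibration:

```python
import numpy as np

def camera_to_second(p_cam, R, T):
    """Apply the extrinsics to the camera-frame center point:
    [Xw, Yw, Zw]^T = R @ [Xc, Yc, Zc]^T + T."""
    return R @ np.asarray(p_cam, dtype=float) + np.asarray(T, dtype=float)

# Illustrative extrinsics: a camera flipped 180 deg about x, offset 0.4 m in z
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])
T = np.array([0.0, 0.0, 0.4])
print(camera_to_second([0.12, -0.05, 0.85], R, T))  # [ 0.12  0.05 -0.45]
```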
S432, for the camera to shoot perpendicular to the test rack, the posture of the end of the mechanical arm relative to the first coordinate system is expressed as:
(x_es, y_es, z_es, w_es) = (sin(β/2), 0, 0, cos(β/2))
where β ∈ (−π, π] is the rotation angle of the coordinate axis around the y axis, and (x_es, y_es, z_es, w_es) is the quaternion representation of the posture;
S433, calculating the rotation angle θ of the rotating rectangular frame relative to the horizontal of the first coordinate system;
S434, the shooting posture of the rotating rectangular frame relative to the end of the mechanical arm is expressed as:
(x_ce, y_ce, z_ce, w_ce) = (0, sin(θ/2), 0, cos(θ/2))
S435, the shooting posture of the rotating rectangular frame relative to the first coordinate system is their quaternion product:
(x_cs, y_cs, z_cs, w_cs) = (x_es, y_es, z_es, w_es) ⊗ (x_ce, y_ce, z_ce, w_ce)
S436, the rotating rectangular frame's coordinates (X_w, Y_w, Z_w) relative to the second coordinate system are converted into coordinates (x, y, z) relative to the first coordinate system;
S437, the movement trajectory of the mechanical arm and the shooting posture of its end are calculated from the rotating rectangular frame's coordinates and shooting posture relative to the first coordinate system.
S44, the first coordinate information and posture information of the rotating rectangular frames are published through a ROS topic, as sketched below.
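A minimal ROS 1 (rospy) sketch of this publishing step; the topic name /sample_poses, the frame_id and the pose values are assumptions, since the patent only states that the pose is published through a ROS topic:

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

def publish_pose(pub, xyz, quat):
    """Publish one rotating rectangular frame's first-coordinate-system
    position (x, y, z) and shooting posture quaternion (x, y, z, w)."""
    msg = PoseStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "arm_base"  # the first coordinate system
    msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = xyz
    (msg.pose.orientation.x, msg.pose.orientation.y,
     msg.pose.orientation.z, msg.pose.orientation.w) = quat
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("sample_pose_publisher")
    pub = rospy.Publisher("/sample_poses", PoseStamped, queue_size=10)
    rospy.sleep(0.5)  # give subscribers time to connect
    publish_pose(pub, (0.42, -0.13, 0.05), (0.0, 0.131, 0.0, 0.991))
```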
In this embodiment, as shown in Fig. 2, the object class is predicted by the single-stage rotating-object detection network S²A-Net and the object position is framed with a rotating frame; a binocular depth camera captures the object depth map, from which the three-dimensional coordinates of the object's center point relative to the camera coordinate system are calculated; the posture of the industrial product relative to the mechanical arm base coordinate system is obtained from the rotating frame, and its three-dimensional coordinates relative to the base are obtained by coordinate conversion; the object's pose information is published through the ROS topic; and on receiving the pose information, the robot plans its trajectory and moves to the designated position to take the photo. The actual effect of the rotating rectangular frame is shown in Fig. 4: the camera at the end of the mechanical arm monitors the robot's surroundings in real time, and the RGB images are fed into the S²A-Net network, where convolution and pooling generate a deep feature map. The deep feature map is sent into the FAM for feature alignment and anchor generation so as to better fit the targets in the image. The FAM then sends the feature map to the ODM for object detection; the ODM encodes the object's orientation information with the active rotating filter and uses it in the classification and regression operations to detect the industrial products in the image. Finally, the rotation detection frame of each industrial product is drawn on the original image according to the target detection result output by the ODM.
The shooting effect for a sample to be photographed is shown in Fig. 5. The rotation angle of the sample relative to the horizontal is calculated from the rotating rectangular frame; because the tilt angle of the test rack is fixed, the robot body faces the industrial-product test rack squarely, and the camera is fixed at the end of the mechanical arm, the posture of the sample relative to the mechanical arm base can be calculated. Combining the three-dimensional point (x, y, z) with the posture information gives the pose of the sample relative to the mechanical arm base coordinate system. This pose is output through the ROS topic; the mechanical arm control end receives it and plans a trajectory so that the end of the mechanical arm reaches the target point and completes the photographing task.
In this embodiment the rotating rectangular frame is marked and the sample class identified by the single-stage rotating-object detection network S²A-Net, but any other combination of marking and image-recognition methods could be substituted. Converting the loaded samples' coordinates from image coordinates to second-coordinate-system coordinates and finally to first-coordinate-system coordinates is likewise only one option; any method that can calculate the path and distance between the mechanical arm and the loaded sample may replace it. The essence of the posture calculation is that the rotating rectangular frame's rotation angle is measured against a reference line and the rotation angle of the end of the mechanical arm is computed against the same reference line, so that the two angles coincide; the resulting end posture is the shooting posture. A sketch of the angle computation follows.
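The rotation angle θ of S433 can be derived from the rotating rectangular frame's vertices; a sketch assuming D1 → D2 runs along the frame's reference edge (vertex values are illustrative):

```python
import math

def frame_angle(d1, d2):
    """Rotation angle of the rotating rectangular frame relative to the
    horizontal, from two adjacent vertices D1(x1, y1) and D2(x2, y2)."""
    return math.atan2(d2[1] - d1[1], d2[0] - d1[0])

print(math.degrees(frame_angle((812, 430), (950, 455))))  # ~10.3 deg
```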
S5, the mechanical arm photographs each sample according to the calculated movement trajectory and shooting postures, then returns to its initialization position;
S6, judging whether a new shooting task exists, and if so, returning to S2.
Example 2:
a rotation correction photographing apparatus for environmental test, as shown in fig. 3, comprising: the system comprises an information acquisition module, a positioning photo shooting module, a track gesture calculation module and a sample shooting module;
the information acquisition module acquires sample position information of each sample to be shot;
the positioning photo shooting module is used for controlling the mechanical arm with the camera to move to a preset position and shooting positioning photos for all the upper samples in a preset gesture;
the track gesture calculation module is used for calculating the moving tracks of all the samples to be shot, which are shot by the mechanical arm, and the shooting gesture of each sample to be shot relative to the tail end of the mechanical arm according to the positioning photo and the sample position information of the samples to be shot;
and the sample shooting module is used for controlling the mechanical arm to shoot each sample to be shot according to the calculated moving track and shooting gesture.
In this embodiment, the information acquisition module may be a sensor group, or may be a data receiving device matched with position calculation; the positioning photo shooting module can be a control chip arranged on the mechanical arm or a remote server which can work by controlling the mechanical arm wirelessly; the track gesture calculation module can be arranged locally on the mechanical arm or can be a cloud computing platform; the sample shooting module can be arranged in a compatible mode with the positioning photo shooting module, and can also be independently arranged.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present application and not for limiting the same, and although the present application has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the application without departing from the spirit and scope of the application, which is intended to be covered by the claims.

Claims (10)

1. A rotation correction photographing method for environmental tests, characterized by comprising the following steps:
1) starting an environmental test, and completing the loading of a plurality of samples;
2) acquiring sample position information for each sample to be photographed;
3) moving the mechanical arm carrying the camera to a preset position and taking a positioning photo of all the loaded samples in a preset posture;
4) calculating, from the positioning photo and the sample position information, the movement trajectory along which the mechanical arm photographs all the samples to be photographed, and the shooting posture of each sample relative to the end of the mechanical arm;
5) the mechanical arm photographing each sample according to the calculated movement trajectory and shooting postures, then returning to its initialization position;
6) judging whether a new shooting task exists, and if so, returning to step 2).
2. The rotation correction photographing method for environmental tests according to claim 1, wherein the sample position information of each sample to be photographed in step 2) is acquired as follows:
numbering each loaded sample, and generating a retrieval table that maps each sample's number to its row and column position;
inputting or receiving the numbers of the samples to be photographed, and obtaining the position of each such sample from the retrieval table.
3. The rotation correction photographing method for environmental tests according to claim 1, wherein the preset position in step 3) is obtained by:
establishing a first coordinate system with the center of the mechanical arm base as its origin; acquiring the loading-area information of the loaded samples and generating from it the minimum rectangular frame enclosing the loading area; marking the first center coordinate of the minimum rectangular frame in the first coordinate system; calculating, from the shooting wide angle of the camera on the mechanical arm, the lowest height from which the complete minimum rectangular frame can be photographed; and forming the preset position from the first center coordinate and that lowest height.
4. The rotation correction photographing method for environmental tests according to claim 3, wherein the preset posture in step 3) is:
the angle to which the mechanical arm must rotate so that, when the positioning photo is taken, each shooting boundary is parallel to the corresponding boundary of the minimum rectangular frame.
5. The rotation correction photographing method for environmental tests according to claim 3, wherein the movement trajectory along which the mechanical arm photographs all the samples and the shooting posture of each sample are calculated in step 4) as follows:
4-1) inputting the positioning photo into the single-stage rotating-object detection network S²A-Net, predicting the class of every loaded sample, and generating a rotating rectangular frame for each one;
4-2) constructing a second coordinate system with the camera's position when it took the positioning photo as the origin; acquiring, at that moment, a depth image of each loaded sample through the binocular depth camera, and calculating the second coordinate of each rotating rectangular frame in the second coordinate system;
4-3) combining the preset position at which the camera took the positioning photo with the second coordinates to calculate the first coordinate information of each rotating rectangular frame in the first coordinate system and its posture information relative to the end of the mechanical arm;
4-4) publishing the first coordinate information and the posture information of the rotating rectangular frames through a ROS topic.
6. The rotation correction photographing method for environmental tests according to claim 5, wherein step 4-1) is carried out as follows:
4-1-1) the S²A-Net network performs convolution and pooling operations on the positioning photo to generate a deep feature map;
4-1-2) the deep feature map is sent into the Feature Alignment Module (FAM) for feature alignment and anchor generation;
the FAM first refines the initial anchors with the Anchor Refinement Network, then applies the Alignment Convolution layer to the feature map to make it rotation-invariant;
4-1-3) the FAM sends the aligned feature maps to the Oriented Detection Module (ODM) for target detection; the ODM encodes the target's orientation information with an Active Rotating Filter (ARF) and uses it in the classification and regression operations to detect the industrial products in the image; the ARF is a k × k × N filter that actively rotates N − 1 times during convolution, generating a feature map with N orientation channels;
4-1-4) the rotating rectangular frame of each loaded sample is drawn on the original image according to the target detection result output by the ODM.
7. The rotation correction photographing method for environmental tests according to claim 5, wherein step 4-2) is carried out as follows:
4-2-1) the S²A-Net network predicts the pixel coordinates of the four vertices of the rotating rectangular frame, D_1(x_1, y_1), D_2(x_2, y_2), D_3(x_3, y_3), D_4(x_4, y_4); the center pixel (i, j) of the rotating rectangular frame is their mean, and its depth value is D(i, j):
(i, j) = ((x_1 + x_2 + x_3 + x_4) / 4, (y_1 + y_2 + y_3 + y_4) / 4)
4-2-2) combining the center pixel of the rotating rectangular frame with the camera intrinsics (f_x, f_y, u_0, v_0) gives the center point's three-dimensional coordinates (X_c, Y_c, Z_c) relative to the camera via the pinhole model:
X_c = (i − u_0) · D(i, j) / f_x,  Y_c = (j − v_0) · D(i, j) / f_y,  Z_c = D(i, j)
where f_x is the physical focal length f expressed in pixel units along the x direction, f_y is f expressed in pixel units along the y direction, u_0 is half the image width in pixels, and v_0 is half the image height in pixels.
8. The rotation correction photographing method for environmental tests according to claim 7, wherein step 4-3) is carried out as follows:
4-3-1) letting the camera extrinsics be the rotation matrix R and the translation vector T, the coordinates (X_w, Y_w, Z_w) of the rotating rectangular frame's center point in the second coordinate system follow from the conversion formula:
[X_w, Y_w, Z_w]^T = R · [X_c, Y_c, Z_c]^T + T
4-3-2) for the camera to shoot perpendicular to the test rack, the posture of the end of the mechanical arm relative to the first coordinate system is expressed as:
(x_es, y_es, z_es, w_es) = (sin(β/2), 0, 0, cos(β/2))
where β ∈ (−π, π] is the rotation angle of the coordinate axis around the y axis, and (x_es, y_es, z_es, w_es) is the quaternion representation of the posture;
4-3-3) calculating the rotation angle θ of the rotating rectangular frame relative to the horizontal of the first coordinate system;
4-3-4) the shooting posture of the rotating rectangular frame relative to the end of the mechanical arm is expressed as:
(x_ce, y_ce, z_ce, w_ce) = (0, sin(θ/2), 0, cos(θ/2))
4-3-5) the shooting posture of the rotating rectangular frame relative to the first coordinate system is their quaternion product:
(x_cs, y_cs, z_cs, w_cs) = (x_es, y_es, z_es, w_es) ⊗ (x_ce, y_ce, z_ce, w_ce)
4-3-6) the rotating rectangular frame's coordinates (X_w, Y_w, Z_w) relative to the second coordinate system are converted into coordinates (x, y, z) relative to the first coordinate system;
4-3-7) the movement trajectory of the mechanical arm and the shooting posture of its end are calculated from the rotating rectangular frame's coordinates and shooting posture relative to the first coordinate system.
9. A rotation correction photographing apparatus for environmental tests, characterized by comprising an information acquisition module, a positioning photo shooting module, a trajectory and posture calculation module, and a sample shooting module;
the information acquisition module acquires the sample position information of each sample to be photographed;
the positioning photo shooting module controls the mechanical arm carrying the camera to move to a preset position and take a positioning photo of all the loaded samples in a preset posture;
the trajectory and posture calculation module calculates, from the positioning photo and the sample position information, the movement trajectory along which the mechanical arm photographs all the samples and the shooting posture of each sample relative to the end of the mechanical arm;
and the sample shooting module controls the mechanical arm to photograph each sample according to the calculated movement trajectory and shooting postures.
10. A storage medium storing instructions adapted to be loaded by a processor to perform the method of any one of claims 1 to 8.
CN202310645194.XA 2023-06-02 2023-06-02 Rotation correction photographing method, device and medium for environmental test Pending CN116668814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310645194.XA CN116668814A (en) 2023-06-02 2023-06-02 Rotation correction photographing method, device and medium for environmental test

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310645194.XA CN116668814A (en) 2023-06-02 2023-06-02 Rotation correction photographing method, device and medium for environmental test

Publications (1)

Publication Number Publication Date
CN116668814A (en) 2023-08-29

Family

ID=87711335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310645194.XA Pending CN116668814A (en) 2023-06-02 2023-06-02 Rotation correction photographing method, device and medium for environmental test

Country Status (1)

Country Link
CN (1) CN116668814A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination