CN111590591A - Automatic garbage pile grabbing method and system based on computer stereoscopic vision guiding mechanism - Google Patents


Info

Publication number
CN111590591A
CN111590591A (application CN202010501902.9A)
Authority
CN
China
Prior art keywords
garbage
camera
pile
points
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010501902.9A
Other languages
Chinese (zh)
Inventor
路绳方
高芳征
焦良葆
高阳
陈烨
刘洋洋
孟琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN202010501902.9A
Publication of CN111590591A
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an automatic garbage pile grabbing method and system based on a computer stereoscopic vision guiding mechanism. The method comprises the following steps: first, fixed cameras photograph the garbage bin to obtain images of the garbage pile; feature points are then extracted from the garbage pile images; three-dimensional reconstruction of the feature points is performed with calibrated cameras, solving the three-dimensional coordinates of the garbage pile in a stereoscopic vision coordinate system; the point of the garbage pile closest to the camera is solved as the highest point of the pile; the highest point is then transformed into the mechanical gripper coordinate system, guiding the gripper to intelligently grab the current garbage pile and realizing automatic grabbing of the garbage in the bin. The invention uses multi-view three-dimensional reconstruction: by extracting and matching feature points of the garbage pile and the mechanical arm, it solves the problem of online intelligent grabbing of garbage piles in the bin, replaces the traditional, inefficient working mode of manually operating the mechanical arm on site, and is of great significance for saving enterprise cost and improving garbage treatment efficiency.

Description

Automatic garbage pile grabbing method and system based on computer stereoscopic vision guiding mechanism
Technical Field
The invention relates to the technical field of computer vision application, in particular to a garbage pile automatic grabbing method and system based on a computer stereoscopic vision guiding mechanism.
Background
With the rapid improvement of the national economic level, urbanization in China continues to accelerate and domestic garbage increases daily. Garbage pollutes the environment and threatens human health, and garbage disposal has long hindered the development of cities. In recent years, driven by national regulations on garbage disposal and residents' increased environmental awareness, various garbage disposal techniques have been developed. Garbage incineration converts garbage into heat energy for power generation, recycling the garbage as a resource, and is a technical means of great significance in current garbage treatment. A garbage incineration power plant has a dedicated garbage stacking bin kept closed and at negative pressure so that no gas escapes; all garbage to be incinerated must be fully sterilized there to reduce its harm. The sterilized garbage in the bin is grabbed by a mechanical arm controlled by personnel outside the bin and fed into the incinerator for burning.
At present, the garbage grabbing and incineration process requires workers outside the bin to control the mechanical arm throughout, so the degree of mechanization and automation is low, labor costs rise, and the harsh environment inside the garbage bin places a psychological burden on the workers outside. Computer vision technology, with its high degree of automation and non-contact operation, can complete many tasks in complex or high-risk environments without manual intervention. Therefore, research on automatic garbage pile grabbing methods based on a computer vision guiding mechanism is of great significance for improving garbage treatment efficiency.
Disclosure of Invention
Technical purpose: aiming at the shortcomings of the automatic garbage pile grabbing link in the garbage incineration process, the invention discloses a garbage pile grabbing method based on a visual guiding mechanism. It introduces the non-contact, highly automated and repeatable characteristics of computer vision into the field of garbage treatment, provides an online garbage pile grabbing method based on a visual measurement, positioning and guiding mechanism, solves the problem of online intelligent grabbing of garbage piles in the garbage bin, replaces the inefficient on-site working mode of manually operating the mechanical arm, and is of great significance for saving enterprise cost and improving garbage treatment efficiency.
The technical scheme is as follows: in order to achieve the technical purpose, the invention adopts the following technical scheme:
a garbage pile automatic grabbing method based on a computer stereoscopic vision guiding mechanism is used for realizing the online guiding of a manipulator to the garbage pile grabbing process, and is characterized by comprising the following steps:
s1, image acquisition: shooting the garbage bin from different angles by more than one camera to obtain a plurality of garbage pile images including garbage piles;
s2, feature extraction: processing the garbage heap image, and extracting characteristic points, wherein the characteristic points comprise mechanical arm characteristic points and garbage heap top characteristic points;
s3, three-dimensional reconstruction: mapping the characteristic points in the garbage pile image to a three-dimensional space by using a camera model calibrated by a camera, and establishing a three-dimensional camera coordinate system and a mechanical gripper coordinate system to obtain three-dimensional coordinates of the mechanical arm characteristic points and the garbage pile top characteristic points;
s4, positioning the top of the garbage pile: converting the highest point of the garbage pile into the coordinate system of the mechanical gripper through a coordinate system of the camera and a coordinate system of the mechanical gripper, judging the characteristic point of the garbage pile closest to the characteristic point of the manipulator through calculating the distance between the characteristic points, and solving the highest point of the garbage pile;
s5, intelligently guiding and grabbing by a mechanical arm: and (4) guiding the mechanical gripper to intelligently grip the highest point of the current garbage pile, and repeating the steps S1-S4 to automatically grip the garbage in the garbage bin from high to low.
Preferably, in step S1, three cameras are used to collect images, including a first camera, a second camera and a third camera, where the fields of view of the first camera and the second camera both cover the garbage pile to be captured, and the fields of view of the second camera and the third camera both cover the manipulator for capturing garbage; a binocular vision measuring system is formed between every two views in the three cameras, and target points serving as calculation objects are selected from the feature points by using the conversion relation between every two cameras, so that the relative positions of the target points are solved.
Preferably, in step S3, the three-dimensional geometric information of the object is recovered based on the parallax principle, the three-dimensional shape and position of the surrounding scenery are reconstructed, and the relative position between the target points is determined by using the conversion relationship between two cameras, wherein the distance between the connecting lines of the projection centers of the two cameras is the baseline distance.
Preferably, in step S2, the feature point P1 corresponding to the same spatial object point is extracted from the garbage pile images captured by the first camera and the second camera, and the image coordinates of the point P1 in the two views are obtained respectively as p_left = (X_left, Y_left), p_right = (X_right, Y_right);
In step S3, assuming that the image planes of the two cameras are coplanar, the feature point P1 has the same Y image coordinate in both views, i.e. Y_left = Y_right = Y, and the geometric relationship yields:

X_left = f·x_c/z_c,  X_right = f·(x_c - B)/z_c,  Y = f·y_c/z_c    (1)

the parallax is then Disparity = X_left - X_right, where B is the baseline distance, (x_c, y_c, z_c) are the three-dimensional coordinates of the feature point P1 in the camera coordinate system, and f is the focal length of the camera, giving:

x_c = B·X_left/Disparity,  y_c = B·Y/Disparity,  z_c = B·f/Disparity    (2)

The three-dimensional coordinates of the feature point P2 in the camera coordinate system are obtained by the same method; further, the relative positional relationship of P1 and P2 is obtained.
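The triangulation relations (1) and (2) reduce to a few lines of code. A plain-Python sketch, assuming a rectified stereo pair with image coordinates already expressed relative to the principal point and in the same units as the focal length f:

```python
def triangulate_rectified(x_left, x_right, y, f, baseline):
    """Recover the camera-frame coordinates (x_c, y_c, z_c) of a point seen
    at X_left and X_right in a rectified stereo pair, using
    Disparity = X_left - X_right = f * B / z_c (equations (1)-(2))."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: no finite depth")
    z_c = f * baseline / disparity
    x_c = baseline * x_left / disparity
    y_c = baseline * y / disparity
    return x_c, y_c, z_c
```

Round-trip check: a point at (0.2, 0.1, 2.0) m seen by cameras with f = 800 (pixel units) and B = 0.5 m projects to X_left = 80, X_right = -120, Y = 40, and the function returns the original point.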
Preferably, in step S2, SURF feature points are used as feature points on the surface of the garbage heap.
Preferably, in step S2, Harris corner points are used as feature points.
Preferably, the Harris corner extraction method gives a matrix M associated with the autocorrelation coefficients, the eigenvalues of M being the first-order curvatures of the autocorrelation function, and corners are judged from the curvature values:

M = G(σ) ⊗ [ g_x²  g_x·g_y ; g_x·g_y  g_y² ]    (3)

where g_x = ∂g/∂x is the first derivative in the x direction, g_y = ∂g/∂y is the first derivative in the y direction, and G(σ) is a Gaussian function.
Preferably, the actual corner response function is:

f_resp(x, y) = Det(M) - k·Trace²(M)    (4)

where Det is the determinant of the matrix, Trace is the trace of the matrix, and k is a parameter; when the Harris corner response function value f_resp of a feature point is greater than a set threshold T, the point is a corner point.
The invention also discloses an automatic garbage pile grabbing system based on a computer stereoscopic vision guiding mechanism, used for automatically grabbing garbage piles in a garbage bin, characterized in that it comprises a first industrial camera, a second industrial camera, a third industrial camera, a mechanical arm and a cross beam arranged at preset positions in the garbage bin; the cross beam is located above the garbage pile, the mechanical arm is fixed at the bottom of the cross beam, the fields of view of the first and second industrial cameras cover the garbage pile in the bin, and the fields of view of the second and third industrial cameras cover the eight-claw manipulator.
Beneficial effects: due to the adoption of the above technical scheme, the invention has the following technical effects:
(1) The invention uses multi-view three-dimensional reconstruction: by extracting and matching feature points of the garbage pile and the eight-claw manipulator, it locates the feature points in three dimensions, solves the three-dimensional position of the garbage pile in the mechanical gripper coordinate system, and realizes online real-time monitoring and grabbing of the garbage piles by the mechanical arm in order of height.
(2) The method lets the manipulator grab the garbage piles in order of height, avoiding collapse of the piles, which would disturb the gas in the bin and affect its closed environment.
(3) The invention introduces the non-contact, highly automated and repeatable characteristics of computer vision into the field of garbage disposal, provides an online garbage pile grabbing method based on a visual measurement, positioning and guiding mechanism, and solves the problem of online intelligent grabbing of garbage piles in the garbage bin.
(4) Compared with the traditional manual manipulator operation mode, the invention requires simple equipment, is convenient to install, achieves higher recognition and positioning efficiency, and makes the manipulator's automatic grabbing work continuous and repeatable, greatly improving the degree of automation of garbage disposal, reducing workers' labor intensity, improving garbage treatment efficiency and saving enterprise cost.
Drawings
FIG. 1 is a flow chart of the automatic garbage heap catching method based on the stereoscopic vision guiding mechanism of the computer;
FIG. 2 is a schematic structural diagram of an automatic garbage heap grabbing system based on a computer stereoscopic vision guiding mechanism;
FIG. 3 is a schematic diagram of a parallax principle of a vision measurement model based on two binocular vision measurement systems according to the present invention;
1: garbage pile, 2: first industrial camera, 3: second industrial camera, 4: third industrial camera, 6: eight-claw manipulator, 7: cross beam.
Detailed Description
The flow chart of the method provided by the invention is shown in FIG. 1. The automatic garbage pile grabbing method based on a computer vision guiding mechanism comprises five parts: image acquisition, feature extraction, three-dimensional reconstruction, garbage pile vertex positioning, and intelligent guided grabbing by the mechanical arm:
firstly, taking a picture of a garbage bin by using a fixed camera to obtain a garbage pile picture;
then, extracting feature points of the garbage heap image;
thirdly, three-dimensional reconstruction of the characteristic points is realized by using a calibrated camera, and three-dimensional coordinates of the garbage pile under a stereoscopic vision coordinate system are solved;
fourthly, solving the point of the garbage pile closest to the camera to be the highest point of the garbage pile;
and fifthly, the highest point of the garbage pile is transformed into the mechanical gripper coordinate system through the coordinate transformation between the camera coordinate system and the gripper coordinate system, and the mechanical gripper is guided to intelligently grab the current garbage pile, realizing automatic grabbing of the garbage in the garbage bin.
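The five steps above can be sketched as a loop. In the sketch below, the callables capture, extract, triangulate, cam_to_gripper and grab are hypothetical stand-ins for the real camera, vision and manipulator interfaces; they are not part of the patent:

```python
import numpy as np

def highest_pile_point(points_cam):
    """Step 4: with the cameras above the pile, the reconstructed point with
    the smallest z (closest to the camera) is the highest point of the pile."""
    pts = np.asarray(points_cam, dtype=float)
    return pts[np.argmin(pts[:, 2])]

def grab_cycle(capture, extract, triangulate, cam_to_gripper, grab,
               max_cycles=1000):
    """Sketch of the full loop: acquire, reconstruct, locate the top point,
    grab there, and repeat until no garbage feature points remain."""
    for _ in range(max_cycles):
        images = capture()                  # step 1: image acquisition
        features = extract(images)          # step 2: feature extraction
        points = triangulate(features)      # step 3: 3D reconstruction
        if len(points) == 0:
            return                          # bin empty, stop
        top = highest_pile_point(points)    # step 4: top-of-pile positioning
        grab(cam_to_gripper(top))           # step 5: guided grab
```

With stub callables the loop grabs pile points strictly from highest (smallest z) to lowest, mirroring the high-to-low grabbing order described above.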
As shown in FIG. 2, in order to realize automatic grabbing of the garbage piles in the garbage bin by the mechanical gripper in order of height, the invention also discloses an automatic garbage pile grabbing system based on a stereoscopic vision guiding mechanism. The first industrial camera 2 and the second industrial camera 3 share a common field of view that covers the garbage pile in the garbage bin; the second industrial camera 3 and the third industrial camera 4 share a common field of view that covers the eight-claw manipulator 6, which is fixed at the bottom of the cross beam 7.
The invention uses multi-view three-dimensional reconstruction: by extracting and matching feature points of the garbage pile and the eight-claw manipulator, it locates the feature points in three dimensions, solves the three-dimensional position of the garbage pile in the mechanical gripper coordinate system, and has the mechanical arm grab the garbage piles in order of height. Compared with the traditional manual manipulator operation mode, the invention requires simple equipment, is convenient to install, achieves higher recognition and positioning efficiency, and makes the manipulator's automatic grabbing work continuous and repeatable, greatly improving the degree of automation of garbage disposal, reducing workers' labor intensity, improving garbage treatment efficiency and saving enterprise cost.
Specifically, the invention discloses a garbage pile automatic grabbing method based on a visual guidance mechanism, which comprises the following steps:
1.1 Multi-View visual measurement model establishment
Multi-view vision measurement acquires three-dimensional geometric information of an object from multiple images based on the parallax principle. In a computer vision measurement system, several digital images of the surrounding scene are usually acquired simultaneously from different angles by two cameras, or from different angles at different times by a single camera; the three-dimensional geometric information of the object can then be recovered based on the parallax principle, and the three-dimensional shape and position of the scene reconstructed. With the development of computer vision theory, multi-view vision measurement plays an increasingly important role in industrial production and has wide applicability.
The invention is based on a three-view vision measurement technique: it acquires the three-dimensional coordinates of target points, locates the vertex of the garbage pile, and guides the mechanical gripper to grab in order of height. As shown in FIG. 3, the vision measurement model of the present invention is based on the parallax principle of two binocular vision measurement systems: each pair of views among the three cameras (I1, I2, I3) forms a binocular vision measurement system, and the relative positions between target points are calculated using the transformation relationship between each pair of cameras, where the distance between the projection centers of two cameras is the baseline distance B.
Taking the measurement system consisting of the first industrial camera 2 and the second industrial camera 3 as an example, assume the two cameras view the same spatial feature point P1 of the object at the same time, acquiring point P1 on the "left eye" and "right eye" images respectively, with image coordinates p_left = (X_left, Y_left) and p_right = (X_right, Y_right).
Assuming the image planes of the two cameras are coplanar, the feature point P1 has the same Y image coordinate in both views, i.e. Y_left = Y_right = Y. The geometric relationship then yields:

X_left = f·x_c/z_c,  X_right = f·(x_c - B)/z_c,  Y = f·y_c/z_c    (1)

where (x_c, y_c, z_c) are the three-dimensional coordinates of P1 in the camera coordinate system and f is the focal length. The parallax is Disparity = X_left - X_right = f·B/z_c, from which the three-dimensional coordinates of P1 in the camera coordinate system are:

x_c = B·X_left/Disparity,  y_c = B·Y/Disparity,  z_c = B·f/Disparity    (2)

Therefore, as long as any point on the image plane of the left camera has a corresponding matching point on the image plane of the middle camera, the three-dimensional coordinates of the point P1 in the middle camera coordinate system can be determined. This is a point-to-point operation: every point on the image plane that has a corresponding matching point can participate, yielding its three-dimensional coordinates. In the same way, the three-dimensional coordinates of the feature point P2 in the middle camera coordinate system are obtained. The relative positional relationship between the feature points P1 and P2 then follows, guiding the mechanical gripper to the positions of the garbage pile feature points.
1.2 image feature point extraction
(1) Eight-claw manipulator feature point extraction
In the above vision measurement system, the cameras are mounted in fixed positions, ensuring that the first industrial camera 2 and the second industrial camera 3 share a common view of the top of the garbage pile, and that the second industrial camera 3 and the third industrial camera 4 share a common view of the eight-claw manipulator. The second industrial camera 3 and the third industrial camera 4 are responsible for feature extraction and three-dimensional reconstruction of the eight claw corner points at the bottom of the eight-claw manipulator. The Harris corner extraction method is insensitive to camera pose and illumination, and is therefore suitable for extracting the eight claw corner points in the present invention.
The Harris operator is a signal-based point feature extraction operator proposed by C. Harris and M.J. Stephens in 1988. Inspired by the autocorrelation function in signal processing, it gives a matrix M associated with the autocorrelation coefficients; the eigenvalues of M are the first-order curvatures of the autocorrelation function, and if both curvature values are high, the point is considered a corner point.
M = G(σ) ⊗ [ g_x²  g_x·g_y ; g_x·g_y  g_y² ]    (3)

where g_x = ∂g/∂x is the first derivative in the x direction, g_y = ∂g/∂y is the first derivative in the y direction, and G(σ) is a Gaussian function. To avoid explicit eigenvalue computation, the actual corner response function is:

f_resp(x, y) = Det(M) - k·Trace²(M)    (4)

where Det is the determinant of the matrix, Trace is the trace of the matrix, and k is a parameter, typically k = 0.04 to 0.06. When the f_resp value of a point is greater than a set threshold T, the point is a corner.
Because only first derivatives appear in its computation, the Harris operator remains a highly stable point feature extraction operator under image rotation, gray-scale changes, noise and viewpoint changes. Harris corners are therefore well suited as feature points for the mechanical gripper in the present invention.
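Equations (3) and (4) can be sketched with numpy alone: derivatives via np.gradient, Gaussian smoothing by separable convolution. The kernel radius of 3σ is a common implementation choice, not something the patent specifies:

```python
import numpy as np

def harris_response(img, k=0.04, sigma=1.0):
    """Harris response map per equations (3)-(4): smooth the products of the
    first derivatives with a Gaussian G(sigma), then compute
    f_resp = Det(M) - k * Trace(M)^2 at every pixel."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)               # first derivatives in y and x
    r = max(1, int(3 * sigma))              # kernel radius ~ 3 sigma
    ax = np.arange(-r, r + 1, dtype=float)
    kern = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    kern /= kern.sum()
    def smooth(a):                          # separable Gaussian filtering
        a = np.apply_along_axis(np.convolve, 0, a, kern, mode="same")
        return np.apply_along_axis(np.convolve, 1, a, kern, mode="same")
    a11, a12, a22 = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
    det = a11 * a22 - a12 * a12             # Det(M)
    trace = a11 + a22                       # Trace(M)
    return det - k * trace ** 2

def harris_corners(img, threshold, k=0.04):
    """Pixels whose response exceeds the threshold T count as corners."""
    return np.argwhere(harris_response(img, k=k) > threshold)
```

On a synthetic image containing a bright square, the response is strongly positive at the square's corners, negative along its edges, and zero in flat regions, which is the behavior the threshold test above relies on.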
(2) Garbage pile top feature point extraction
The environment of the garbage pile inside the bin is complex, and fast, accurate extraction of the surface feature points of the pile is an important step toward reconstructing its three-dimensional information. Based on the actual conditions on site, the invention uses SURF (Speeded Up Robust Features) feature points as the initial feature points extracted from the garbage pile. SURF features are also scale-invariant feature points; their advantage is high extraction efficiency, which meets the real-time requirements of actual operation. For the specific SURF extraction algorithm, refer to the article "SURF: Speeded Up Robust Features" by Herbert Bay, Tinne Tuytelaars and Luc Van Gool.
By extracting these feature points and using the calibrated camera model, the three-dimensional coordinates of the eight-claw manipulator feature points and of the garbage pile top feature points are obtained in the coordinate system of the second industrial camera 3; the garbage pile feature point closest to the eight-claw manipulator is then determined from the distances between feature points, enabling online guidance of the manipulator through the garbage pile grabbing process.
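The distance test just described, finding the pile feature point closest to the manipulator feature points with both sets expressed in the same camera coordinate system, is a nearest-neighbour search. A small numpy sketch:

```python
import numpy as np

def nearest_pile_point(arm_points, pile_points):
    """Return the garbage-pile feature point closest to any of the
    manipulator feature points; both point sets must be expressed in the
    same coordinate system (here, that of the second industrial camera)."""
    arm = np.asarray(arm_points, dtype=float)
    pile = np.asarray(pile_points, dtype=float)
    # pairwise Euclidean distances, shape (n_arm, n_pile)
    dists = np.linalg.norm(arm[:, None, :] - pile[None, :, :], axis=2)
    return pile[np.argmin(dists.min(axis=0))]
```

For instance, with the manipulator feature at the origin and pile points at distances 3, √3 and 5, the function returns the √3-distant point.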
The above description covers only preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the invention, and these are also intended to fall within the scope of the invention.

Claims (9)

1. A garbage pile automatic grabbing method based on a computer stereoscopic vision guiding mechanism is used for realizing the online guiding of a manipulator to the garbage pile grabbing process, and is characterized by comprising the following steps:
s1, image acquisition: shooting the garbage bin from different angles by more than one camera to obtain a plurality of garbage pile images including garbage piles;
s2, feature extraction: processing the garbage heap image, and extracting characteristic points, wherein the characteristic points comprise mechanical arm characteristic points and garbage heap top characteristic points;
s3, three-dimensional reconstruction: mapping the characteristic points in the garbage pile image to a three-dimensional space by using a camera model calibrated by a camera, and establishing a three-dimensional camera coordinate system and a mechanical gripper coordinate system to obtain three-dimensional coordinates of the mechanical arm characteristic points and the garbage pile top characteristic points;
s4, positioning the top of the garbage pile: converting the highest point of the garbage pile into the coordinate system of the mechanical gripper through a coordinate system of the camera and a coordinate system of the mechanical gripper, judging the characteristic point of the garbage pile closest to the characteristic point of the manipulator through calculating the distance between the characteristic points, and solving the highest point of the garbage pile;
s5, intelligently guiding and grabbing by a mechanical arm: and (4) guiding the mechanical gripper to intelligently grip the highest point of the current garbage pile, and repeating the steps S1-S4 to automatically grip the garbage in the garbage bin from high to low.
2. The method for automatically grabbing the garbage piles based on the stereoscopic vision guiding mechanism of the computer according to claim 1, wherein the method comprises the following steps: in the step S1, three cameras are used to collect images, including a first camera, a second camera and a third camera, where the fields of view of the first camera and the second camera both cover the garbage pile to be captured, and the fields of view of the second camera and the third camera both cover the manipulator for capturing garbage; a binocular vision measuring system is formed between every two views in the three cameras, and target points serving as calculation objects are selected from the feature points by using the conversion relation between every two cameras, so that the relative positions of the target points are solved.
3. The method for automatically grabbing the garbage piles based on the stereoscopic vision guiding mechanism of the computer according to claim 2, wherein: and step S3, restoring the three-dimensional geometric information of the object based on the parallax principle, reconstructing the three-dimensional shapes and positions of the surrounding scenes, and determining the relative position between the target points by using the transformation relationship between two cameras, wherein the distance between the projection centers of the two cameras is the baseline distance.
4. The method for automatically grabbing the garbage piles based on the stereoscopic vision guiding mechanism of the computer according to claim 3, wherein the method comprises the following steps:
in step S2, the feature point P1 corresponding to the same spatial object point is extracted from the garbage pile images captured by the first camera and the second camera, and the image coordinates of the point P1 in the two views are obtained respectively as
p_left = (X_left, Y_left), p_right = (X_right, Y_right);
in step S3, assuming that the image planes of the two cameras are coplanar, the feature point P1 has the same Y image coordinate in both views, i.e. Y_left = Y_right = Y, and the geometric relationship yields:
X_left = f·x_c/z_c, X_right = f·(x_c - B)/z_c, Y = f·y_c/z_c    (1)
the parallax is then Disparity = X_left - X_right, where B is the baseline distance, (x_c, y_c, z_c) are the three-dimensional coordinates of the feature point P1 in the camera coordinate system, and f is the focal length of the camera, giving:
x_c = B·X_left/Disparity, y_c = B·Y/Disparity, z_c = B·f/Disparity    (2)
the three-dimensional coordinates of the feature point P2 in the camera coordinate system are obtained by the same method; further, the relative positional relationship of P1 and P2 is obtained.
5. The method for automatically grabbing garbage piles based on the stereoscopic vision guiding mechanism of computer claimed in claim 1, wherein in step S2, SURF feature points are used as garbage pile surface feature points.
6. The method for automatically grabbing garbage piles based on the stereoscopic vision guiding mechanism of computer according to claim 1, wherein in step S2, Harris corner points are used as feature points.
7. The method for automatically grabbing garbage piles based on the computer stereoscopic vision guiding mechanism according to claim 6, wherein the Harris corner extraction method gives a matrix M associated with the autocorrelation coefficients, the eigenvalues of M being the first-order curvatures of the autocorrelation function, and corners are judged from the curvature values:
M = G(σ) ⊗ [ g_x²  g_x·g_y ; g_x·g_y  g_y² ]    (3)
where g_x = ∂g/∂x is the first derivative in the x direction, g_y = ∂g/∂y is the first derivative in the y direction, and G(σ) is a Gaussian function.
8. The method of claim 7, wherein the actual corner response function is:
f_resp(x, y) = Det(M) - k·Trace²(M)    (4)
where Det is the determinant of the matrix, Trace is the trace of the matrix, and k is a parameter; when the Harris corner response function value f_resp of a feature point is greater than a set threshold T, the point is a corner point.
9. An automatic garbage pile grabbing system based on a computer stereoscopic vision guiding mechanism, for automatically grabbing a garbage pile (1) in a garbage bin, characterized in that: the system comprises a first industrial camera (2), a second industrial camera (3), a third industrial camera (4), a mechanical arm (6) and a cross beam (7), all arranged at preset positions in the garbage bin; the cross beam (7) is located above the garbage pile (1), the mechanical arm (6) is fixed to the bottom of the cross beam (7), the fields of view of the first industrial camera (2) and the second industrial camera (3) cover the garbage pile in the garbage bin, and the fields of view of the second industrial camera (3) and the third industrial camera (4) cover the eight-claw grab of the mechanical arm.
CN202010501902.9A 2020-06-04 2020-06-04 Automatic garbage pile grabbing method and system based on computer stereoscopic vision guiding mechanism Pending CN111590591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010501902.9A CN111590591A (en) 2020-06-04 2020-06-04 Automatic garbage pile grabbing method and system based on computer stereoscopic vision guiding mechanism


Publications (1)

Publication Number Publication Date
CN111590591A true CN111590591A (en) 2020-08-28

Family

ID=72181026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010501902.9A Pending CN111590591A (en) 2020-06-04 2020-06-04 Automatic garbage pile grabbing method and system based on computer stereoscopic vision guiding mechanism

Country Status (1)

Country Link
CN (1) CN111590591A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4745562A (en) * 1985-08-16 1988-05-17 Schlumberger, Limited Signal processing disparity resolution
CN103963058A (en) * 2014-04-30 2014-08-06 重庆环视科技有限公司 Mechanical arm grasping control system and method based on multi-azimuth visual positioning
CN109230580A * 2018-10-11 2019-01-18 西安中科光电精密工程有限公司 Unstacking robot system and method based on mixed-placement material information acquisition
CN110415363A * 2019-08-05 2019-11-05 上海神添实业有限公司 Method for recognizing and positioning randomly placed objects based on trinocular vision
CN110751691A (en) * 2019-09-24 2020-02-04 同济大学 Automatic pipe fitting grabbing method based on binocular vision
CN111151463A (en) * 2019-12-24 2020-05-15 北京无线电测量研究所 Mechanical arm sorting and grabbing system and method based on 3D vision


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Hou Zengtao et al.: "Hardware system for testing the mechanical properties of artificial joints", Chinese Journal of Medical Instrumentation *
Lu Hongjun: "Robot self-localization and dynamic target localization based on binocular vision", Journal of Shenyang University (Natural Science Edition) *
Zhang Shaobing et al.: "Target-based trinocular vision 3D coordinate measurement system", Laser Technology *
Fan Yabing et al.: "Research on camera calibration in a trinocular stereo photogrammetry system", Acta Metrologica Sinica *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113070234A (en) * 2021-03-03 2021-07-06 浙江博城机器人科技有限公司 Positioning control method of garbage sorting robot for outdoor garbage classification
CN113441421A (en) * 2021-07-22 2021-09-28 北京信息科技大学 Automatic garbage classification system and method
CN113441421B (en) * 2021-07-22 2022-12-13 北京信息科技大学 Automatic garbage classification system and method

Similar Documents

Publication Publication Date Title
CN106000904B Automated household refuse sorting system
CN207861446U (en) Control system for robot destacking apparatus
CN102514002B Monocular-vision loading and unloading robot system for a numerically controlled lathe, and method thereof
CN111590591A (en) Automatic garbage pile grabbing method and system based on computer stereoscopic vision guiding mechanism
CN101840736B (en) Device and method for mounting optical glass under vision guide
CN104331894A (en) Robot unstacking method based on binocular stereoscopic vision
CN114241298A (en) Tower crane environment target detection method and system based on laser radar and image fusion
CN105217324A Novel de-stacking method and system
CN207231476U Courier package grabbing device based on binocular vision
CN114882109A (en) Robot grabbing detection method and system for sheltering and disordered scenes
CN110640741A (en) Grabbing industrial robot with regular-shaped workpiece matching function
Xia et al. Workpieces sorting system based on industrial robot of machine vision
CN111715559A (en) Garbage sorting system based on machine vision
CN114155301A (en) Robot target positioning and grabbing method based on Mask R-CNN and binocular camera
Ben et al. Research on visual orientation guidance of industrial robot based on cad model under binocular vision
CN111169871A (en) Method for grabbing garbage can by intelligent manipulator of garbage truck and manipulator
CN206645534U Unordered robot grabbing device based on dual cameras
CN207013315U Flange quality control system based on visual detection
CN117325170A (en) Method for grabbing hard disk rack based on depth vision guiding mechanical arm
CN110533717A Target grasping method and device based on binocular vision
CN113920020B (en) Human body point cloud real-time restoration method based on depth generation model
CN114241133A (en) Garbage grabbing method and system based on temperature and depth detection
CN203636826U (en) Robot device
CN207014373U Service robot grasping system based on active vision positioning
CN206416179U Moving target tracking, positioning and grasping system based on binocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200828