CN114359403A - Three-dimensional space vision positioning method, system and device based on non-integrity mushroom image - Google Patents

Three-dimensional space vision positioning method, system and device based on non-integrity mushroom image

Info

Publication number
CN114359403A
CN114359403A (application CN202111565860.6A)
Authority
CN
China
Prior art keywords
image
mushroom
integrity
calculating
mushrooms
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111565860.6A
Other languages
Chinese (zh)
Inventor
胡荣林
张新新
马鸿泰
董甜甜
邵鹤帅
冯万利
付浩志
刘宬邑
李鑫鑫
荆佳龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zotao Robot Yancheng Co ltd
Huaiyin Institute of Technology
Original Assignee
Zotao Robot Yancheng Co ltd
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zotao Robot Yancheng Co ltd, Huaiyin Institute of Technology filed Critical Zotao Robot Yancheng Co ltd
Priority to CN202111565860.6A priority Critical patent/CN114359403A/en
Publication of CN114359403A publication Critical patent/CN114359403A/en
Pending legal-status Critical Current

Abstract

The invention relates to the field of three-dimensional space vision positioning, and discloses a three-dimensional space vision positioning method, system and device based on a non-integrity mushroom image. The system comprises an image acquisition module, a distance measurement module and a main control module. The image acquisition module is used for acquiring image information of the mushroom to be detected; the distance measurement module is used for measuring the vertical distance between the image acquisition module and the mushroom to be detected; the main control module is used for preprocessing the acquired image, identifying and positioning the mushrooms in the preprocessed image, detecting the integrity of the identified mushrooms, determining the center point of each mushroom in the image according to its integrity, and finally generating the three-dimensional coordinates of the mushrooms. Compared with the prior art, the invention can position mushrooms whose images are acquired incompletely and further improves the positioning precision of the mushrooms.

Description

Three-dimensional space vision positioning method, system and device based on non-integrity mushroom image
Technical Field
The invention relates to the field of three-dimensional space visual positioning, and in particular to positioning applied to non-integrity mushroom images, namely a three-dimensional space visual positioning method, system and device based on the non-integrity mushroom image.
Background
In recent years, the mushroom picking operation mode has shifted from manual picking to robot picking. A mature mushroom picking scheme is the robot operation mode based on a monocular camera. However, when a monocular-camera robot positions mushrooms, the positioning accuracy still deviates due to problems such as occlusion, and positioning is usually abandoned for mushrooms whose images are acquired incompletely. As a result, the efficiency of the whole picking process is greatly reduced, the precision is not high, and missed picking and wrong picking can occur. For incompletely acquired images, how to carry out three-dimensional space vision positioning of the mushrooms and improve the positioning precision is a problem that urgently needs to be solved.
Disclosure of Invention
The technical purpose is as follows: in order to solve the problems in the prior art, the invention provides a three-dimensional space vision positioning method, system and device for incompletely acquired mushroom images, which make up for the defects of the monocular-camera-based robot operation mode in terms of positioning precision and of handling incompletely acquired mushroom images.
The technical scheme is as follows: the invention discloses a three-dimensional space visual positioning method based on a non-integrity mushroom image, which is characterized by comprising the following steps:
step 1: fixing a shooting height H, and calibrating a camera to obtain calibration parameters, wherein the camera comprises a left-eye camera and a right-eye camera, and the calibration parameters comprise an internal reference matrix N, an external reference rotation matrix W and a translation matrix B;
step 2: carrying out image acquisition on mushrooms to be detected, and carrying out image preprocessing operation on images acquired by the left eye camera;
step 3: introducing the preprocessed image into a neural network model, identifying mushrooms in the image and generating a bounding box to determine the positions of the mushrooms in the image;
step 4: judging the integrity r of each mushroom in the image according to the bounding box generated in the step 3, calculating the central point of the measured mushroom so as to obtain the pixel coordinate of the measured mushroom, and calculating the world coordinate of the mushroom;
step 5: collecting the distance from the left-eye camera to the mushroom, calculating the height of the measured mushroom, and solving the three-dimensional coordinate of the measured mushroom;
step 6: performing the same operations from step 2 to step 5 on the image collected by the right eye camera, and carrying out one-to-one correspondence on the image collected by the right eye camera and the image collected by the left eye camera to obtain a coordinate O of a center point of mushroom in the image collected by the left eye camerawCorresponding point of (1) to (O)kIntroducing OkFor fine tuning OwCan obtain the final three-dimensional coordinate Of
[formula for Of given as an image in the original document]
2. The non-integral mushroom image based three-dimensional space visual positioning method according to claim 1, characterized in that the preprocessing operation in the step 2 is: zooming and rotating are adopted, then image enhancement is carried out, and an image data set for network training is obtained.
3. The non-integral mushroom image-based three-dimensional space visual positioning method according to claim 1, wherein the neural network model used in the step 3 is a YOLOv3 network model, which identifies and positions the preprocessed image to generate bounding boxes of each mushroom in the image.
4. The method for visually positioning a three-dimensional space based on a non-integral mushroom image according to claim 1, wherein the specific steps of calculating the measured mushroom center point in the step 4 are as follows:
step 4.1: according to the width ew and the height eh of the bounding box, calculating the integrity r of the mushroom in the captured image:
[integrity formula given as an image in the original document]
Step 4.2: recording mushroom images with the integrity r lower than 0.25 or higher than 4 in the images collected by the levo-ocular camera as pseudo targets, and removing the pseudo targets;
step 4.3: respectively taking mushroom and soil in the image as a foreground object and a background object of the image, classifying pixel points in the image into two clusters by using a clustering algorithm K-means, and separating the mushroom image in the collected image from the soil image;
step 4.4: carrying out pixel-level contour detection on the separated mushroom image, and fitting the contour of the mushroom image by using a prior circle to obtain the complete contour of the mushroom image;
step 4.5: when the integrity r of the mushroom in the captured image is 1 or lies in the (0.5, 2) interval, searching for the center point Ol within the mushroom image region Qij, Ol = argmax{Qij}, wherein i is the number of the searched row and j is the number of the searched column; otherwise, turning to step 4.4;
step 4.6: the corresponding center point is calculated when the mushroom integrity r in the acquired image is in the (0.25, 0.5) ∪ (2, 4) interval.
5. The non-integral mushroom image based three-dimensional space visual positioning method according to claim 4, wherein the step 4.4 of determining the outline of the mushroom image according to the calculated outline edge comprises the steps of:
step 4.4.1: operating on the foreground area with an edge detection filter, and extracting the pixel-level contour edge of the mushroom in the collected image;
step 4.4.2: taking each pixel of the obtained contour edge as an anchor point, and assuming that the anchor points are points in a prior circle of the mushroom, so that each anchor point corresponds to a prior circle;
step 4.4.3: calculating the pixel number of each prior circle, and selecting, among all the prior circles, the prior circle sharing the largest area with the mushroom image as the outline of the mushroom image.
6. The non-integrity mushroom image-based three-dimensional space visual positioning method according to claim 5, wherein the calculation in step 4.6 of the center point of the non-integrity mushroom image whose integrity is in the (0.25, 0.5) ∪ (2, 4) interval comprises the following steps:
step 4.6.1: if the integrity of the mushroom image is in the (0.25, 0.5) ∪ (2, 4) interval, padding the width and the height of the image acquired by the left-eye camera with P pure-white pixels, wherein the range of P is (25, 100), and establishing a pixel coordinate system with the upper left corner of the padded image as the origin;
step 4.6.2: arbitrarily taking four anchor point coordinates a(ax, ay), b(bx, by), c(cx, cy), d(dx, dy) on the obtained contour edge of the mushroom image; respectively obtaining the perpendicular bisector l1 of a(ax, ay)-b(bx, by) and the perpendicular bisector l2 of c(cx, cy)-d(dx, dy); and calculating the intersection OL(OLx, OLy) of the two lines, namely the pixel coordinate of the center point of the mushroom image, wherein:
[formulas for l1, l2 and the intersection OL given as images in the original document]
wherein x1 is the abscissa and y1 is the ordinate of a point that l1 passes through, and x2 is the abscissa and y2 is the ordinate of a point that l2 passes through;
step 4.6.3: calculating the height Owz of the mushroom according to the measured vertical distance h between the camera and the mushroom:
Owz=H-h
Wherein H is a fixed shooting height;
step 4.6.4: converting the calculated pixel coordinate OL of the center point into the world three-dimensional coordinate Ow:
Ow = W⁻¹N⁻¹TzOL + W⁻¹B
wherein Tz is the distance from the optical center to the mushroom in the camera coordinate system:
[formula for Tz given as an image in the original document]
Owz is the height of the mushroom.
The invention also discloses a three-dimensional space vision positioning system based on the non-integrity mushroom image, which comprises an image acquisition module, a distance measurement module and a main control module;
the image acquisition module is used for acquiring image information of the mushroom to be detected; it comprises a left-eye camera and a right-eye camera, which are used for jointly collecting image data of the same mushroom object and performing fine adjustment in a one-to-one correspondence manner;
the distance measurement module is used for measuring the distance between the image acquisition module and the mushroom to be detected;
the main control module comprises an image preprocessing unit, a target detection unit, an integrity detection unit and a coordinate generation unit:
the image preprocessing unit is used for preprocessing the acquired image;
the target detection unit is used for identifying and positioning mushrooms in the preprocessed image;
the integrity detection unit is used for detecting the integrity of mushrooms in the acquired image;
the coordinate generating unit is used for generating three-dimensional coordinates of the measured mushrooms.
Preferably, the image preprocessing unit, the target detection unit, the integrity detection unit, and the coordinate generation unit of the main control module specifically operate as follows:
the image preprocessing unit is used for zooming and rotating the acquired image, then carrying out image enhancement and obtaining an image data set for network training;
the target detection unit is used for identifying and positioning mushroom objects in the preprocessed image, transmitting the image into a neural network model, identifying mushrooms in the image and generating a boundary box so as to determine the positions of the mushrooms in the image;
the integrity detection unit is used for detecting and judging the integrity of the mushrooms in the collected images; when the integrity r is lower than 0.25 or higher than 4, the mushroom image is removed as a false target; for a mushroom image with integrity r equal to 1 or in the (0.5, 2) interval, the center point is searched within the mushroom outline; when the integrity r of the mushroom in the measured image is in the (0.25, 0.5) ∪ (2, 4) interval, the center point of the non-integrity mushroom image is calculated;
the coordinate generating unit is used for converting the pixel coordinate of the measured mushroom center point, calculating the height of the mushroom according to the vertical distance from the image acquisition module to the mushroom measured by the distance measuring module, and obtaining the three-dimensional coordinate of the measured mushroom.
Preferably, the distance measuring module comprises a laser distance measuring sensor for measuring the vertical distance from the image acquisition module to the mushroom to calculate the height of the measured mushroom.
The invention also discloses a three-dimensional space visual positioning device based on the non-integrity mushroom image, which comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the three-dimensional space visual positioning method based on the non-integrity mushroom image according to any one of claims 1 to 6.
Beneficial effects:
1. The technical scheme of the invention solves the problems of positioning mushrooms with incomplete image acquisition and of deviation of the measured mushroom center point caused by occlusion. With the three-dimensional space vision positioning method, system and device based on the non-integrity mushroom image, mushrooms whose images are acquired incompletely can be positioned, and the positioning precision of the mushrooms is further improved.
2. When the acquired image is incomplete because of the shooting conditions, occlusion between mushrooms, or soil attached to the mushroom, the clustering algorithm alone cannot obtain the complete outline of the mushroom. The invention therefore fits the outline of the mushroom image with a prior circle, which guarantees that a complete outline is obtained; points on this outline are then used to calculate the position of the mushroom center point, which is finally converted into a three-dimensional coordinate used for positioning, greatly improving the positioning precision.
Drawings
FIG. 1 is a schematic flow chart of a three-dimensional space visual positioning method based on non-integrity mushroom images according to the present invention;
FIG. 2 is a block diagram of the structure of the three-dimensional spatial visual positioning system based on non-integral mushroom images according to the present invention;
FIG. 3 is a schematic diagram of an edge detection filter according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a principle of calculating a center point of a mushroom image with non-integrity according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in further detail with reference to the attached drawings, but it should be understood that the scope of the present invention is not limited by the specific embodiments.
The following describes in detail a three-dimensional space visual positioning method based on non-integral mushroom images proposed by the present invention with reference to fig. 1. Referring to the example of agaricus bisporus, the structural block diagram of the three-dimensional space vision positioning system based on non-integral mushroom images disclosed by the invention is shown in the attached figure 2, and the positioning system mainly comprises:
the system comprises an image acquisition module, a distance measurement module and a main control module;
the image acquisition module is used for acquiring image information of the mushroom to be detected; it comprises a left-eye camera 1-1 and a right-eye camera 1-2, which jointly acquire image data of the same mushroom object and perform fine adjustment in a one-to-one correspondence manner;
the distance measurement module is used for measuring the distance between the image acquisition module and the mushroom to be detected; the distance measuring module comprises a laser distance measuring sensor 2-1 which is used for measuring the vertical distance between the image acquisition module and the mushroom to calculate the height of the mushroom to be measured.
The main control module comprises an image preprocessing unit 3-1, a target detection unit 3-2, an integrity detection unit 3-3 and a coordinate generation unit 3-4:
the image preprocessing unit 3-1 is used for preprocessing the acquired image; the method comprises the steps of zooming and rotating the collected image, then carrying out image enhancement, and obtaining an image data set for network training.
The target detection unit 3-2 is used for identifying and positioning mushrooms in the preprocessed image; the image is passed into a neural network model, mushrooms in the image are identified and a bounding box is generated to determine the position of the mushrooms in the image.
The integrity detection unit 3-3 is used for detecting the integrity of mushrooms in the collected image; when the integrity r is lower than 0.25 or higher than 4, the mushroom image is removed as a false target; searching for a center point within the mushroom outline for a mushroom image with integrity r in the (0.5, 2) interval and integrity 1; when the integrity r of the mushroom in the measured image is in the interval of (0.25, 0.5) U (2, 4), the center point of the non-integrity mushroom image is calculated.
The coordinate generating unit 3-4 is used for generating a three-dimensional coordinate of the measured mushroom, converting a pixel coordinate of a central point of the measured mushroom, calculating the height of the mushroom according to the vertical distance from the image acquisition module to the mushroom measured by the distance measuring module, and obtaining the three-dimensional coordinate of the measured mushroom.
As shown in fig. 1, the method disclosed in this embodiment includes the following steps:
step 1, firstly, fixing the shooting height H of an image acquisition module, printing a checkerboard as a calibration object, calibrating a camera of the image acquisition module, and calculating an internal reference matrix N, an external reference rotation matrix W and a translation matrix B of the camera.
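For readers who want to reproduce the calibration of step 1, the following sketch uses OpenCV's standard checkerboard routine; the board geometry, square size and image folder are illustrative assumptions, as the patent only states that a printed checkerboard is used to obtain N, W and B.

```python
import cv2
import numpy as np
import glob

# Assumed checkerboard geometry (9x6 inner corners, 25 mm squares) -- illustrative only.
PATTERN = (9, 6)
SQUARE_MM = 25.0

# Template of 3D corner positions on the board plane (Z = 0).
obj_template = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_template[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calib/left_*.png"):          # hypothetical calibration image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(obj_template)
        img_points.append(corners)

# N is the intrinsic matrix; rvecs/tvecs give the extrinsic rotation and translation
# per calibration view (the shooting height H is fixed beforehand as in step 1).
ret, N, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
W, _ = cv2.Rodrigues(rvecs[0])   # rotation matrix W of the first view
B = tvecs[0]                     # translation vector B of the first view
```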
Step 2, an image of the agaricus bisporus to be detected is acquired by the image acquisition module; the image preprocessing unit of the main control module performs zooming and rotation on the image acquired by the left-eye camera, and then carries out image enhancement to finish the preprocessing of the image.
Step 3, the preprocessed image is introduced into a neural network model (YOLOv3 is taken as an example in this embodiment) to identify and position the agaricus bisporus in the image and generate a bounding box that determines the position of the agaricus bisporus in the image.
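Step 3 only states that a YOLOv3 model generates bounding boxes. A minimal detection sketch using OpenCV's DNN module is given below; the model files, input size and thresholds are assumptions, not values from the patent.

```python
import cv2
import numpy as np

# Hypothetical paths to a trained YOLOv3 model; the patent does not specify the files.
net = cv2.dnn.readNetFromDarknet("yolov3-mushroom.cfg", "yolov3-mushroom.weights")

def detect_mushrooms(image, conf_thresh=0.5, nms_thresh=0.4):
    """Return [x, y, w, h] bounding boxes of mushrooms detected in a BGR image."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, scores = [], []
    for out in outputs:
        for det in out:                      # det = [cx, cy, bw, bh, objectness, class scores...]
            conf = det[4] * det[5:].max()
            if conf > conf_thresh:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(float(conf))

    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return [boxes[i] for i in np.array(keep).flatten()]
```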
Step 4, the integrity r of each agaricus bisporus in the image is judged according to the generated bounding box, and the central point of the measured agaricus bisporus is calculated so as to obtain its pixel coordinates and further calculate the world coordinates of the agaricus bisporus.
Step 4.1: according to the width ew and the height eh of the bounding box, the integrity r of the agaricus bisporus in the acquired image is calculated:
[integrity formula given as an image in the original document]
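The integrity formula itself is reproduced only as an image in the patent text. The sketch below assumes r is the width-to-height ratio of the bounding box, which is consistent with the symmetric intervals (0.25, 0.5), (0.5, 2) and (2, 4) used in steps 4.2-4.6; it illustrates the decision logic and is not the patent's exact formula.

```python
def integrity(e_w, e_h):
    """Bounding-box integrity; ASSUMED here to be the width/height ratio r = e_w / e_h
    (the patent's own formula is given only as an image)."""
    return e_w / e_h

def classify(r):
    """Map the integrity value onto the three cases of steps 4.2, 4.5 and 4.6."""
    if r <= 0.25 or r >= 4:
        return "false target"                          # step 4.2: discard
    if 0.5 < r < 2:
        return "search center inside region"           # step 4.5 (includes r == 1)
    return "fit prior circle and compute center"       # step 4.6: (0.25, 0.5) U (2, 4)
```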
step 4.2: and (3) recording the agaricus bisporus image with the integrity r lower than 0.25 or higher than 4 in the image collected by the levo-ocular camera as a false target, and removing the false target.
Step 4.3: the agaricus bisporus and the soil in the image are taken as the foreground object and the background object of the image respectively; a clustering algorithm, K-means in this example, is used to classify the pixel points in the image into two clusters, thereby separating the agaricus bisporus image from the soil image in the collected image.
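A minimal sketch of the K-means separation of step 4.3 follows; clustering on raw BGR color values and picking the brighter cluster as the mushroom are assumptions made for illustration, since the patent only states that the pixels are split into two clusters.

```python
import cv2
import numpy as np

def separate_mushroom_from_soil(bgr_roi):
    """Cluster the ROI pixels into 2 clusters (mushroom foreground vs. soil background)
    with K-means on color values and return a binary foreground mask."""
    samples = bgr_roi.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, 2, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

    # Assumption: the brighter cluster is the (white) mushroom cap, the darker one is soil.
    fg_label = int(np.argmax(centers.sum(axis=1)))
    mask = (labels.reshape(bgr_roi.shape[:2]) == fg_label).astype(np.uint8) * 255
    return mask
```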
Step 4.4: and carrying out pixel-level contour detection on the separated agaricus bisporus image, and fitting the contour of the agaricus bisporus image by using a prior circle to obtain the complete contour of the agaricus bisporus image, thereby reducing the influence caused by shielding among agaricus bisporus and shielding of soil on the agaricus bisporus.
Step 4.4.1: the foreground region is filtered with the edge detection filter shown in fig. 3 to extract the pixel-level contour edge of the agaricus bisporus in the captured image.
Step 4.4.2: each pixel from the resulting contour edge is taken as an anchor point and these anchor points are assumed to be points in the prior circle of agaricus bisporus such that each anchor point corresponds to a prior circle.
Step 4.4.3: the pixel number of each prior circle is calculated, and the prior circle sharing the largest area with the agaricus bisporus image among all the prior circles is selected as the outline of the agaricus bisporus image.
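The following sketch illustrates one way to realize the prior-circle fitting of steps 4.4.1-4.4.3. The Laplacian edge filter, the externally supplied prior radius, and placing each candidate center at that radius from the anchor toward the foreground centroid are all assumptions; the patent only specifies that every contour pixel acts as an anchor of a prior circle and that the circle sharing the largest area with the mushroom image is kept.

```python
import cv2
import numpy as np

def fit_prior_circle(fg_mask, radius, stride=4):
    """Fit a prior circle to a partially visible mushroom (steps 4.4.1-4.4.3).
    fg_mask: uint8 foreground mask from the K-means step; radius: assumed prior
    radius in pixels (e.g. half the larger bounding-box side)."""
    # Step 4.4.1: pixel-level contour edge of the foreground (Laplacian as an
    # illustrative stand-in for the filter shown in fig. 3).
    edges = cv2.Laplacian(fg_mask, cv2.CV_16S, ksize=3)
    ys, xs = np.nonzero(edges)

    cy, cx = [c.mean() for c in np.nonzero(fg_mask)]   # foreground centroid
    h, w = fg_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]

    best_center, best_overlap = None, -1
    # Step 4.4.2: every (subsampled) contour pixel is the anchor of one prior circle;
    # the candidate center is placed `radius` away from the anchor, toward the centroid.
    for ax, ay in zip(xs[::stride], ys[::stride]):
        d = np.hypot(cx - ax, cy - ay) + 1e-6
        ccx = ax + radius * (cx - ax) / d
        ccy = ay + radius * (cy - ay) / d
        # Step 4.4.3: keep the circle whose disk shares the largest area with the mask.
        disk = (xx - ccx) ** 2 + (yy - ccy) ** 2 <= radius ** 2
        overlap = np.count_nonzero(disk & (fg_mask > 0))
        if overlap > best_overlap:
            best_overlap, best_center = overlap, (ccx, ccy)
    return best_center, radius
```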
Step 4.5: when the integrity of the agaricus bisporus in the collected image is 1 or lies in the (0.5, 2) interval, the center point Ol = argmax{Qij} is searched within the agaricus bisporus image region Qij, where i is the number of the searched row and j is the number of the searched column. Otherwise, go to step 4.4.
Step 4.6: the corresponding center point is calculated when the integrity of the agaricus bisporus in the acquired image is in the (0.25, 0.5) ∪ (2, 4) interval.
The method for calculating the center point of the non-integrity mushroom image whose integrity is in the (0.25, 0.5) ∪ (2, 4) interval in step 4.6 comprises the following steps:
step 4.6.1: if the integrity of the agaricus bisporus image is in the (0.25, 0.5) U (2, 4) interval, filling a certain pixel value P in the width and height of the image acquired by the left-eye camera by pure white as shown in fig. 4, wherein the range of P is (25, 100), the filled pixel is 25 as an example, and establishing a pixel coordinate system by using the upper left corner of the filled image as an origin.
Step 4.6.2: as shown in fig. 4, four anchor point coordinates a(ax, ay), b(bx, by), c(cx, cy), d(dx, dy) are taken arbitrarily on the contour edge; the perpendicular bisector l1 of a(ax, ay)-b(bx, by) and the perpendicular bisector l2 of c(cx, cy)-d(dx, dy) are obtained respectively, and the intersection OL(OLx, OLy) of the two lines, i.e. the pixel coordinates of the center point of the agaricus bisporus image, is calculated,
wherein:
[formulas for l1, l2 and the intersection OL given as images in the original document]
where x1 is the abscissa and y1 is the ordinate of a point that l1 passes through, and x2 is the abscissa and y2 is the ordinate of a point that l2 passes through.
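The closed-form expressions for l1, l2 and OL are given only as formula images. The sketch below computes the same intersection by solving the two perpendicular-bisector equations as a small linear system, which is mathematically equivalent.

```python
import numpy as np

def center_from_four_anchors(a, b, c, d):
    """Intersection OL of the perpendicular bisector l1 of chord a-b and the
    perpendicular bisector l2 of chord c-d (step 4.6.2)."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    # Each bisector: (p2 - p1) . X = (p2 - p1) . midpoint(p1, p2)
    A = np.array([b - a, d - c], dtype=float)
    rhs = np.array([np.dot(b - a, (a + b) / 2.0),
                    np.dot(d - c, (c + d) / 2.0)])
    return np.linalg.solve(A, rhs)       # (OLx, OLy), pixel coordinates of the center

# Example with four points on the circle of radius 5 centered at (10, 10):
# center_from_four_anchors((15, 10), (10, 15), (5, 10), (13, 14)) -> array([10., 10.])
```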
Step 4.6.3: the vertical distance h between the image acquisition module and the agaricus bisporus is measured by the distance measuring module, from which the height Owz of the agaricus bisporus is calculated:
Owz=H-h
Wherein H is a fixed shooting height;
step 4.6.4: the pixel coordinate O of the central point can be obtained by calculationLIs converted into world three-dimensional coordinates Ow
Ow=W-1N-1TzOL+W-1B
Wherein, TzThe distance from the optical center to the agaricus bisporus under a camera coordinate system is as follows:
[formula for Tz given as an image in the original document]
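A sketch of the coordinate conversion of step 4.6.4 follows. It implements the relation Ow = W⁻¹N⁻¹TzOL + W⁻¹B exactly as written in the text (note the sign convention on B); treating Tz as an externally supplied scalar and overwriting the height component with Owz are assumptions, since the expression for Tz appears only as a formula image.

```python
import numpy as np

def pixel_to_world(OL, Tz, N, W, B, Owz=None):
    """Convert the center-point pixel coordinate OL = (OLx, OLy) into the world
    coordinate Ow via Ow = W^-1 N^-1 Tz OL_h + W^-1 B, with OL_h = [OLx, OLy, 1]^T.
    Tz is supplied by the caller; if Owz is given, the height component is replaced
    by Owz = H - h from step 4.6.3 (an assumption about how the two steps combine)."""
    OL_h = np.array([OL[0], OL[1], 1.0])
    W_inv = np.linalg.inv(np.asarray(W, dtype=float))
    Ow = W_inv @ np.linalg.inv(np.asarray(N, dtype=float)) @ (Tz * OL_h) \
         + W_inv @ np.asarray(B, dtype=float).reshape(3)
    if Owz is not None:
        Ow[2] = Owz
    return Ow
```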
and 5, measuring the distance from a left eye camera in the image acquisition module to the agaricus bisporus according to a laser ranging sensor of the ranging module, calculating the height of the measured agaricus bisporus, and further solving the three-dimensional coordinate of the measured agaricus bisporus.
Step 6, the same steps 2 to 5 are executed on the image acquired by the right-eye camera of the image acquisition module, and the acquired image is matched one-to-one with the image acquired by the left-eye camera to obtain the point Ok corresponding to the coordinate Ow of the agaricus bisporus center point in the left-eye image; Ok is introduced to fine-tune Ow, and the final three-dimensional coordinate Of is obtained:
[formula for Of given as an image in the original document]
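The fusion rule that produces Of from Ow and Ok in step 6 is likewise given only as a formula image. Purely for illustration, the sketch below assumes a simple (optionally weighted) average of the two estimates.

```python
import numpy as np

def fuse_left_right(Ow, Ok, weight=0.5):
    """Fuse the left-eye estimate Ow with its right-eye counterpart Ok into the final
    coordinate Of. A plain (optionally weighted) average is ASSUMED here; the
    patent's actual fusion formula is not reproduced in the text."""
    Ow = np.asarray(Ow, dtype=float)
    Ok = np.asarray(Ok, dtype=float)
    return (1.0 - weight) * Ow + weight * Ok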
The above embodiments are merely illustrative of the technical concepts and features of the present invention, and the purpose of the embodiments is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (10)

1. A three-dimensional space visual positioning method based on non-integral mushroom images is characterized by comprising the following steps:
step 1: fixing a shooting height H, and calibrating a camera to obtain calibration parameters, wherein the camera comprises a left-eye camera and a right-eye camera, and the calibration parameters comprise an internal reference matrix N, an external reference rotation matrix W and a translation matrix B;
step 2: carrying out image acquisition on mushrooms to be detected, and carrying out image preprocessing operation on images acquired by the left eye camera;
step 3: introducing the preprocessed image into a neural network model, identifying mushrooms in the image and generating a bounding box to determine the positions of the mushrooms in the image;
step 4: judging the integrity r of each mushroom in the image according to the bounding box generated in the step 3, calculating the central point of the measured mushroom so as to obtain the pixel coordinate of the measured mushroom, and calculating the world coordinate of the mushroom;
step 5: collecting the distance from the left-eye camera to the mushroom, calculating the height of the measured mushroom, and solving the three-dimensional coordinate of the measured mushroom;
step 6: performing the same operations from step 2 to step 5 on the image collected by the right eye camera, and carrying out one-to-one correspondence on the image collected by the right eye camera and the image collected by the left eye camera to obtain a coordinate O of a center point of mushroom in the image collected by the left eye camerawCorresponding point of (1) to (O)kIntroducing OkFor fine tuning OwCan obtain the final three-dimensional coordinate Of
[formula for Of given as an image in the original document]
2. The non-integral mushroom image based three-dimensional space visual positioning method according to claim 1, characterized in that the preprocessing operation in the step 2 is: zooming and rotating are adopted, then image enhancement is carried out, and an image data set for network training is obtained.
3. The non-integral mushroom image-based three-dimensional space visual positioning method according to claim 1, wherein the neural network model used in the step 3 is a YOLOv3 network model, which identifies and positions the preprocessed image to generate bounding boxes of each mushroom in the image.
4. The method for visually positioning a three-dimensional space based on a non-integral mushroom image according to claim 1, wherein the specific steps of calculating the measured mushroom center point in the step 4 are as follows:
step 4.1: according to the width ew and the height eh of the bounding box, calculating the integrity r of the mushroom in the acquired image:
[integrity formula given as an image in the original document]
step 4.2: recording mushroom images with the integrity r lower than 0.25 or higher than 4 in the images collected by the left-eye camera as pseudo targets, and removing the pseudo targets;
step 4.3: respectively taking mushroom and soil in the image as a foreground object and a background object of the image, classifying pixel points in the image into two clusters by using a clustering algorithm K-means, and separating the mushroom image in the collected image from the soil image;
step 4.4: carrying out pixel-level contour detection on the separated mushroom image, and fitting the contour of the mushroom image by using a prior circle to obtain the complete contour of the mushroom image;
step 4.5: when the integrity r of the mushroom in the captured image is 1 or lies in the (0.5, 2) interval, searching for the center point Ol within the mushroom image region Qij, Ol = argmax{Qij}, wherein i is the number of the searched row and j is the number of the searched column; otherwise, turning to step 4.4;
step 4.6: calculating the corresponding center point when the integrity r of the mushroom in the collected image is in the (0.25, 0.5) ∪ (2, 4) interval.
5. The non-integral mushroom image based three-dimensional space visual positioning method according to claim 4, wherein the step 4.4 of determining the outline of the mushroom image according to the calculated outline edge comprises the steps of:
step 4.4.1: operating on the foreground area with an edge detection filter, and extracting the pixel-level contour edge of the mushroom in the collected image;
step 4.4.2: taking each pixel of the obtained contour edge as an anchor point, and assuming that the anchor points are points in a prior circle of the mushroom, so that each anchor point corresponds to a prior circle;
step 4.4.3: calculating the pixel number of each prior circle, and selecting, among all the prior circles, the prior circle sharing the largest area with the mushroom image as the outline of the mushroom image.
6. The non-integrity mushroom image-based three-dimensional space visual positioning method according to claim 5, wherein the calculation in step 4.6 of the center point of the non-integrity mushroom image whose integrity is in the (0.25, 0.5) ∪ (2, 4) interval comprises the following steps:
step 4.6.1: if the integrity of the mushroom image is in the (0.25, 0.5) ∪ (2, 4) interval, padding the width and the height of the image acquired by the left-eye camera with P pure-white pixels, wherein the range of P is (25, 100), and establishing a pixel coordinate system with the upper left corner of the padded image as the origin;
step 4.6.2: arbitrarily taking four anchor point coordinates a(ax, ay), b(bx, by), c(cx, cy), d(dx, dy) on the obtained contour edge of the mushroom image; respectively obtaining the perpendicular bisector l1 of a(ax, ay)-b(bx, by) and the perpendicular bisector l2 of c(cx, cy)-d(dx, dy); and calculating the intersection OL(OLx, OLy) of the two lines, namely the pixel coordinate of the center point of the mushroom image, wherein:
[formulas for l1, l2 and the intersection OL given as images in the original document]
wherein x1 is the abscissa and y1 is the ordinate of a point that l1 passes through, and x2 is the abscissa and y2 is the ordinate of a point that l2 passes through;
step 4.6.3: calculating the height Owz of the mushroom according to the measured vertical distance h between the camera and the mushroom:
Owz=H-h
Wherein H is a fixed shooting height;
step 4.6.4: converting the calculated pixel coordinate OL of the center point into the world three-dimensional coordinate Ow:
Ow = W⁻¹N⁻¹TzOL + W⁻¹B
wherein Tz is the distance from the optical center to the mushroom in the camera coordinate system:
[formula for Tz given as an image in the original document]
Owz is the height of the mushroom.
7. A three-dimensional space vision positioning system based on non-integrity mushroom images is characterized by comprising an image acquisition module, a distance measurement module and a main control module;
the image acquisition module is used for acquiring image information of the mushroom to be detected; it comprises a left-eye camera and a right-eye camera, which are used for jointly collecting image data of the same mushroom object and performing fine adjustment in a one-to-one correspondence manner;
the distance measurement module is used for measuring the distance between the image acquisition module and the mushroom to be detected;
the main control module comprises an image preprocessing unit, a target detection unit, an integrity detection unit and a coordinate generation unit:
the image preprocessing unit is used for preprocessing the acquired image;
the target detection unit is used for identifying and positioning mushrooms in the preprocessed image;
the integrity detection unit is used for detecting the integrity of mushrooms in the acquired image;
the coordinate generating unit is used for generating three-dimensional coordinates of the measured mushrooms.
8. The non-integrity mushroom image-based three-dimensional spatial visual positioning system of claim 7, wherein the image preprocessing unit, the target detection unit, the integrity detection unit, and the coordinate generation unit of the main control module specifically operate as follows:
the image preprocessing unit is used for zooming and rotating the acquired image, then carrying out image enhancement and obtaining an image data set for network training;
the target detection unit is used for identifying and positioning mushroom objects in the preprocessed image, transmitting the image into a neural network model, identifying mushrooms in the image and generating a boundary box so as to determine the positions of the mushrooms in the image;
the integrity detection unit is used for detecting and judging the integrity of the mushrooms in the collected images; when the integrity r is lower than 0.25 or higher than 4, the mushroom image is removed as a false target; for a mushroom image with integrity r equal to 1 or in the (0.5, 2) interval, the center point is searched within the mushroom outline; when the integrity r of the mushroom in the measured image is in the (0.25, 0.5) ∪ (2, 4) interval, the center point of the non-integrity mushroom image is calculated;
the coordinate generating unit is used for converting the pixel coordinate of the measured mushroom center point, calculating the height of the mushroom according to the vertical distance from the image acquisition module to the mushroom measured by the distance measuring module, and obtaining the three-dimensional coordinate of the measured mushroom.
9. The non-integral mushroom image based three-dimensional spatial visual positioning system of claim 8, wherein the distance measuring module comprises a laser distance measuring sensor for measuring a vertical distance from the image acquisition module to the mushroom to calculate the height of the mushroom.
10. A non-integrity mushroom image based three-dimensional spatial visual positioning device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the computer program implements the steps of the non-integrity mushroom image based three-dimensional spatial visual positioning method according to any one of claims 1 to 6.
CN202111565860.6A 2021-12-20 2021-12-20 Three-dimensional space vision positioning method, system and device based on non-integrity mushroom image Pending CN114359403A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111565860.6A CN114359403A (en) 2021-12-20 2021-12-20 Three-dimensional space vision positioning method, system and device based on non-integrity mushroom image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111565860.6A CN114359403A (en) 2021-12-20 2021-12-20 Three-dimensional space vision positioning method, system and device based on non-integrity mushroom image

Publications (1)

Publication Number Publication Date
CN114359403A true CN114359403A (en) 2022-04-15

Family

ID=81100754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111565860.6A Pending CN114359403A (en) 2021-12-20 2021-12-20 Three-dimensional space vision positioning method, system and device based on non-integrity mushroom image

Country Status (1)

Country Link
CN (1) CN114359403A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117422717A (en) * 2023-12-19 2024-01-19 长沙韶光芯材科技有限公司 Intelligent mask stain positioning method and system
CN117422717B (en) * 2023-12-19 2024-02-23 长沙韶光芯材科技有限公司 Intelligent mask stain positioning method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination