CN103716399B - Wireless-network-based remote interactive fruit-picking cooperative asynchronous control system and method - Google Patents

Wireless-network-based remote interactive fruit-picking cooperative asynchronous control system and method

Info

Publication number
CN103716399B
Authority
CN
China
Prior art keywords
robot
fruit
module
coordinate
timestamp
Prior art date
Legal status
Active
Application number
CN201310746643.6A
Other languages
Chinese (zh)
Other versions
CN103716399A (en
Inventor
刘成良
刘佰鑫
贡亮
赵源深
陈冉
牛庆良
黄丹枫
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN201310746643.6A priority Critical patent/CN103716399B/en
Publication of CN103716399A publication Critical patent/CN103716399A/en
Application granted granted Critical
Publication of CN103716399B publication Critical patent/CN103716399B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

A wireless-network-based remote interactive fruit-picking cooperative control system in the field of data recognition, comprising: a visual information acquisition module, a robot pose and position information acquisition module, a robot motion control module, a network transmission module, a human-computer interaction module, a three-dimensional coordinate resolving module, and a robot task list module. Through wireless-network-based cooperative asynchronous control, the present invention makes the fruit recognition, positioning, and picking execution of the fruit picking robot asynchronous, improving efficiency. At the same time, the system exploits the complementary advantages of human recognition ability in unstructured environments and precise robot positioning: the recognition range is wide, the fruit variety is not restricted, and fruit ripeness can be judged.

Description

Wireless-network-based remote interactive fruit-picking cooperative asynchronous control system and method
Technical field
The present invention relates to a system and method in the field of data recognition, specifically a wireless-network-based remote interactive fruit-picking cooperative control system and method.
Background technology
The development of wireless networks, particularly of the IEEE 802.11 standards, has continuously increased transfer rates and communication distances, making wirelessly controlled robots feasible. Wireless local area networks (WLANs) based on the 802.11 standards are the most popular wireless LAN technology; they are cost-effective and flexible and convenient to deploy, which gives them important application value in agricultural mechanization and intelligence.
A fruit picking robot is a robot custom-designed for the specific task of fruit picking. At present, the fruit recognition and positioning modules of the fruit picking robots being researched and developed are all located on the robot body. The Silsoe Research Institute in Britain built a mushroom harvester (1994): its harvesting success rate was about 75%, its picking time was 6.7 s per mushroom, and growth inclination was the main cause of picking failure. The Institute of Agricultural and Environmental Engineering in the Netherlands developed a cucumber harvesting robot (1996): cucumber detection succeeded in more than 90% of cases and picking in about 80%, but the picking time of about 54 s per cucumber was too long for commercial use. Kyungpook National University in Korea developed an apple picking machine (1998) that recognized apples from outside the canopy with a recognition rate of 85% at about 5 s per apple, but the success rate could not meet application requirements. Cao Qixin et al. of Shanghai Jiao Tong University developed a strawberry recognition algorithm (2008): in a laboratory environment the mean recognition time was 1 s, the stem misjudgment rate was 7%, and 5% of fruits were damaged during picking. Ji Chao, Li Wei et al. of China Agricultural University developed a cucumber picking robot with a picking success rate of 85% and a picking time of 28.6 s per cucumber, which has higher practicality. Limited by the robots' computing capability, the recognition and positioning stages have low accuracy and take too long, so these systems cannot be put into production use.
A search of the prior art found Chinese patent document CN102682286A, published 2012.09.19, which discloses a laser-vision-based fruit recognition technique for picking robots. In summary: a laser rangefinder and a linear-translation mechanical mechanism are combined into a laser vision system that obtains local distance information of the fruit tree, and a three-dimensional image marking scene distance features is generated using an agreed mapping between distance values and gray values. After smoothing this three-dimensional image, chain-code tracking and random circle detection are used to calculate the centroid coordinates and radius of the fruit in the image.
Compared with the present invention, however, that technique has the following defects and deficiencies: the recognition range is limited, since only a local region can be processed and it is difficult to effectively recognize all the fruits on a tree; the recognized categories are limited, since only round fruits can be processed; and fruit ripeness cannot be judged, so green fruits are picked in the subsequent harvesting operation, causing economic loss.
Summary of the invention
Aiming at the above deficiencies of the prior art, the present invention provides a wireless-network-based remote interactive fruit-picking cooperative control system and method, so that the fruit recognition, positioning, and picking execution of the fruit picking robot are asynchronous, improving efficiency.
The present invention is achieved by the following technical solutions:
The present invention relates to a wireless-network-based remote interactive fruit-picking cooperative asynchronous control system, comprising: a visual information acquisition module, a robot pose and position information acquisition module, a robot motion control module, a network transmission module, a human-computer interaction module, a three-dimensional coordinate resolving module, and a robot task list module, wherein:
the visual information acquisition module, robot pose and position information acquisition module, robot motion control module, and robot task list module are arranged on the robot, while the human-computer interaction module and three-dimensional coordinate resolving module are arranged on the main control computer;
the robot task list module sends a timestamp to the visual information acquisition module and to the robot pose and position information acquisition module; the visual information acquisition module collects fruit image information and transmits it together with the timestamp to the network transmission module, while the robot pose and position information acquisition module establishes a message breakpoint for the timestamp; the network transmission module sends the image information and timestamp to the human-computer interaction module of the main control computer; the human-computer interaction module determines the fruit regions in the image, and the three-dimensional coordinate resolving module determines, for each fruit in those regions, the three-dimensional world coordinate sequence relative to the robot cameras corresponding to the timestamp; each three-dimensional world coordinate sequence is returned with its timestamp to the robot task list module through the network transmission module; the robot task list module reads the three-dimensional world coordinate sequence corresponding to the timestamp in first-in-first-out order, a transformation matrix determines the coordinate sequence of the fruits to be picked relative to the current binocular camera coordinate system, and the robot motion control module inversely resolves this coordinate sequence, sends control signals, and controls the picking action of the robot.
The visual information acquisition module comprises: two cameras arranged on the robot and an image compression module for compressing image information.
The network transmission module comprises an application layer, a network layer, a transport layer, and a physical layer, wherein: the application layer uses a custom protocol, the transport layer uses the TCP protocol, and the network layer uses the IP protocol family; the physical layer comprises a mutually communicating wireless omnidirectional access point group, a hub, and a wireless directional relay, where the wireless omnidirectional access point group communicates with the robot and the wireless directional relay communicates with the main control computer.
The custom protocol specifically includes: a timestamp, an information type, a header check section, and an instruction section.
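The patent names the protocol fields but not their widths or layout. A minimal sketch of packing and unpacking such a frame, with an assumed byte layout (field widths, byte order, and the additive checksum are illustrative assumptions, not from the patent):

```python
import struct

# Assumed layout: day, hour, minute, second (1 byte each), millisecond
# (2 bytes), message type (1 byte), header checksum (2 bytes), then payload.
HEADER_FMT = ">BBBBHBH"
HEADER_LEN = struct.calcsize(HEADER_FMT)

def pack_frame(day, hour, minute, sec, ms, msg_type, payload: bytes) -> bytes:
    """Build one application-layer frame: timestamp, type, check, instruction."""
    checksum = (day + hour + minute + sec + ms + msg_type) & 0xFFFF
    header = struct.pack(HEADER_FMT, day, hour, minute, sec, ms, msg_type, checksum)
    return header + payload

def unpack_frame(frame: bytes):
    """Split a frame back into (timestamp tuple, type, payload), verifying the check."""
    day, hour, minute, sec, ms, msg_type, checksum = struct.unpack(
        HEADER_FMT, frame[:HEADER_LEN])
    if checksum != (day + hour + minute + sec + ms + msg_type) & 0xFFFF:
        raise ValueError("header check failed")
    return (day, hour, minute, sec, ms), msg_type, frame[HEADER_LEN:]
```

The timestamp doubles as the key that later matches returned coordinate sequences to the pose breakpoint, so it travels in every frame.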
The robot task list module comprises: a new-task unit for sending timestamps; a task management unit for reading, transforming, and sorting coordinate sequences and for aborting tasks; and a resource release unit for releasing the hardware resources occupied by completed tasks.
The present invention also relates to a control method for the above system. The method creates a timestamp, collects fruit image information and robot pose and position information corresponding to the timestamp, builds coordinate sequences from the image information as the robot's task list, and has the robot perform picking actions according to that task list. It comprises the following steps:
Step 1: the robot task list module sends a timestamp to the visual information acquisition module and the robot pose and position information acquisition module; the visual information acquisition module collects fruit image information and transmits it with the timestamp to the network transmission module; the robot pose and position information acquisition module establishes a message breakpoint for the timestamp; and the network transmission module sends the image information and timestamp to the main control computer;
Step 2: the human-computer interaction module of the main control computer determines the fruit regions in the image, and the three-dimensional coordinate resolving module determines, for each fruit in those regions, the three-dimensional world coordinate sequence relative to the robot cameras corresponding to the timestamp;
Step 3: each three-dimensional world coordinate sequence is returned with its timestamp to the robot task list module through the network transmission module; the robot task list module reads the three-dimensional world coordinate sequence corresponding to the timestamp in first-in-first-out order; a transformation matrix determines the coordinate sequence of the fruits to be picked relative to the current binocular camera coordinate system; and the robot motion control module inversely resolves this coordinate sequence, sends control signals, and controls the picking action of the robot.
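The first-in-first-out reading of timestamped coordinate sequences can be sketched with a simple queue (class and field names are illustrative assumptions, not from the patent):

```python
from collections import deque

class TaskList:
    """Minimal sketch of the task-list FIFO: coordinate sequences returned by
    the main control computer are queued under their timestamps and consumed
    oldest-first by the motion control module."""

    def __init__(self):
        self.queue = deque()  # (timestamp, coordinate_sequence) pairs

    def enqueue(self, timestamp, coords):
        """Store a returned world-coordinate sequence under its timestamp."""
        self.queue.append((timestamp, coords))

    def next_task(self):
        """Read the oldest pending coordinate sequence (first in, first out)."""
        return self.queue.popleft()

tasks = TaskList()
tasks.enqueue("t1", [(0.4, 0.1, 1.2)])
tasks.enqueue("t2", [(0.5, 0.0, 1.1)])
ts, coords = tasks.next_task()  # "t1" is read before "t2"
```

The FIFO discipline is what lets recognition on the main control computer run ahead of picking on the robot: new tasks accumulate at the tail while the arm works through the head.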
Step 1 specifically comprises the following sub-steps:
1) the task list module of the robot sends a timestamp to the visual information acquisition module and the robot pose and position information acquisition module;
2) the vision acquisition module uses its two cameras, arranged at the robot's binocular position, to shoot and compress the image frames of two channels of image information, and transmits them together with the corresponding timestamp to the network transmission module;
3) the network transmission module packages the timestamp and image frames and transmits them to the main control computer;
4) the robot pose and position information acquisition module collects pose and position information and establishes a breakpoint mark for the received timestamp.
The pose and position information comprises: compass output information describing the robot's location, driver motor encoder output information, and the height and pitch angle of the pan-tilt platform on which the binocular cameras are mounted.
The breakpoint mark is a pointer to the storage address of the pose and position information at the time the timestamp was received.
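The breakpoint idea above can be sketched as a snapshot store: when a timestamp arrives, the pose record current at that moment is saved so it can be recalled when the coordinate sequence for that timestamp comes back. Field names and values are illustrative assumptions:

```python
class PoseBreakpoints:
    """Minimal sketch of message breakpoints: timestamp -> pose snapshot."""

    def __init__(self):
        self.current_pose = None  # latest compass/encoder/pan-tilt reading
        self.breakpoints = {}     # timestamp -> stored pose snapshot

    def update_pose(self, compass, encoders, platform_height, pitch):
        self.current_pose = {
            "compass": compass, "encoders": encoders,
            "height": platform_height, "pitch": pitch,
        }

    def mark(self, timestamp):
        """Establish a breakpoint: remember the pose at this timestamp."""
        self.breakpoints[timestamp] = self.current_pose

    def pose_at(self, timestamp):
        """Recover the pose the robot had when the image was taken."""
        return self.breakpoints[timestamp]

bp = PoseBreakpoints()
bp.update_pose(compass=87.5, encoders=(1200, 1180), platform_height=1.1, pitch=12.0)
bp.mark("t1")
bp.update_pose(compass=92.0, encoders=(1500, 1495), platform_height=1.1, pitch=10.0)
# pose_at("t1") still returns the snapshot taken when "t1" was issued
```

This is the record later used to form the breakpoint pose matrix S0 against the current pose, even though the robot has moved in the meantime.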
Step 2 specifically comprises the following sub-steps:
1) the application layer of the network transmission module unpacks the received information, obtaining the timestamp and the compressed image information;
2) the decompressed image information is displayed in the visual interface of the human-computer interaction module; the fruits to be picked are determined on the two images shot by the two cameras, the fruit positions in the two images of the same frame are matched, and the three-dimensional coordinate resolving module calculates the three-dimensional world coordinates (x_w, y_w, z_w) of each fruit to be picked, matching and calculating all the fruits on the image in turn;
The calculation of the three-dimensional world coordinates of each fruit to be picked specifically comprises the following steps:
2.1) the two-dimensional coordinates (x, y) of the fruit relative to a single camera:
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{f}{d_x} & s & u_0 & 0 \\ 0 & \frac{f}{d_y} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};$$
where: (x, y) are the two-dimensional coordinates of the fruit relative to a single camera in the image coordinate system; Z_c is the coordinate conversion factor describing the transformation from the image coordinate system to the world coordinate system; M_1 is the camera intrinsic parameter matrix; M_2 is the camera extrinsic parameter matrix; M = M_1 M_2; f is the camera's effective focal length; d_x and d_y are the horizontal and vertical pixel unit lengths; (u_0, v_0) is the origin of the image coordinate system expressed in the computer coordinate system, in millimeters; and s is the distortion parameter of the nonlinear camera model;
The camera intrinsic parameter matrix M_1 gives the transformation from the computer coordinate system to the image coordinate system:
$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{d_x} & s & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix};$$
The camera extrinsic parameter matrix M_2 gives the transformation between the world coordinate system and the camera coordinate system, where R is a determined orthogonal rotation matrix and T a determined translation matrix:
$$\begin{bmatrix} u \\ v \\ w \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};$$
2.2) obtain the depth z of the fruit relative to the binocular cameras: let C_1 and C_2 be the optical centers of the two cameras, b the horizontal distance between C_1 and C_2, and f the focal length of the cameras; P_1 and P_2 are the mapping points of a spatial geometric point P on the two camera imaging planes; drop perpendiculars from C_1 and C_2 to the camera coordinate system plane, with feet A_1 and A_2; drop a perpendicular from P to the camera coordinate system plane, with foot B, intersecting the imaging plane at E; let A_1C_1 = i_a and A_2C_2 = i_b; then $$z = \frac{bf}{i_a - i_b};$$
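A one-line numeric sketch of the binocular depth relation z = bf/(i_a − i_b): the baseline, focal length, and image-plane offsets below are made-up values for illustration only.

```python
def binocular_depth(b, f, i_a, i_b):
    """Depth z from baseline b, focal length f, and the two image-plane
    offsets i_a, i_b; their difference plays the role of the disparity."""
    return b * f / (i_a - i_b)

# Example: 0.12 m baseline, 8 mm focal length, 1.2 mm offset difference
z = binocular_depth(b=0.12, f=0.008, i_a=0.0020, i_b=0.0008)  # -> 0.8 m
```

The formula makes the practical trade-off visible: depth resolution degrades as the offset difference i_a − i_b shrinks, i.e. for fruits far from the baseline.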
2.3) the projections of the spatial point [x_w y_w z_w]^T on the images of the two cameras are expressed as:
$$Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = M' \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^1 & m_{12}^1 & m_{13}^1 & m_{14}^1 \\ m_{21}^1 & m_{22}^1 & m_{23}^1 & m_{24}^1 \\ m_{31}^1 & m_{32}^1 & m_{33}^1 & m_{34}^1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};$$
$$Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = M'' \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^2 & m_{12}^2 & m_{13}^2 & m_{14}^2 \\ m_{21}^2 & m_{22}^2 & m_{23}^2 & m_{24}^2 \\ m_{31}^2 & m_{32}^2 & m_{33}^2 & m_{34}^2 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};$$
where: Z_{c1} and Z_{c2} are the coordinate conversion factors from the image coordinate systems of the two cameras to the world coordinate system, M' and M'' are the products of the intrinsic and extrinsic parameter matrices of the two cameras, and m_{ij}^k (k = 1, 2; i = 1, 2, 3; j = 1, 2, 3, 4) denote the matrix elements;
2.4) set the x_w O_w y_w plane of the world coordinate system to coincide with the image coordinate system of one of the cameras, with the x_w axis along the optical axis; then, from the positional relationship of the two cameras, the spatial point [x_w y_w z_w]^T is calculated, where R_c denotes the orthogonal rotation matrix and T_C the translation matrix of the positional relationship of the two cameras.
The transformation matrix of step 3 means: the pose and position information acquisition module sets a message breakpoint for the pending task's timestamp, and the coordinate sequence of the fruits to be picked relative to the two-camera coordinate system is obtained from the pose and position description matrix at this message breakpoint together with the current pose and position description matrix.
Step 3 specifically comprises the following sub-steps:
1) the three-dimensional world coordinates of each fruit to be picked are returned with the timestamp to the robot task list module through the network transmission module;
2) the transformation matrix: let the pose and position description matrix of the robot at the message breakpoint be S_0 and the pose and position description matrix of the current robot be S_p; the transformation matrix S_c is expressed as follows:
$$S_p = S_c \cdot S_0, \quad \text{i.e.} \quad S_c = S_p \cdot S_0^{-1};$$
(x_w, y_w, z_w) denote the three-dimensional world coordinates of each fruit to be picked; the coordinates (x_p, y_p, z_p) of the fruit to be picked relative to the current two-camera coordinate system are:
$$\begin{bmatrix} x_p \\ y_p \\ z_p \end{bmatrix} = S_c \cdot \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = S_p \cdot S_0^{-1} \cdot \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix};$$
The robot's pose and position description matrix S_0 and the current robot's pose and position description matrix S_p both comprise: compass output information, drive wheel motor encoder output information, and the height and pitch angle of the pan-tilt platform on which the binocular cameras are mounted.
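The breakpoint-to-current correction S_c = S_p · S_0⁻¹ can be sketched with 4×4 homogeneous transforms. The concrete contents of the matrices (a yaw rotation from the compass, a translation from the wheel encoders) are assumptions for illustration; the patent only specifies which sensors feed them:

```python
import numpy as np

def homogeneous(yaw_rad, translation):
    """Build a 4x4 pose matrix from a heading angle and a translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]  # rotation about z
    T[:3, 3] = translation
    return T

S0 = homogeneous(0.0, [0.0, 0.0, 0.0])        # pose at the message breakpoint
Sp = homogeneous(np.pi / 2, [1.0, 0.0, 0.0])  # current pose (robot has moved)

Sc = Sp @ np.linalg.inv(S0)                   # transformation matrix S_c

fruit_w = np.array([2.0, 0.5, 1.0, 1.0])      # [x_w, y_w, z_w, 1]
fruit_p = Sc @ fruit_w                        # coordinates in the current frame
```

This is the step that lets the robot keep driving between image capture and picking: the stale world coordinates are re-expressed in whatever frame the cameras occupy now.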
Technical effect
Based on cooperative asynchronous control over a wireless network, the present invention makes the fruit recognition, positioning, and picking execution of the fruit picking robot asynchronous, improving efficiency. At the same time, the system exploits the complementary advantages of human recognition ability in unstructured environments and precise robot positioning: the recognition range is wide, the fruit variety is not restricted, and fruit ripeness can be judged. Compared with conventional machine-vision fruit recognition algorithms, it can effectively avoid the problems caused by unstable illumination and occlusion.
Brief description of the drawings
Fig. 1 is the system connection diagram of embodiment 1;
Fig. 2 is the custom protocol of the application layer;
Fig. 3 is the topological structure diagram of the physical layer;
Fig. 4 is the method step diagram of embodiment 2;
Fig. 5 is the coordinate system diagram of embodiment 2;
Fig. 6 is the schematic diagram of the visual ranging principle of embodiment 2.
Detailed description of the embodiments
Embodiments of the invention are elaborated below. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementations and specific operating processes are given, but the protection scope of the present invention is not limited to the following embodiments.
Embodiment 1
As shown in Fig. 1, the present embodiment comprises: a visual information acquisition module, a robot pose and position information acquisition module, a robot motion control module, a network transmission module, a human-computer interaction module, a three-dimensional coordinate resolving module, and a robot task list module, wherein:
the visual information acquisition module, robot pose and position information acquisition module, robot motion control module, and robot task list module are arranged on the robot, while the human-computer interaction module and three-dimensional coordinate resolving module are arranged on the main control computer;
the robot task list module sends a timestamp to the visual information acquisition module and to the robot pose and position information acquisition module; the visual information acquisition module collects fruit image information and transmits it together with the timestamp to the network transmission module, while the robot pose and position information acquisition module establishes a message breakpoint for the timestamp; the network transmission module sends the image information and timestamp to the human-computer interaction module of the main control computer; the human-computer interaction interface determines the fruit regions in the image, and the three-dimensional coordinate resolving module determines, for each fruit in those regions, the three-dimensional world coordinate sequence relative to the robot cameras corresponding to the timestamp; each three-dimensional world coordinate sequence is returned with its timestamp to the robot task list module through the network transmission module; the robot task list module reads the three-dimensional world coordinate sequence corresponding to the timestamp in first-in-first-out order, a transformation matrix determines the coordinate sequence of the fruits to be picked relative to the current binocular camera coordinate system, and the robot motion control module inversely resolves this coordinate sequence, sends control signals, and controls the picking action of the robot.
The visual information acquisition module comprises: binocular cameras arranged on the robot and an image compression module for compressing image information.
As shown in Fig. 2, the custom protocol specifically includes: a timestamp (DD, HH, MM, SS, MS), an information type (TYPE), a header check section, and an instruction section (DATA).
The network transmission module comprises an application layer, a network layer, a transport layer, and a physical layer, wherein the application layer uses the custom protocol, the transport layer uses the TCP protocol, and the network layer uses the IP protocol family.
As shown in Fig. 3, the physical layer comprises: a mutually communicating wireless omnidirectional access point group, a hub, and a wireless directional relay; the wireless omnidirectional access point group communicates with the robot, and the wireless directional relay communicates with the main control computer.
The robot task list module comprises: a new-task unit for sending timestamps; a task management unit for reading, transforming, and sorting coordinate sequences and for aborting tasks; and a resource release unit for releasing the hardware resources occupied by completed tasks.
Embodiment 2
As shown in Fig. 4, the present embodiment comprises the following steps:
1. The robot task list module sends a timestamp; after receiving the timestamp, the visual information acquisition module transfers the collected image information to the network transmission module. At the same time, the robot pose and position information acquisition module establishes a message breakpoint for the timestamp. The network transmission module packages the image and timestamp and sends them to the network transmission module of the main control computer.
1.1. The task list module of the robot sends a timestamp to the visual information acquisition module and the robot pose and position information acquisition module.
1.2. The vision acquisition module uses its two cameras, arranged at the robot's binocular position, to shoot and compress the image frames of two channels of image information, and passes them together with the corresponding timestamp to the network transmission module.
1.3. The network transmission module packages the timestamp and image frames and transmits them to the main control computer.
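A minimal sketch of "package the timestamp and image frames and send them over TCP". The length-prefixed framing and the three-part message shape (timestamp, left image, right image) are assumptions for illustration; the patent specifies only that timestamp and frames travel together:

```python
import socket
import struct

def send_frame(sock, timestamp: bytes, jpeg_left: bytes, jpeg_right: bytes):
    """Send timestamp plus the two compressed camera frames, each length-prefixed."""
    for part in (timestamp, jpeg_left, jpeg_right):
        sock.sendall(struct.pack(">I", len(part)) + part)

def _recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def recv_frame(sock):
    """Receive the three length-prefixed parts back in order."""
    parts = []
    for _ in range(3):
        (n,) = struct.unpack(">I", _recv_exact(sock, 4))
        parts.append(_recv_exact(sock, n))
    return parts  # [timestamp, left image, right image]
```

Length prefixes matter here because TCP is a byte stream: without them the receiver cannot tell where one compressed frame ends and the next begins.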
1.4. The robot pose and position information acquisition module collects pose and position information and establishes a breakpoint mark for the received timestamp.
The pose and position information comprises: compass output information describing the robot's location, driver motor encoder output information, and the height and pitch angle of the pan-tilt platform on which the binocular cameras are mounted.
The breakpoint mark is a pointer to the storage address of the pose and position information at the time the timestamp was received.
2. The application layer of the network transmission module unpacks the received information to obtain the timestamp and compressed image information. The decompressed image information is displayed in the visual interface of the human-computer interaction module, and the user clicks the fruits to be picked on the two images in turn using the template frame-selection method; the three-dimensional coordinate resolving module determines the three-dimensional coordinate point sequence of the fruits in the two-view image relative to the cameras, and the application layer of the network transmission module packages the timestamp and three-dimensional coordinate sequence and returns them to the robot.
2.1. The application layer of the main control computer's network transmission module unpacks the received information to obtain the timestamp and compressed image information.
2.2. The decompressed images are displayed in the visual interface of the human-computer interaction module; using the template frame-selection method, the user clicks in turn the fruits to be picked on the two images shot by the binocular cameras, matching the fruit positions in the two images of the same frame; the coordinates (x, y, z) of each fruit to be picked relative to the binocular cameras are calculated by the three-dimensional coordinate resolving module in another thread, matching and calculating all the fruits on the image in turn.
The calculation process is as follows: as shown in Fig. 5, the two-dimensional coordinates (x, y) of the fruit relative to a single camera are obtained from the intrinsic and extrinsic parameters of the camera; the depth z of the fruit relative to the binocular cameras is obtained from the visual ranging principle; and the three-dimensional world coordinates (x, y, z) of the fruit to be picked are obtained from the geometric relationship between the two cameras.
The template frame-selection method is: pre-installed circular, rectangular, and arch templates are selected as the mouse pointer; the template is scaled proportionally with the mouse wheel; the aspect ratio is changed with the right-button-plus-wheel combination; and a long press and drag of the left mouse button changes the curvature of the arch, so that the operator can mark the fruit region, including regions that may be occluded.
The two-dimensional coordinates (x, y) of the fruit relative to a single camera are expressed by formula 1:
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{f}{d_x} & s & u_0 & 0 \\ 0 & \frac{f}{d_y} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};$$
where: (x, y) are the two-dimensional coordinates of the fruit relative to a single camera in the image coordinate system; Z_c is the coordinate conversion factor describing the transformation from the image coordinate system to the world coordinate system; M_1 is the camera intrinsic parameter matrix; M_2 is the camera extrinsic parameter matrix; M = M_1 M_2; f is the camera's effective focal length; d_x and d_y are the horizontal and vertical pixel unit lengths; (u_0, v_0) is the origin of the image coordinate system expressed in the computer coordinate system, in millimeters; and s is the distortion parameter of the nonlinear camera model.
The camera intrinsic parameter matrix M_1 is obtained by calibration using the two-step method based on the radial alignment constraint.
The computer coordinate system is the Cartesian coordinate system of the arrangement of image pixels in u columns and v rows, with its origin defined as the first pixel in the upper-left corner of the image, in units of pixels; the image coordinate system is a Cartesian coordinate system coplanar with the computer coordinate system but with a different origin, located at (u_0, v_0), in units of millimeters.
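Ignoring the skew/distortion parameter s, the pixel-to-millimeter conversion between these two coordinate systems reduces to shifting by the principal point and scaling by the pixel unit lengths. A small sketch; the pixel pitch and principal point values are assumptions for illustration:

```python
def pixel_to_image_mm(u, v, dx, dy, u0, v0):
    """Convert pixel indices (u, v) in the computer coordinate system to
    millimetre coordinates (x, y) in the image coordinate system, whose
    origin sits at the principal point (u0, v0); dx, dy are the horizontal
    and vertical pixel unit lengths in mm. Skew s is taken as zero here."""
    return ((u - u0) * dx, (v - v0) * dy)

# Example: a 5.2 um pixel pitch and principal point (320, 240)
x, y = pixel_to_image_mm(u=400, v=300, dx=0.0052, dy=0.0052, u0=320, v0=240)
# 80 and 60 pixels off-centre -> roughly (0.416 mm, 0.312 mm)
```

This is the s = 0 special case of the conversion matrix between the two coordinate systems given in the surrounding text.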
As shown in Fig. 5, the binocular camera coordinate systems are two three-dimensional coordinate systems, where x_{c1}O_{c1}y_{c1} and x_{c2}O_{c2}y_{c2} denote the two planes; the plane x_{c1}O_{c1}y_{c1} is parallel to the image coordinate system xO_1y; ideally the z_c axis is the optical axis of the camera and meets the computer coordinate system uO_0v at O_1, with the focal length f being the distance between the two coordinate systems; in reality the optical axis is not perpendicular to the x-y plane, which is described by the distortion parameter s in the nonlinear camera model.
Taking the distortion parameter into account, the transformation from the computer coordinate system to the image coordinate system is described by the following formula:
$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{d_x} & s & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix};$$
The camera extrinsic parameter matrix M_2 gives the transformation between the world coordinate system and the camera coordinate system, and is also obtained by calibration using the two-step method based on the radial alignment constraint; for a chosen world coordinate system, the coordinate transformation is given by a determined orthogonal rotation matrix R and a determined translation matrix T, as in the following formula:
$$\begin{bmatrix} u \\ v \\ w \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};$$
As shown in Fig. 6, the depth z of the fruit relative to the binocular cameras is derived from the visual ranging principle: C_1 and C_2 are the optical centers of the binocular cameras, b is the horizontal distance between C_1 and C_2, f is the focal length of the cameras, and P_1 and P_2 are the mapping points of the spatial world point P on the two camera imaging planes; drop perpendiculars from C_1 and C_2 to the camera coordinate system plane, with feet A_1 and A_2; drop a perpendicular from P to the camera coordinate system plane, with foot B, intersecting the imaging plane at E; let A_1C_1 = i_a and A_2C_2 = i_b, and let z be the distance from the spatial world point P to the camera plane; from the similar triangles ΔPEP_2 ∼ ΔPBC_2 and ΔPEP_1 ∼ ΔPBC_1 it follows that:
$$\frac{z-f}{z} = \frac{a}{a+i_b}, \qquad \frac{z-f}{z} = \frac{a+b-i_a+i_b}{a+b+i_b};$$
Eliminating the intermediate variable a gives:
$$z = \frac{bf}{i_a - i_b}.$$
From formula 1:
The projection of the spatial point [x_w y_w z_w]^T on the image of camera 1 is as shown in formula 2:
$$Z_{c1} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} = M' \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^1 & m_{12}^1 & m_{13}^1 & m_{14}^1 \\ m_{21}^1 & m_{22}^1 & m_{23}^1 & m_{24}^1 \\ m_{31}^1 & m_{32}^1 & m_{33}^1 & m_{34}^1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};$$
The projection of the spatial point [x_w y_w z_w]^T on the image of camera 2 is as shown in formula 3:
$$Z_{c2} \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix} = M'' \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^2 & m_{12}^2 & m_{13}^2 & m_{14}^2 \\ m_{21}^2 & m_{22}^2 & m_{23}^2 & m_{24}^2 \\ m_{31}^2 & m_{32}^2 & m_{33}^2 & m_{34}^2 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};$$
Z_{c1} and Z_{c2} are the coordinate conversion factors from the image coordinate systems of the two cameras to the world coordinate system, and M' and M'' are the products of the intrinsic and extrinsic parameter matrices of the two cameras, with m_{ij}^k (k = 1, 2; i = 1, 2, 3; j = 1, 2, 3, 4) denoting the matrix elements. Set the x_w O_w y_w plane of the world coordinate system to coincide with the image coordinate system of camera 2, with the x_w axis along the optical axis, as shown in Fig. 5; then, from the positional relationship of the two cameras, the spatial point [x_w y_w z_w]^T can be calculated.
The positional relationship of the two cameras is likewise given by an orthogonal rotation matrix R_c and a translation matrix T_C, as shown in formula 4:
Simultaneous formula 2,3 and 4 eliminates Zc1And Zc2, can be about xwywzwLinear equation.
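Eliminating Z_c1 and Z_c2 from the two projection equations is the standard linear (DLT) triangulation. A minimal sketch, assuming the two 3×4 projection matrices M′ and M″ are already known from calibration (function and variable names are illustrative, not the patent's):

```python
import numpy as np

def triangulate(M1: np.ndarray, M2: np.ndarray, p1, p2) -> np.ndarray:
    """Recover (xw, yw, zw) from pixel observations in two cameras.

    Each camera with 3x4 projection matrix Mi and pixel (ui, vi) contributes
    two linear equations in the homogeneous world point; the conversion
    factors Zc1, Zc2 are eliminated by cross-multiplying.  The stacked 4x4
    system is solved in least squares via SVD.
    """
    u1, v1 = p1
    u2, v2 = p2
    A = np.vstack([
        u1 * M1[2] - M1[0],
        v1 * M1[2] - M1[1],
        u2 * M2[2] - M2[0],
        v2 * M2[2] - M2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # right singular vector of smallest singular value
    return X[:3] / X[3]        # de-homogenise to (xw, yw, zw)
```

Solving by SVD rather than direct inversion tolerates the measurement noise that makes the four equations slightly inconsistent in practice.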
3. The coordinate sequence, together with the timestamp of the original image, is returned by the network transmission module to the robot task list module. The task list module reads the three-dimensional coordinate sequence corresponding to this timestamp by the first-in-first-out rule and, via the camera transformation matrix, determines the three-dimensional coordinate sequence of the fruits to be picked relative to the current binocular camera coordinate system. The robot motion control module inversely solves this coordinate sequence into the control signals sent to each driver and thereby controls the picking action of the robot. While a picking operation is in progress, the acquisition of the spatial coordinates of further fruits to be picked is repeated, so that fruit recognition, positioning, and picking execution run asynchronously, improving picking efficiency.
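The first-in-first-out task list described here can be sketched as a minimal structure (the class and method names are hypothetical, not from the patent):

```python
import collections

class TaskList:
    """FIFO task list of the picking robot.

    Coordinate sequences arrive asynchronously from the host computer,
    keyed by the timestamp of the original image, while the robot may
    still be executing the picking actions of an earlier batch.
    """

    def __init__(self):
        self._fifo = collections.deque()

    def push(self, timestamp: int, coord_seq) -> None:
        # result returned by the network transmission module
        self._fifo.append((timestamp, coord_seq))

    def pop(self):
        # first-in-first-out rule: the oldest timestamp is served first
        return self._fifo.popleft()
```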
3.1. The coordinate sequence, together with the timestamp of the original image, is handed to the network transmission module and returned to the task list module of the robot.
3.2. The camera transformation matrix is obtained as follows: the timestamp of the pending task identifies a message breakpoint in the attitude and position information acquisition module; from the attitude and position description matrix at this breakpoint and the current attitude and position description matrix, the transformation matrix of the binocular camera coordinate system is derived.

Let S_0 be the attitude and position description matrix of the robot at the message breakpoint and S_pre the attitude and position description matrix of the current robot; then the camera transformation matrix S_c is given by

S_pre = S_c · S_0, i.e. S_c = S_pre · S_0^(-1);

the coordinates (x_p, y_p, z_p) of a fruit to be picked relative to the current binocular camera coordinate system follow from the three-dimensional world coordinates (x_w, y_w, z_w) of each fruit to be picked at the corresponding timestamp:

[x_p, y_p, z_p]^T = S_c · [x_w, y_w, z_w]^T = S_pre · S_0^(-1) · [x_w, y_w, z_w]^T;
Here the attitude and position description matrices are built from the compass output, the drive-wheel encoder output, and the height and pitch angle of the pan-tilt platform carrying the binocular cameras.
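Under the assumption that the attitude/position description matrices are 4×4 homogeneous transforms (the patent does not fix their representation), the relation S_c = S_pre · S_0^(-1) and its application to a fruit coordinate can be sketched as follows (names are illustrative):

```python
import numpy as np

def camera_transform(S_pre: np.ndarray, S0: np.ndarray) -> np.ndarray:
    """Sc = S_pre * S0^-1.

    Maps coordinates expressed relative to the camera pose at the message
    breakpoint (S0, saved when the image was timestamped) into the frame
    of the current camera pose (S_pre).
    """
    return S_pre @ np.linalg.inv(S0)

def to_current_frame(Sc: np.ndarray, xw: float, yw: float, zw: float) -> np.ndarray:
    """Apply Sc to a fruit coordinate recorded at the old timestamp."""
    p = Sc @ np.array([xw, yw, zw, 1.0])
    return p[:3]
```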

Claims (9)

1. A wireless-network-based remote interactive fruit-picking cooperative asynchronous control system, characterized by comprising: a visual information acquisition module, a robot attitude and position information acquisition module, a robot motion control module, a network transmission module, a human-computer interaction module, a three-dimensional coordinate resolving module, and a robot task list module, wherein:
the visual information acquisition module, the robot attitude and position information acquisition module, the robot motion control module, and the robot task list module are arranged on the robot, and the human-computer interaction module and the three-dimensional coordinate resolving module are arranged on a main control computer;
the robot task list module sends a timestamp to the visual information acquisition module and to the robot attitude and position information acquisition module; the visual information acquisition module collects image information of the fruits and transmits it together with the timestamp to the network transmission module; the robot attitude and position information acquisition module establishes a message breakpoint for the timestamp; the network transmission module sends the image information and the timestamp to the human-computer interaction module of the main control computer; the human-computer interaction module determines the fruit region of the image information, and the three-dimensional coordinate resolving module determines, for each fruit in this region, the three-dimensional world coordinate sequence relative to the robot cameras corresponding to the timestamp; each three-dimensional world coordinate sequence is returned with its timestamp through the network transmission module to the robot task list module; the robot task list module reads the three-dimensional world coordinate sequence corresponding to a timestamp by the first-in-first-out rule, and a transformation matrix determines the coordinate sequence of the fruits to be picked relative to the binocular camera coordinate system; the robot motion control module inversely solves this coordinate sequence, sends control signals, and controls the picking action of the robot.
2. The system according to claim 1, characterized in that the visual information acquisition module comprises: two cameras arranged on the robot and an image compression module for compressing the image information.
3. The system according to claim 1, characterized in that the robot task list module comprises: a new-task unit for sending timestamps, a task management unit for reading and transforming coordinate sequences and for sorting unfinished tasks, and a resource release unit for releasing the hardware resources occupied by completed tasks.
4. A control method for the system according to any one of claims 1-3, characterized in that the method creates a timestamp, collects the image information of the fruits and the robot attitude and position corresponding to the timestamp, and establishes coordinate sequences for the image information as the task list of the robot, according to which the robot carries out picking actions, comprising the following steps:
Step 1: the robot task list module sends a timestamp to the visual information acquisition module and to the robot attitude and position information acquisition module; the visual information acquisition module collects the image information of the fruits and transmits it together with the timestamp to the network transmission module; the robot attitude and position information acquisition module establishes a message breakpoint for the timestamp; the network transmission module sends the image information and the timestamp to the main control computer;
Step 2: the human-computer interaction module of the main control computer determines the fruit region of the image information, and the three-dimensional coordinate resolving module determines, for each fruit in this region, the three-dimensional world coordinate sequence relative to the robot cameras corresponding to the timestamp;
Step 3: each three-dimensional world coordinate sequence is returned with its timestamp through the network transmission module to the robot task list module; the robot task list module reads the three-dimensional world coordinate sequence corresponding to the timestamp by the first-in-first-out rule; a transformation matrix determines the coordinate sequence of the fruits to be picked relative to the two-camera coordinate system; the robot motion control module inversely solves this coordinate sequence, sends control signals, and controls the picking action of the robot.
5. The method according to claim 4, characterized in that step 1 specifically comprises the following steps:
1) the task list module of the robot sends a timestamp to the visual information acquisition module and to the robot attitude and position information acquisition module;
2) the visual acquisition module shoots two-way image frames with its two cameras arranged at the binocular position of the robot, compresses them, and transmits them together with the corresponding timestamp to the network transmission module;
3) the network transmission module packages the timestamp and the image frames and transmits them to the main control computer;
4) the robot attitude and position information acquisition module collects attitude and position information and establishes a breakpoint flag for the received timestamp.
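Packaging the timestamp and the image frames for transmission (sub-step 3 above) could use a simple length-prefixed layout; the byte layout below is an assumption for illustration and is not specified by the patent:

```python
import struct

_HEADER = "!QII"  # network byte order: timestamp (uint64), left length, right length

def pack_frame(timestamp_us: int, jpeg_left: bytes, jpeg_right: bytes) -> bytes:
    """Bundle a timestamp with the two compressed camera frames."""
    header = struct.pack(_HEADER, timestamp_us, len(jpeg_left), len(jpeg_right))
    return header + jpeg_left + jpeg_right

def unpack_frame(payload: bytes):
    """Inverse of pack_frame, as done by the host's application layer."""
    ts, n1, n2 = struct.unpack_from(_HEADER, payload)
    off = struct.calcsize(_HEADER)
    return ts, payload[off:off + n1], payload[off + n1:off + n1 + n2]
```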
6. The method according to claim 4 or 5, characterized in that step 2 specifically comprises the following steps:
1) the application layer of the network transmission module unpacks the received information to obtain the timestamp and the compressed image information;
2) the decompressed image information is displayed in the visual interface of the human-computer interaction module; the fruits to be picked are marked on the two images shot by the two cameras; the fruit positions are matched between the two images of the same frame; the three-dimensional coordinate resolving module calculates the three-dimensional world coordinates (x_w, y_w, z_w) of each fruit to be picked; and all the fruits in the image are matched and calculated in turn.
7. The method according to claim 6, characterized in that the calculation of the three-dimensional world coordinates of each fruit to be picked specifically comprises the following steps:
2.1) the two-dimensional coordinates (x, y) of a fruit relative to a single camera are given by

Z_c · [x, y, 1]^T = [f/dx  s  u_0  0 ; 0  f/dy  v_0  0 ; 0  0  1  0] · [R  T ; 0^T  1] · [x_w, y_w, z_w, 1]^T = M_1 · M_2 · [x_w, y_w, z_w, 1]^T = M · [x_w, y_w, z_w, 1]^T;

where: x, y are the two-dimensional coordinates of the fruit relative to a single camera in the image coordinate system; Z_c is the coordinate conversion factor describing the transformation from the image coordinate system to the world coordinate system; M_1 is the intrinsic parameter matrix of the camera; M_2 is the extrinsic parameter matrix; M = M_1·M_2; f is the effective focal length of the camera; dx, dy are the horizontal and vertical pixel unit lengths; u_0, v_0 are the coordinates of the origin of the image coordinate system in the computer coordinate system, in millimeters; s is the distortion parameter of the nonlinear camera model;
The intrinsic parameter matrix M_1 of the camera describes the transformation from the computer coordinate system to the image coordinate system:

[x, y, 1]^T = [1/dx  s  u_0 ; 0  1/dy  v_0 ; 0  0  1] · [u, v, 1]^T;
The extrinsic parameter matrix M_2 of the camera describes the transformation between the world coordinate system and the computer coordinate system, where R is a determined orthogonal rotation matrix and T a determined translation matrix:

[u, v, w, 1]^T = [R  T ; 0^T  1] · [x_w, y_w, z_w, 1]^T;
2.2) the depth z of the fruit relative to the binocular camera is obtained: let C1, C2 be the optical centers of the two cameras, b the horizontal distance between C1 and C2, and f the focal length of the cameras; P1 and P2 are the projections of the world point P on the two camera imaging planes; perpendiculars from C1 and C2 to the camera coordinate plane have feet A1 and A2; the perpendicular from P to the camera coordinate plane has foot B and meets the imaging plane at E; let A1C1 = i_a and A2C2 = i_b, and let z be the distance from the world point P to the camera plane, i.e. the depth, so that z = b·f/(i_a − i_b);
2.3) the projections of the spatial point [x_w, y_w, z_w]^T on the images of the two cameras are expressed as

Z_c1 · [u_1, v_1, 1]^T = M′ · [x_w, y_w, z_w, 1]^T;

Z_c2 · [u_2, v_2, 1]^T = M″ · [x_w, y_w, z_w, 1]^T;

where Z_c1 and Z_c2 are the coordinate conversion factors from the image coordinate systems of the two cameras to the world coordinate system, and M′, M″ are the products of the intrinsic and extrinsic parameters of the respective cameras, 3×4 matrices with elements m_ij^k, k = 1, 2; i = 1, 2, 3; j = 1, 2, 3, 4;
2.4) set the x_w O_w y_w plane of the world coordinate system to coincide with the image coordinate system of one of the cameras, with the x_w axis as the optical axis; then the spatial point [x_w, y_w, z_w]^T is calculated from the relative pose of the two cameras, which is described by an orthogonal rotation matrix R_c and a translation matrix T_C.
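The pinhole model of steps 2.1)-2.4) — intrinsic matrix M_1 composed with extrinsic matrix M_2 — can be sketched numerically as follows (a hedged illustration; parameter names mirror the claim, and the zero-skew, identity-rotation test values are assumptions):

```python
import numpy as np

def projection_matrix(f, dx, dy, u0, v0, s, R, T):
    """M = M1 * M2: intrinsic parameters (effective focal length f, pixel
    unit lengths dx, dy, principal point (u0, v0), skew s) composed with
    the extrinsic rotation R (3x3) and translation T (3-vector)."""
    M1 = np.array([[f / dx, s,      u0,  0.0],
                   [0.0,    f / dy, v0,  0.0],
                   [0.0,    0.0,    1.0, 0.0]])
    M2 = np.vstack([np.hstack([R, np.reshape(T, (3, 1))]),
                    [0.0, 0.0, 0.0, 1.0]])
    return M1 @ M2

def project(M, xw, yw, zw):
    """Image coordinates of a world point; the homogeneous scale factor
    plays the role of the conversion factor Zc."""
    uvw = M @ np.array([xw, yw, zw, 1.0])
    return uvw[:2] / uvw[2]
```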
8. The method according to claim 4 or 5, characterized in that the transformation matrix of step 3 refers to: the attitude and position information acquisition module sets a message breakpoint for the timestamp of the pending task, and the coordinate sequence of the fruits to be picked relative to the two-camera coordinate system is obtained from the attitude and position description matrix at this message breakpoint and the current attitude and position description matrix.
9. The method according to claim 8, characterized in that step 3 specifically comprises the steps:
1) the three-dimensional world coordinates of each fruit to be picked are returned with the timestamp through the network transmission module to the robot task list module;
2) transformation matrix: let the attitude and position description matrix of the robot at the message breakpoint be S_0 and the attitude and position description matrix of the current robot be S_p; then the camera transformation matrix S_c satisfies S_p = S_c · S_0, i.e. S_c = S_p · S_0^(-1); with (x_w, y_w, z_w) denoting the three-dimensional world coordinates of each fruit to be picked, the coordinates (x_p, y_p, z_p) of the fruit relative to the current two-camera coordinate system are:

[x_p, y_p, z_p]^T = S_c · [x_w, y_w, z_w]^T = S_p · S_0^(-1) · [x_w, y_w, z_w]^T;

The attitude and position description matrix S_0 at the message breakpoint and the attitude and position description matrix S_p of the current robot comprise: the compass output, the drive-wheel encoder output, and the height and pitch angle of the pan-tilt platform carrying the binocular cameras.
CN201310746643.6A 2013-12-30 2013-12-30 Remote interaction picking fruit based on wireless network works in coordination with asynchronous control system and method Active CN103716399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310746643.6A CN103716399B (en) 2013-12-30 2013-12-30 Remote interaction picking fruit based on wireless network works in coordination with asynchronous control system and method

Publications (2)

Publication Number Publication Date
CN103716399A CN103716399A (en) 2014-04-09
CN103716399B true CN103716399B (en) 2016-08-17

Family

ID=50408969

Country Status (1)

Country Link
CN (1) CN103716399B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008037035A1 (en) * 2006-09-28 2008-04-03 Katholieke Universiteit Leuven Autonomous fruit picking machine
CN101273688A (en) * 2008-05-05 2008-10-01 江苏大学 Apparatus and method for flexible pick of orange picking robot
CN101807247A (en) * 2010-03-22 2010-08-18 中国农业大学 Fine-adjustment positioning method of fruit and vegetable picking point
CN102914967A (en) * 2012-09-21 2013-02-06 浙江工业大学 Autonomous navigation and man-machine coordination picking operating system of picking robot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Binocular Positioning Technology in Robotic Tomato Picking; Wang Shenhui; China Master's Theses Full-text Database (Electronic Journal); 2007-02-15; full text *

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant