CN202010257U - Ward round robot system based on Bayesian theory - Google Patents


Info

Publication number
CN202010257U
Authority
CN
China
Prior art keywords
robot
target
ward
image
rfid
Prior art date
Legal status
Expired - Fee Related
Application number
CN2011200488197U
Other languages
Chinese (zh)
Inventor
田国会
周风余
尹建芹
杨阳
高鑫
姜海涛
韩旭
闫云章
张庆宾
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN2011200488197U
Application granted
Publication of CN202010257U

Landscapes

  • Image Analysis (AREA)

Abstract

The utility model relates to a ward-round robot system based on Bayesian theory. The system comprises a mobile robot carrying an RFID (radio-frequency identification) antenna; a pan-tilt-zoom (PTZ) camera is mounted on top of the mobile robot, and an infrared positioning system and a laser sensor are mounted at its front. The RFID antenna is matched with the RFID tags mounted on the sickbeds in each ward. The system is the first to propose a mechanism that hunts for a target by combining Bayesian theory with multi-mode target representation and prior knowledge, actively searching for targets in complex ward environments through active vision, feature extraction, and feature verification, thereby achieving active robot ward rounds. The robot control system works on a Bayesian basis, represents targets through multi-mode representation, and integrates prior knowledge into the control. The target's position is acquired while the robot's position and posture are controlled, and the target is localized by means of active vision and feature fusion.

Description

Ward-round service robot system based on Bayesian theory
Technical field
The utility model relates to a ward-round service robot system based on Bayesian theory, belonging to the field of nursing robots.
Background technology
Research on nursing robots has made great progress; such robots are mainly used to assist nurses with low-intelligence activities such as transporting medicine. The ward round is an important part of nursing work, yet at present it still suffers from many problems. Developing a ward-round nursing robot to ease the contradiction between nursing quality and nurses' excessive workload and stress is therefore of great practical significance, and the problem has rarely been studied domestically. The tasks a ward-round robot must complete mainly include patient-identity verification, patient-status monitoring, wound-site inspection, photographing of monitoring and therapeutic equipment, and image transmission. Using its own camera and coordinated control of robot and camera, the robot must recognize and localize the targets of a nurse's round in the ward environment, such as the patient's wound, the needle-insertion site, the infusion bottle, the monitor, and the drainage bag, then photograph the recognized targets and transmit the images to the nurse-station server. The key problems are target recognition and localization together with robot and camera control. Existing target-recognition methods, however, cannot accurately recognize multiple static targets and lack an effective scheme for associating visual-recognition information with motion control, so they cannot meet nursing needs.
Chinese patent application CN200910013650.9 describes a human-body localization and tracking algorithm; its target is single and its features are easy to extract, but when target types are diverse, suitable features are hard to extract.
Chinese patent application CN200810063440.6 mainly extracts target features and then applies a particle filter; it addresses the tracking of a target rather than its discovery.
Chinese patent application CN201010248283.3 describes a moving-target detection method based on a Bayesian framework and LBP; it uses the motion information of video objects for detection and is unsuitable for static targets.
At present there is still no effective ward-round system, nor an effective method for searching for the required targets.
Content of the utility model
To address the target-recognition problems of existing ward-round service robots, the utility model provides a ward-round service robot system based on Bayesian theory. Through active robot control and a vision system, it completes the active search for targets of interest in the ward environment, thereby realizing robot ward rounds; targets are represented in multiple modes, and the multi-mode representation is used for target localization and search.
To achieve the above purpose, the utility model adopts the following technical scheme:
A ward-round service robot system based on Bayesian theory comprises a mobile robot. An RFID antenna is mounted on the mobile robot, a pan-tilt-zoom (PTZ) camera is mounted on its top, and an infrared positioning system and a laser sensor are mounted at its front. The RFID antenna is matched with the RFID tag on each sickbed in the ward.
The infrared positioning system consists of an infrared transmitter/receiver and passive infrared tags. The passive infrared tags are attached to the ward ceiling, and the transmitter/receiver is installed on top of the robot platform. When the transmitter/receiver detects a passive tag, it passes the robot's pose relative to the tag, together with the tag's ID, to the service robot over a serial port. Because each tag's location in the world coordinate frame is known, the robot obtains its own pose in the world frame by a coordinate transform, achieving self-localization.
A target-search method for the ward-round service robot system proceeds as follows:
1) An RFID tag is placed at the head of each sickbed, recording the patient's department, disease name, the items to be inspected, and the prior knowledge corresponding to each item. The robot reads the bedside tag with its onboard RFID reader to verify the patient's identity. If the information read differs from the information pre-stored in the robot, an alarm is raised to the server and the nursing staff judges whether this is the right patient: if so, care of this patient continues; if not, the nursing staff intervenes on the current patient's information and the robot moves on to the next patient.
2) After entering the ward, the robot obtains the map of the current ward through interaction with the server and performs coarse positioning with its onboard RFID reader: whether a tag can be read decides whether the robot is near the target. If no tag is read, the robot walks under the conventional path-tracking strategy; if a tag is read, the target is present in the surrounding region, and the motion-control model based on the Bayesian model is started to control the robot and complete the target search.
3) For infusion-bottle or infusion-bag images, the robot's motion is controlled by the marker segmentation result and a coordinate transform gives the target's rough position; active vision then screens candidate regions, feature matching segments the infusion bottle or bag from the screened region, and with the segmented target as reference the camera is adjusted to obtain a recognizable picture of the device.
For fixed-position equipment, template matching controls the robot's motion and gives the approximate target position; at that position active vision screens candidate regions, histogram matching segments the device from the screened region, and with the segmented device as reference the camera is adjusted to obtain a recognizable picture.
For needle, wound, and facial images, skin-color segmentation controls the robot's motion, and the camera is adjusted using the segmentation result combined with human-body-structure information to obtain a well-posed picture.
The above pictures are uploaded to the server for the nursing staff to review.
4) After the preset nursing work is finished, the robot turns to the next patient's care.
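As a rough illustration, the decision logic of steps 1) and 2) can be sketched as follows; all function names are hypothetical, and the equality check on patient records is a simplification of the verification the text describes:

```python
# Hypothetical sketch of the ward-round decision logic of steps 1) and 2).
# Names (choose_motion_strategy, verify_patient) are illustrative, not from the patent.

def choose_motion_strategy(rfid_tag_read: bool) -> str:
    """Step 2): if no RFID tag is read, the target is still far away and the
    robot follows the conventional path-tracking strategy; once a tag is read,
    the target lies in the surrounding region and the Bayesian image-based
    search is started."""
    return "bayesian_image_search" if rfid_tag_read else "path_tracking"

def verify_patient(tag_info: dict, stored_info: dict) -> bool:
    """Step 1): compare the RFID record (department, disease name, round items)
    against the information pre-stored for this bed; a mismatch raises an alarm
    for the nursing staff."""
    return tag_info == stored_info
```
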
In step 2), the motion-control model based on the Bayesian model searches for the target image as follows:
The robot control quantity is written $R=(x_r, y_r, z_r)$, where $x_r$ and $y_r$ are the offsets in the two directions and $z_r$ is the offset of the heading angle; the camera control quantity is $C=(C_p, C_t, C_z)$, the Pan, Tilt, and Zoom control quantities respectively; and the total control quantity is $Z=(R, C)$. After the RFID tag is read, the PTZ-camera control quantity is set according to the round task; under this condition the robot's motion is controlled to acquire the target image, denoted $O$, and the target-search mechanism then controls the PTZ camera's motion to obtain the final target picture.
The control quantity is thus reduced to $Z=R$. The robot's motion is influenced by both the planned path and the robot's current pose. From a statistical viewpoint, solving for the control law means solving $P(R \mid O, M, T)$, the probability of the control quantity under the path-planning and pose constraints given that the target appears, where $O$ is the target image, $M$ the robot's current pose, and $T$ the planned path:
$$P(R \mid O, M, T) = \frac{P(O \mid R, M, T)\, P(R \mid M, T)}{P(O \mid M, T)} \qquad (2)$$
where $P(O \mid M, T)$ and $P(O \mid R, M, T)$ are target-appearance probabilities and $P(R \mid M, T)$ is the control-quantity likelihood.
$P(R \mid M, T)$ is the likelihood of the robot control quantity given the path $T$ and the current pose $M$; the planned path is discretized, and the robot control quantity is solved along it:
Let $M=(x, y, \theta_M)$ and $T=(T_1, T_2, \ldots, T_n)$ with $T_i=(x_i, y_i)$; write $M_p=(x, y)$ and $M_o=\theta_M$, and define
$$T_{opt} = \arg\min_{T_i} D(M_p, T_i) \qquad (3)$$
where $D$ is the Euclidean distance and $T_{opt}$ gives the position. To obtain angle information, the expected pose is defined as
$$T_o = \|T_{opt}\|\, e^{j\theta}, \quad \theta \text{ the deviation angle of } T_{opt+1} \text{ relative to } T_{opt} \qquad (4)$$
The expected control law is
$$R_O = (x_o, y_o, z_o) = (T_{ox}-x,\; T_{oy}-y,\; \theta-\theta_M) \qquad (5)$$
and the final control likelihood is expressed as
$$P(R \mid M, T) = \mathrm{sim}(R, R_O) = \frac{1}{\sqrt{(x_r-x_o)^2 + (y_r-y_o)^2 + w_\theta (z_r-z_o)^2}} \qquad (6)$$
where $w_\theta$ is an adjustment weight. $P(O \mid M, T)$ is the probability that the target appears under the current pose and track, and $P(O \mid R, M, T)$ is the probability that it appears under the corresponding control quantity. If the RFID tag is not read, the target is still far away, so $P(O \mid M, T)=0$ and formula (2) is meaningless; this corresponds to the practical situation in which the target cannot appear in the region, and the robot's motion is then guided entirely by path planning, i.e., the conventional path-tracking strategy. If the tag is read, the target exists in the surrounding region, and the image-based target-search strategy is started. By examining the current image captured by the robot's vision, $P(O \mid M, T)$ and $P(O \mid R, M, T)$ are determined from the current image and the knowledge of the target: the camera image corresponding to $P(O \mid M, T)$ is the current image, denoted $Img_C$, while the image for $P(O \mid R, M, T)$ is obtained by applying an affine transformation according to $R$ to the current image, denoted $Img_{Aff}$:
$$P(O \mid M, T) = P(O \mid Img_C) \qquad (7)$$
$$P(O \mid R, M, T) = P(O \mid Img_{Aff}) \qquad (8)$$
The probability that the target appears in an image is denoted $P(O \mid I)$, where $I$ is the image under investigation.
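Equations (3)-(6) can be sketched in a few lines; the waypoint selection, expected control law, and likelihood follow the formulas above, while the tie-breaking at the last waypoint and the zero-distance guard are assumptions not specified in the text:

```python
import math

def expected_control(M, T):
    """Eqs (3)-(5) sketch. M = (x, y, theta) is the robot's current pose,
    T a list of discretized waypoints (x_i, y_i)."""
    x, y, theta = M
    # Eq (3): T_opt is the waypoint nearest to M_p under Euclidean distance D.
    i_opt = min(range(len(T)), key=lambda i: math.hypot(T[i][0] - x, T[i][1] - y))
    # Eq (4): the deviation angle of T_{opt+1} relative to T_opt gives the heading
    # (at the final waypoint we reuse it, an assumption the text leaves open).
    nxt = T[min(i_opt + 1, len(T) - 1)]
    theta_dev = math.atan2(nxt[1] - T[i_opt][1], nxt[0] - T[i_opt][0])
    # Eq (5): expected control law R_O.
    return (T[i_opt][0] - x, T[i_opt][1] - y, theta_dev - theta)

def control_likelihood(R, R_O, w_theta=1.0):
    """Eq (6): likelihood decays with weighted distance between R and R_O."""
    xr, yr, zr = R
    xo, yo, zo = R_O
    d = math.sqrt((xr - xo) ** 2 + (yr - yo) ** 2 + w_theta * (zr - zo) ** 2)
    return 1.0 / max(d, 1e-9)  # guard against R == R_O
```
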
In step 3), the template-matching-based target representation is as follows: because the positions of such targets are relatively fixed, the probability of the target's appearance is represented by the appearance of a template image saved in advance, so
$$P(O \mid I) = \mathrm{sim}(I_{temp}, I) \qquad (9)$$
where $I_{temp}$ is the saved template image, $I$ is the current image, and $\mathrm{sim}(I_{temp}, I)$ is the similarity of the two images.
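As the embodiment later notes, the correlation coefficient is one way to realize $\mathrm{sim}(I_{temp}, I)$; a minimal sketch on flat lists of grey values (image handling and search windows omitted):

```python
import math

def correlation_similarity(I_temp, I):
    """Eq (9) sketch: sim(I_temp, I) measured by the normalized correlation
    coefficient mentioned in the description. Both images are flat lists of
    grey values of equal length; the result lies in [-1, 1]."""
    n = len(I_temp)
    mt = sum(I_temp) / n                 # template mean
    mi = sum(I) / n                      # current-image mean
    num = sum((a - mt) * (b - mi) for a, b in zip(I_temp, I))
    den = math.sqrt(sum((a - mt) ** 2 for a in I_temp)
                    * sum((b - mi) ** 2 for b in I))
    return num / den if den else 0.0     # constant images carry no correlation
```
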
In step 3), infusion bottles and bags are represented by locating a hand-made marker with distinctive color information, which determines the infusion bottle's position; a feature therefore replaces the image in the similarity measurement:
$$P(O \mid I) = \mathrm{sim}(I_{temp}, I) = \mathrm{sim}(V_{temp}, V) \qquad (10)$$
where $V_{temp}$ is the saved template feature and $V$ is the current feature. Because the robot must not only find the target during the search but also determine its size and pose, color features are used for target segmentation, and the bounding rectangle $W$ is computed after segmentation:
$$\mathrm{sim}(V_{temp}, V) = \sqrt{(W_{temp}/W)^2 + (H_{temp}/H)^2} \qquad (11)$$
To segment the hand-made marker, the Cb and Cr components of the YCbCr space are used with Y filtered out, and a color-lookup-table method builds a two-dimensional color-probability model of the marker. During the round the robot continuously acquires images, traverses every pixel, and judges it in real time to finish the marker segmentation; a contour search is then run on the segmented marker, which is labeled with its bounding rectangle.
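A minimal sketch of the Cb-Cr lookup-table segmentation; the bin count, the probability threshold, and the dictionary-based table are illustrative assumptions, and images are represented as rows of (Cb, Cr) pairs:

```python
def build_cbcr_lookup(marker_pixels, bins=32):
    """Two-dimensional Cb-Cr color-probability model of the marker (Y is
    discarded, as in the text), built as a lookup table from sample marker
    pixels. Bin count is an assumed value."""
    table = {}
    for cb, cr in marker_pixels:
        key = (cb * bins // 256, cr * bins // 256)
        table[key] = table.get(key, 0) + 1
    total = len(marker_pixels)
    return {k: v / total for k, v in table.items()}, bins

def segment_marker(image, table, bins, thresh=0.01):
    """Traverse every pixel, keep those whose Cb-Cr bin probability exceeds
    thresh, and return the bounding rectangle (min_x, min_y, max_x, max_y)
    of the segmented marker region, or None if nothing matches."""
    hits = [(x, y) for y, row in enumerate(image)
            for x, (cb, cr) in enumerate(row)
            if table.get((cb * bins // 256, cr * bins // 256), 0.0) >= thresh]
    if not hits:
        return None
    xs, ys = [p[0] for p in hits], [p[1] for p in hits]
    return (min(xs), min(ys), max(xs), max(ys))
```
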
In step 3), for the needle and the facial expression, skin-color segmentation is adopted: the common existing color spaces are each used for skin-color segmentation and their results are fused into the final segmentation, using RGB, HSV, and YCrCb respectively. The skin-color model in the RGB subspace is given by formula (12):
[Formula (12), the RGB-subspace skin-color model, appears only as an image in the original.]
The YCrCb-space model is defined as formula (13):
$$69 < Y < 256, \quad 133 < Cr < 166, \quad 79 < Cb < 129 \qquad (13)$$
where $Y$, $Cr$, and $Cb$ are the pixel values of the three color channels.
The HSV-space model is defined as formula (14):
$$H < 19, \quad S \ge 48 \qquad (14)$$
where $H$ and $S$ are the hue and saturation values respectively.
In step 3), camera control uses zoned logic, based on the target's segmentation result; different segmentation methods are applied to different targets.
In step 3), to segment an infusion bottle or bag, the marker is first segmented and located, and the infusion bottle is then located.
(1) Marker location
Let the distances between the marker's centroid and the infusion bottle's centroid be WD and HD, and let the lengths and widths of the bounding rectangles of marker and bottle be HL, WL and HO, WO. Since marker and bottle are close enough together, the approximate proportion HL/HD ≈ HL′/HD′ holds, where HL′ and HD′ are the marker height in the image to be searched and the distance between the marker center and the bottle center in that image. HL, HD, and HL′ are known, so HD′ can be solved; the marker's center is also solvable, giving the approximate position P and scale Scale of the infusion bottle.
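The proportional relation can be turned into a coarse-location helper; the assumption that the bottle hangs directly below the marker is illustrative, since the text does not fix the direction:

```python
def locate_bottle(marker_center, HL_prime, HL, HD):
    """Coarse infusion-bottle location from the marker using the proportion
    HL/HD ≈ HL'/HD' in the text, i.e. HD' = HD * (HL'/HL). The bottle is
    assumed to hang directly below the marker (direction is an assumption);
    returns the approximate position P and the scale factor Scale = HL'/HL."""
    scale = HL_prime / HL        # image-to-template size ratio
    HD_prime = HD * scale        # marker-to-bottle distance in the image
    P = (marker_center[0], marker_center[1] + HD_prime)
    return P, scale
```
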
After the marker's color segmentation is finished, the segmented regions are evaluated and small-area noise points are filtered out, giving the candidate target regions. The seven Hu moment invariants serve as the target's shape features: the Hu invariants of each segmented region are computed, and the nearest-neighbor method decides which segmented region is the marker region.
With the marker's length and width as measurements, and the model assuming that length and width vary evenly, Kalman filtering is used to remove noise.
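A minimal scalar Kalman filter of the kind described, applied independently to the measured width and height; the noise parameters are assumed values:

```python
class ScalarKalman:
    """Minimal scalar Kalman filter to smooth the measured marker width or
    height (modeled, as in the text, as slowly and evenly varying).
    Process noise q and measurement noise r are illustrative assumptions."""
    def __init__(self, x0, p0=1.0, q=0.01, r=0.25):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # predict: state unchanged, uncertainty grows by process noise q
        self.p += self.q
        # correct: blend prediction and measurement z via the Kalman gain k
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1 - k)
        return self.x
```

A noisy outlier in the width measurement is only partially absorbed, and later clean measurements pull the estimate back.
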
(2) Infusion-bottle location and segmentation based on active vision
Because the marker is close to the infusion bottle, the bottle's position $P$ and scale $Scale$ obtained from marker location serve as a starting point: with $P$ as center and a $2Scale \times 2Scale$ neighborhood, fine location of the bottle is carried out. First the image's feature maps are computed, using a color feature map and an orientation feature map, denoted $M:[n]^2 \to \mathbb{R}$; the saliency (activation) function $A:[n]^2 \to \mathbb{R}$ is then computed from the feature map. With $M(i,j)$ the feature value at point $(i,j)$, the dissimilarity function is defined as
$$d\big((i,j) \,\|\, (p,q)\big) \triangleq \left|\log \frac{M(i,j)}{M(p,q)}\right| \qquad (15)$$
With each point in the image as a node, a fully connected graph $G_A$ is constructed; the weight of the edge between nodes $(i,j)$ and $(p,q)$ is
$$w\big((i,j),(p,q)\big) \triangleq d\big((i,j) \,\|\, (p,q)\big)\cdot F = d\big((i,j) \,\|\, (p,q)\big)\cdot \exp\!\left(-\frac{(i-p)^2+(j-q)^2}{2\sigma^2}\right) \qquad (16)$$
where $F$ adjusts the weight, controlling the influence of distance on similarity.
On the constructed graph $G_A$, nodes represent states and weights represent transition probabilities; a Markov chain is introduced, and the evolution of the chain yields the saliency map. In a uniquely distinctive region the accumulated probability is large and the activation is large, giving the corresponding saliency map. The saliency map of the segmented infusion-bottle neighborhood is computed and the activation maximum further locates the bottle; after location, the obtained region is confirmed by SIFT feature matching, giving the final position, size, and pose of the infusion bottle.
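The Markov-chain saliency computation of equations (15)-(16) can be sketched as follows on a small feature map; the row normalization into transition probabilities, the lazy power iteration, and the iteration count are implementation assumptions (the construction follows graph-based visual saliency):

```python
import math

def saliency_map(M, sigma=1.0, iters=200):
    """Eqs (15)-(16) sketch: build the fully connected graph G_A over a 2-D
    feature map M, with log-ratio dissimilarity d and distance falloff F;
    row-normalize weights into Markov-chain transition probabilities and
    approximate the stationary distribution (the activation) by a lazy power
    iteration. Returns {(i, j): saliency}."""
    nodes = [(i, j) for i in range(len(M)) for j in range(len(M[0]))]
    n = len(nodes)
    W = [[0.0] * n for _ in range(n)]
    for a, (i, j) in enumerate(nodes):
        for b, (p, q) in enumerate(nodes):
            if a != b:
                d = abs(math.log(M[i][j] / M[p][q]))          # eq (15)
                F = math.exp(-((i - p) ** 2 + (j - q) ** 2) / (2 * sigma ** 2))
                W[a][b] = d * F                               # eq (16)
    for row in W:                       # rows become transition probabilities
        s = sum(row) or 1.0
        for b in range(n):
            row[b] /= s
    pi = [1.0 / n] * n                  # start from the uniform distribution
    for _ in range(iters):              # lazy walk avoids periodic oscillation
        pi = [0.5 * pi[b] + 0.5 * sum(pi[a] * W[a][b] for a in range(n))
              for b in range(n)]
    return {nodes[b]: pi[b] for b in range(n)}
```

On a map with one distinctive cell, probability mass accumulates at that cell, as the text describes for uniquely distinctive regions.
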
In step 3), for position-fixed targets, the template-matching result and an affine transformation give the approximate target position, and active vision screens candidates within this rough region. For objects with distinctive color, such as the drainage bag, color-histogram matching yields the target's location; for objects without distinctive color, gray-level-histogram matching yields the final device position and size.
In step 3), wound, expression, and needle information is obtained from the skin-color-segmentation result combined with human-structure information.
The zoned-logic method of step 3) is simple to implement and removes the need for a complicated camera-calibration step. The image plane is divided into a camera-static zone, a motion-holding zone, and a motion zone; different pan-tilt control strategies are adopted according to the zone and the size of the located target in the image, thereby controlling the camera's motion.
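The zoned logic can be sketched as a simple decision on the target's normalized offset in the image plane; the zone fractions and the unit pan/tilt steps are assumed values, and the holding zone is simplified to "issue no new command":

```python
def ptz_command(target_cx, target_cy, img_w=640, img_h=480,
                dead=0.15, hold=0.35):
    """Zoned-logic camera-control sketch: a central static zone (no movement),
    a holding zone (no new command issued, a simplification), and an outer
    motion zone that triggers a unit pan/tilt step toward the target.
    Zone fractions dead/hold are illustrative assumptions."""
    dx = (target_cx - img_w / 2) / img_w   # normalized horizontal offset
    dy = (target_cy - img_h / 2) / img_h   # normalized vertical offset

    def zone(off):
        a = abs(off)
        if a < dead:
            return 0                       # static zone: camera stays put
        if a < hold:
            return 0                       # holding zone: no new command
        return 1 if off > 0 else -1       # motion zone: step toward target

    return zone(dx), zone(dy)              # (pan, tilt), each in {-1, 0, 1}
```
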
The ward-round robot system of the utility model comprises a robot positioning system, a robot control system, and a robot vision system. Its work process is as follows: out of consideration for patient privacy, no distributed cameras are arranged in the ward; the robot's vision replaces the nurse in completing the round. After the robot enters the ward, it obtains the ward map through interaction with the server, plans a path from its current position to the sickbed, reaches the bed head by path tracking, verifies the patient's identity by reading the RFID tag, and at the same time obtains the round tasks for this patient. Combining the tasks read from the RFID tag with those downloaded from the server, the robot plans its in-room path on the current ward map and carries out the round. During the round, the robot localizes itself with StarGazer, controls its motion according to its real-time pose, the captured scene image, and the pre-planned path, and after reaching the target position it searches the scene image to finely locate the target, controls the PTZ camera to capture images, and transmits them to the server's main interface, where they are automatically saved in the database for the nurse to review.
The beneficial effects of the utility model are: through robot vision, the massive video data in the ward is actively filtered and only the significant image data is sent to the nurse station, so a nurse can finish the ward round by browsing a small amount of image information at the station. This improves on the current ward round's excessive reliance on accompanying relatives and the intercom, and greatly reduces the nurses' workload.
Description of drawings
Fig. 1 is a flow chart of the utility model.
Fig. 2 is a sketch of solving the control quantity under a given path and pose.
Fig. 3 is a sketch of the robot mechanism.
In the figures: 1. mobile robot; 2. RFID antenna; 3. PTZ camera; 4. infrared positioning system; 5. laser sensor.
Specific embodiment
The utility model is further described below with reference to the drawings and an embodiment.
In Fig. 1, after the mobile robot enters the ward it first verifies the patient's identity: it moves along the pre-planned path to the bed head, on which an RFID tag is posted recording the patient's department, disease name, the items to be inspected, and the prior knowledge corresponding to each item; by reading the bedside tag with its onboard RFID reader, the robot verifies the patient's identity quickly and accurately. If the RFID authentication matches the record in the database, the next round task proceeds; otherwise the robot raises an alarm, waits for the nurse to confirm, and meanwhile carries out the next patient's round. The robot then performs coarse positioning of the target with its onboard RFID and fine positioning with its own vision; at the same time it localizes itself with StarGazer and avoids obstacles with its onboard laser, controlling the robot to search for and locate the target. To accomplish target location, a robot motion-control model based on the Bayesian model is proposed, and a zoned control model is used for camera control.
The robot control quantity is written $R=(x_r, y_r, z_r)$, where $x_r$ and $y_r$ are the offsets in the two directions and $z_r$ is the offset of the heading angle; the camera control quantity is $C=(C_p, C_t, C_z)$, the Pan, Tilt, and Zoom control quantities respectively; and the total control quantity is $Z=(R, C)$. After the RFID tag is read, the PTZ-camera control quantity can be set according to the round task, and under this condition the robot's motion is controlled to acquire the target image. The target image is denoted $O$; the target-search mechanism then controls the PTZ camera's motion to obtain the final target picture, so the control quantity is reduced to $Z=R$. The robot's motion is influenced by both the planned path and the robot's current pose; from a statistical viewpoint, solving for the control law means solving $P(R \mid O, M, T)$, the probability of the control quantity under the path-planning and pose constraints given that the target appears, where $O$ is the target image, $M$ the robot's current pose, and $T$ the planned path. To obtain $P(R \mid O, M, T)$, take $A$, $B$, $C$ as three different events and solve $P(A \mid B, C)$ from the standpoint of probability:
$$P(A \mid B, C) = \frac{P(A,B,C)}{P(B,C)} = \frac{P(B \mid A,C)\,P(A,C)}{P(B \mid C)\,P(C)} = \frac{P(B \mid A,C)\,P(A \mid C)\,P(C)}{P(B \mid C)\,P(C)} = \frac{P(B \mid A,C)\,P(A \mid C)}{P(B \mid C)} \qquad (1)$$
Hence
$$P(R \mid O, M, T) = \frac{P(O \mid R, M, T)\, P(R \mid M, T)}{P(O \mid M, T)} \qquad (2)$$
where $P(O \mid M, T)$ and $P(O \mid R, M, T)$ are target-appearance probabilities and $P(R \mid M, T)$ is the control-quantity likelihood.
$P(R \mid M, T)$ is the likelihood of the robot control quantity given the path $T$ and the current pose $M$; the control quantity is solved given the planned path and the robot's current pose. The planned path is discretized, as shown in Fig. 2, and the robot control quantity is solved along the planned path.
Let $M=(x, y, \theta_M)$ and $T=(T_1, T_2, \ldots, T_n)$ with $T_i=(x_i, y_i)$; write $M_p=(x, y)$ and $M_o=\theta_M$, and define
$$T_{opt} = \arg\min_{T_i} D(M_p, T_i) \qquad (3)$$
where $D$ is the Euclidean distance and $T_{opt}$ gives the position. To obtain angle information, the expected pose is defined as
$$T_o = \|T_{opt}\|\, e^{j\theta}, \quad \theta \text{ the deviation angle of } T_{opt+1} \text{ relative to } T_{opt} \qquad (4)$$
The expected control law is
$$R_O = (x_o, y_o, z_o) = (T_{ox}-x,\; T_{oy}-y,\; \theta-\theta_M) \qquad (5)$$
and the final control likelihood is expressed as
$$P(R \mid M, T) = \mathrm{sim}(R, R_O) = \frac{1}{\sqrt{(x_r-x_o)^2 + (y_r-y_o)^2 + w_\theta (z_r-z_o)^2}} \qquad (6)$$
where $w_\theta$ is an adjustment weight.
To localize the target, a multi-mode target-representation method is adopted. $P(O \mid M, T)$ is the probability that the target appears under the current pose and track, and $P(O \mid R, M, T)$ is the probability that it appears under the corresponding control quantity. To determine the probability of the target's appearance, the target is represented in multiple modes: RFID, template images, and features are combined to represent the target from coarse to fine, guiding the robot to finish the target search efficiently and accurately.
An RFID tag is attached to the target to be searched. If the tag is not read, the target is still far away, so $P(O \mid M, T)=0$ and formula (2) is meaningless; this corresponds to the practical situation in which the target cannot appear (or cannot appear in the region), and the robot's motion is then guided entirely by path planning, i.e., the conventional path-tracking strategy. If the tag is read, the target exists in the surrounding region, and the image-based target-search strategy is started.
By examining the current image captured by the robot's vision, $P(O \mid M, T)$ and $P(O \mid R, M, T)$ are determined from the current image and the knowledge of the target: the camera image corresponding to $P(O \mid M, T)$ is the current image, denoted $Img_C$, while the image for $P(O \mid R, M, T)$ is obtained by applying an affine transformation according to $R$ to the current image, denoted $Img_{Aff}$:
$$P(O \mid M, T) = P(O \mid Img_C) \qquad (7)$$
$$P(O \mid R, M, T) = P(O \mid Img_{Aff}) \qquad (8)$$
The probability that the target appears in an image is denoted $P(O \mid I)$, with $I$ the image under investigation. The dimension of an image is very high, so determining $P(O \mid I)$ directly is very difficult; a suitable target-representation method is needed, and different methods are designed for the ward-round task to determine the target-appearance probability. The round targets in the ward environment fall into two classes. One class is equipment whose position is relatively fixed: for example, the monitor is generally placed on the table beside the patient's bed, and the suction bottle and drainage bag are placed below the bed. The other class is roving targets or targets with distinctive color: the infusion bottle, whose position the patient can adjust at will, the needle-insertion site, the wound site, the facial expression, and so on. For the first class, an image containing the target is collected in advance as a template, and image similarity evaluates the probability of the target's appearance; for the second class, target segmentation based on color features is designed to determine the target-appearance probability.
(1) Target segmentation based on template matching
Because the position of a first-class target is relatively fixed, the probability of the target's appearance can be represented by the appearance of a template image saved in advance, so
$$P(O \mid I) = \mathrm{sim}(I_{temp}, I) \qquad (9)$$
where $I_{temp}$ is the saved template image, $I$ is the current image, and $\mathrm{sim}(I_{temp}, I)$ is the similarity of the two images. Many methods can determine image similarity, such as the correlation coefficient, histogram methods, and Fourier descriptors.
(2) cut apart based on the target of feature
Because the second class target location is flexible relatively, as infusion bottle, the method for the tangible handmarking's thing of employing colouring information is determined the position of infusion bottle, and syringe needle thrusts the position simultaneously, wound location, and human face expressions etc. all are the zones with colour of skin.For the probability of fast definite target appearance, thereby adopt feature to come alternative image to finish similarity measurement.
P(O|I) = sim(I_temp, I) = sim(V_temp, V)    (10)
where V_temp is the saved template feature and V is the current feature. Because the robot must not only find the target during the search but also determine its size and pose, color features are used for target segmentation, and the bounding rectangle W is computed after segmentation:
sim(V_temp, V) = √((W_temp/W)² + (H_temp/H)²)    (11)
(a) Marker segmentation based on a color lookup table
For segmentation of the artificial marker, the Cb and Cr components of the YCbCr space are used with the Y component filtered out, and the color lookup table method builds a two-dimensional color probability model of the marker. During the round, the robot continuously acquires images, traverses every pixel of each image, and classifies it in real time to complete the marker segmentation; contour extraction is then performed on the segmented marker, which is labeled with its bounding rectangle.
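The lookup-table step above can be sketched as follows, using the quantization step of 5 gray levels and the probability threshold of 0.3 given in the parameter section. The training data and function names are illustrative, not from the patent:

```python
import numpy as np

QUANT = 5  # Cb/Cr quantized every 5 gray levels, per the parameter section

def build_lookup(train_cbcr):
    """2-D Cb/Cr probability model of the marker color, built from an
    N x 2 array of (Cb, Cr) training pixels with values in 0..255."""
    bins = 256 // QUANT + 1
    table = np.zeros((bins, bins))
    for cb, cr in train_cbcr:
        table[cb // QUANT, cr // QUANT] += 1
    return table / table.sum()

def segment_marker(cbcr_image, table, thresh=0.3):
    """Boolean marker mask: a pixel is kept when its looked-up color
    probability exceeds the threshold (0.3 in the parameter section)."""
    cb = cbcr_image[..., 0] // QUANT
    cr = cbcr_image[..., 1] // QUANT
    return table[cb, cr] > thresh

# 90% of the training pixels share the marker color (Cb=100, Cr=150).
train = np.array([[100, 150]] * 90 + [[40, 60]] * 10)
table = build_lookup(train)
mask = segment_marker(np.array([[[100, 150], [40, 60]]]), table)
```

The table lookup replaces a per-pixel model evaluation, which is what makes real-time traversal of every pixel feasible.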
(b) Skin color segmentation based on multiple color spaces
For skin color segmentation, several commonly used color spaces are employed separately and their results are combined into the final segmentation. Human skin color varies with the environment, but from the viewpoint of geometric pattern recognition, points of the same color cluster together in color space; a skin subspace can therefore be sought in each color space, and RGB, HSV, and YCrCb are each used for skin segmentation. The skin color model in the RGB space is given by formula (12):
[Skin color model in RGB space, formula (12); reproduced only as an image in the original and not recoverable here.]
The YCrCb model is defined by formula (13):
69 < Y < 256, 133 < Cr < 166, 79 < Cb < 129    (13)
where Y, Cr, Cb are the pixel values of the three color channels.
The HSV model is defined by formula (14):
H < 19, S ≥ 48    (14)
where H and S are the hue and saturation values, respectively.
Extensive experiments show that the RGB color space segments some non-skin regions (especially reddish ones) but generally misses nothing; the main reason is that the more ruddy parts of human skin participated in training, so red tends to be classified as skin. The HSV result segments non-skin regions more accurately, but its skin regions are incomplete, while the YCrCb result removes the reddish parts of the skin well. The results of the three color spaces are ANDed together; then, because the RGB segmentation is the most complete, the skin regions of the ANDed result are marked, and within these marked regions the corresponding areas of the RGB result are ORed back in to improve the segmentation. The resulting mask shows better interference resistance and stability.
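The AND/OR fusion described above can be sketched as follows. The text marks connected skin regions of the ANDed result; here that marking is approximated by a one-pixel dilation of the seeds, which is an assumption, as are the function name and toy masks:

```python
import numpy as np

def fuse_skin_masks(rgb_mask, hsv_mask, ycrcb_mask):
    """AND the three per-color-space masks to get reliable skin seeds,
    then, since the RGB result is the most complete, OR the RGB mask back
    in around those seeds. The 'marked region' around each seed is
    approximated by a cheap 4-neighborhood dilation."""
    seeds = rgb_mask & hsv_mask & ycrcb_mask
    grown = seeds.copy()
    grown[1:, :] |= seeds[:-1, :]   # shift down
    grown[:-1, :] |= seeds[1:, :]   # shift up
    grown[:, 1:] |= seeds[:, :-1]   # shift right
    grown[:, :-1] |= seeds[:, 1:]   # shift left
    return seeds | (grown & rgb_mask)

rgb = np.array([[1, 1, 0], [1, 1, 0]], bool)
hsv = np.array([[1, 0, 0], [1, 0, 0]], bool)
ycc = np.array([[1, 1, 0], [1, 0, 0]], bool)
fused = fuse_skin_masks(rgb, hsv, ycc)
```

The second column is rejected by HSV alone but recovered from the RGB mask because it borders a reliable seed, matching the behavior the text describes.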
To achieve highly stable and reliable target localization, different features are used for the specific round tasks. When the robot has found a target, it stops and performs target localization.
Because infusion bottles come in many shapes and the liquids they contain differ in color, segmenting the bottle directly is both difficult and unstable, so the artificial-marker method is still used for bottle localization: after the marker is segmented, the image transformation relating marker and bottle localizes the bottle. Since the searched image matches the target image closely, let WD and HD be the horizontal and vertical distances between the marker centroid and the bottle centroid, and HL, WL and HO, WO the heights and widths of the marker and bottle bounding rectangles. With the marker sufficiently close to the bottle, the approximate proportion HL/HD ≈ HL'/HD' holds, where HL' and HD' are the marker height in the searched image and the distance from the marker center to the bottle center. HL, HD, and HL' are known, so HD' can be computed; the marker center is also known, so the approximate bottle position P and scale Scale are obtained.
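The proportion above gives the coarse bottle position directly. A minimal sketch (the marker-to-bottle direction vector is an assumption for illustration, as are the function name and numbers):

```python
def locate_bottle(marker_center, HL_p, HL, HD, direction=(0.0, 1.0)):
    """Coarse bottle position from the detected marker via the proportion
    HL/HD ≈ HL'/HD' stated in the text.
    marker_center: marker centroid (x, y) in the searched image
    HL_p:          marker height HL' measured in the searched image
    HL, HD:        stored template marker height and marker-bottle distance
    direction:     unit vector from marker toward bottle (assumed known;
                   straight down by default)
    Returns the approximate bottle position P and the scale Scale."""
    HD_p = HL_p * HD / HL          # HD' = HL' * HD / HL
    scale = HL_p / HL              # image scale relative to the template
    P = (marker_center[0] + direction[0] * HD_p,
         marker_center[1] + direction[1] * HD_p)
    return P, scale

# Marker seen twice as large as in the template: the bottle distance doubles.
P, scale = locate_bottle((100.0, 50.0), HL_p=20.0, HL=10.0, HD=30.0)
```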
(1) Marker localization
After the marker segmentation is complete, the segmented regions are evaluated, small-area noise blobs are filtered out, and candidate target regions are obtained. To prevent interference from objects of similar color, the shape information of the object is also used to extract the final target. Because image moment invariants are unchanged under translation, rotation, and scaling, the seven Hu invariant moments serve as the target's shape features: the Hu moments of each segmented region are computed, and the nearest-neighbor method decides whether a region is the marker.
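The moment-based nearest-neighbor decision can be sketched as below. For brevity only the first two of the seven Hu invariants are implemented here (a full implementation, e.g. OpenCV's `cv2.HuMoments`, computes all seven); the toy regions are illustrative:

```python
import numpy as np

def hu_first_two(region):
    """First two Hu invariant moments of a binary region, computed from
    central moments normalized for scale."""
    ys, xs = np.nonzero(region)
    m00 = float(len(xs))
    xbar, ybar = xs.mean(), ys.mean()
    def mu(p, q):                          # central moment
        return (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()
    def eta(p, q):                         # scale-normalized moment
        return mu(p, q) / m00 ** ((p + q) / 2 + 1)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return np.array([h1, h2])

def nearest_region(candidates, template_hu):
    """Index of the candidate region closest to the marker template in
    Hu-moment space (the nearest-neighbor decision from the text)."""
    d = [np.linalg.norm(hu_first_two(c) - template_hu) for c in candidates]
    return int(np.argmin(d))

template_hu = hu_first_two(np.ones((4, 4)))        # square marker template
bar, square = np.ones((1, 8)), np.ones((3, 3))
best = nearest_region([bar, square], template_hu)  # the square wins
```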
To suppress the marker-detection noise caused by the robot's motion, the length and width of the marker are taken as measurements, the model is assumed to be uniform variation of length and width, and Kalman filtering removes the noise, improving the detection result.
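The uniform-variation assumption corresponds to a constant-velocity Kalman filter on each size channel. A minimal single-channel sketch (the noise levels q and r are illustrative; the patent gives no values):

```python
import numpy as np

def kalman_filter_size(measurements, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over noisy marker size measurements
    (length or width). Returns the filtered sizes."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: [size, size rate]
    H = np.array([[1.0, 0.0]])
    Q, R = q * np.eye(2), np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    filtered = []
    for z in measurements:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        filtered.append(float(x[0, 0]))
    return filtered

# A 10-pixel-wide marker measured with jitter stays near 10 after filtering.
sizes = kalman_filter_size([10.0, 10.6, 9.4, 10.3, 9.7, 10.1])
```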
(2) Infusion bottle localization based on active vision
Because the marker is close to the bottle, marker localization yields the approximate position and scale of the bottle. Taking position P as center and a 2Scale*2Scale neighborhood, fine localization of the bottle is then performed. Within this neighborhood, following the idea of active vision, a visual saliency map is obtained. The graph-based saliency algorithm has inherent advantages in simulating human vision: it mimics visual principles during feature extraction and introduces a Markov chain to compute saliency values when generating the saliency map.
First the feature maps of the image are computed; color and orientation feature maps are used, denoted M: [n]² → R. The saliency measure, i.e. the activation A: [n]² → R, is then computed from the feature map. With M(i, j) denoting the feature value at point (i, j), the dissimilarity function is defined as
d((i, j) ‖ (p, q)) ≜ |log(M(i, j) / M(p, q))|    (15)
Taking each point of the image as a node, a fully connected graph G_A is constructed; the weight of the edge between nodes (i, j) and (p, q) is
w((i, j), (p, q)) ≜ d((i, j) ‖ (p, q)) · F = d((i, j) ‖ (p, q)) · exp(−((i − p)² + (j − q)²) / (2σ²))    (16)
where F is a weight adjustment factor that controls the influence of spatial distance on the similarity.
On the constructed graph G_A, nodes represent states and weights represent transition probabilities, defining a Markov chain; the saliency map is computed from the chain's evolution. In regions with uniqueness, the accumulated probability value is large and so is the activation, yielding the corresponding saliency map. The saliency map of the segmented bottle neighborhood is computed, the maximum of the activation further localizes the bottle, and after localization the obtained region is confirmed with SIFT feature matching, giving the final bottle position.
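The Markov chain evolution of Eqs. (15)-(16) can be sketched directly: edge weights become row-normalized transition probabilities, and the chain's stationary distribution is the saliency map. The lazy-walk averaging added here to guarantee convergence of the power iteration is an implementation assumption (it does not change the stationary distribution); the toy feature map is illustrative:

```python
import numpy as np

def saliency_map(feature_map, sigma=1.0, iters=200):
    """Graph-based saliency: pixels are chain states, edge weights follow
    Eq. (16) with the dissimilarity of Eq. (15), and the stationary
    distribution of the resulting Markov chain is the saliency map."""
    h, w = feature_map.shape
    n = h * w
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    vals = feature_map.ravel().astype(float)
    d = np.abs(np.log(vals[:, None] / vals[None, :]))      # Eq. (15)
    sq = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    W = d * np.exp(-sq / (2.0 * sigma ** 2))               # Eq. (16)
    P = W / W.sum(axis=1, keepdims=True)                   # transition matrix
    P = 0.5 * (np.eye(n) + P)                              # lazy walk
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):                                 # power iteration
        pi = pi @ P
    return pi.reshape(h, w)

# The single deviating pixel accumulates the most probability mass.
fm = np.array([[1.0, 1.0, 1.0], [1.0, 5.0, 1.0], [1.0, 1.0, 1.0]])
sal = saliency_map(fm)
```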
For the localization of the monitor and the drainage bag, since such instruments are relatively fixed, the approximate scale Scale and position P of the target are determined by image matching; taking P as center and a 3Scale*3Scale neighborhood, fine localization is performed. The active-vision saliency map again provides visual guidance. Targets with distinctive color information, such as the drainage bag, are confirmed with a color histogram; targets with indistinct color but distinctive intensity are confirmed with a gray-level histogram.
To check the needle insertion site and the wound condition, skin must be exposed; otherwise even a person cannot observe the needle insertion. Through voice interaction, the patient or caregiver is asked to uncover the wound or needle site, and skin color segmentation searches for the insertion site. After a skin region is found, the wound-site information recorded on the RFID tag is fused with knowledge of human body structure to distinguish the needle site, the wound site, and so on. Once skin detection and segmentation are solved, the face or the needle can be determined from human structural features, and the wound can likewise be located.
According to the localized target region, the pan-tilt camera is controlled so that the localized target lies as close to the center of the field of view as possible, in preparation for the subsequent image zoom. A zone-logic method is adopted: it is simple to implement and avoids a complicated camera calibration step. The image plane is divided into a camera-stationary zone, a motion-holding zone, and a motion zone, and a different pan-tilt control strategy is applied depending on the zone in which the localized target lies, thereby controlling the camera motion. Because the server side needs a clear view of the equipment state, if the target is small the camera further zooms in on the current object, making observation convenient for the nurse.
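The zone logic above reduces to classifying the target's offset from the image center. A minimal sketch (the zone boundary fractions are illustrative; the patent gives no numeric values, and the zone names are paraphrased):

```python
def ptz_zone(cx, cy, width, height, dead=0.15, hold=0.35):
    """Zone-logic decision for the pan-tilt camera: 'stay' in the central
    stationary zone, 'hold' current motion in the middle zone, 'move'
    toward the target in the outer zone. Offsets are measured as a
    fraction of the half-image size."""
    dx = abs(cx - width / 2.0) / (width / 2.0)
    dy = abs(cy - height / 2.0) / (height / 2.0)
    r = max(dx, dy)
    if r <= dead:
        return "stay"
    if r <= hold:
        return "hold"
    return "move"

# Centered target: no motion; target near the edge: steer the camera.
zones = [ptz_zone(x, 50, 100, 100) for x in (50, 60, 95)]
```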
The infrared transmitter/receiver on the robot body is 2.5 meters from the ceiling and within 1.5 meters of the StarGazer tags, and the actual positioning accuracy is 2 centimeters, which satisfies the ward round requirement. w_θ is taken as 0.5; the final control law is obtained by maximum likelihood over random samples, with the sample count adapted: if the target similarity is small the sample count is larger, and it decreases as the similarity increases. The correlation coefficient method matches the image against the template image. To speed up segmentation and shrink the color lookup table, Cb and Cr are quantized every 5 gray levels, and the probability threshold for marker segmentation is set to 0.3. Since the probability that the target appears in the matching stage is high, the matching thresholds are low: the SIFT matching threshold is set to 5, the color histogram threshold to 0.6, and the gray-level histogram threshold to 0.6, completing the robot ward round.
In Fig. 3, the system comprises a mobile robot 1 on which an RFID antenna 2 is arranged; a pan-tilt camera 3 is arranged on top of the mobile robot 1, and an infrared positioning system 4 and a laser sensor 5 are arranged at its front. The RFID antenna 2 is matched with the RFID tags on each sickbed in the ward.
The robot of the utility model comprises a self-positioning system and a target localization system. Self-positioning uses the infrared positioning system 4, which consists of an infrared transmitter/receiver and passive infrared tags. The tags are attached to the ceiling and the transmitter/receiver is mounted on top of the robot platform; when the transmitter/receiver detects a tag, it passes the robot's pose relative to that tag and the tag's ID to the service robot over a serial port. Because the tag's position in the world coordinate system is known, the robot's own pose in world coordinates is obtained by coordinate transformation, achieving self-positioning. The infrared system's accuracy is high, but mislocalization still occurs; to filter this noise, Kalman filtering is applied to the robot's pose.
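The coordinate transformation from tag-relative pose to world pose can be sketched as a planar rigid transform (the 2-D planar model and function names are assumptions; the StarGazer also reports height, which is omitted here):

```python
import math

def robot_world_pose(tag_pose, rel_pose):
    """Convert the pose the infrared receiver reports relative to a
    ceiling tag into world coordinates, given the tag's known world pose.
    Both poses are (x, y, theta)."""
    tx, ty, tth = tag_pose
    rx, ry, rth = rel_pose
    wx = tx + rx * math.cos(tth) - ry * math.sin(tth)
    wy = ty + rx * math.sin(tth) + ry * math.cos(tth)
    return wx, wy, (tth + rth) % (2.0 * math.pi)

# Tag at (2, 3) rotated 90 degrees; robot 1 m along the tag's x-axis.
pose = robot_world_pose((2.0, 3.0, math.pi / 2), (1.0, 0.0, 0.0))
```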
The interface comprises a server-side interface and a body-side interface. The server side receives the video streamed back by the robot body, displays the captured pictures of targets detected by the body, stores them in a database, and organizes them according to the patient's round information, so that the nurse need not watch the interface constantly: only the organized pictures are retrieved, erroneous or missed captures are easily found, and manual operations can be performed on those images, reducing the nurse's workload. The server side also shows the current round state and the robot's running trajectory, and retains a robot operation function so that the robot can be teleoperated when needed, realizing remote robot control and human-machine interaction. The body side mainly comprises a video display window, an RFID reading window, a robot trajectory window, and the robot's current pose; the robot can likewise be controlled on the body.

Claims (2)

1. A ward round service robot system based on Bayesian theory, characterized in that it comprises a mobile robot; an RFID antenna is arranged on the mobile robot, a pan-tilt camera is arranged on top of the mobile robot, and an infrared positioning system and a laser sensor are arranged at the front of the mobile robot; the RFID antenna is matched with the RFID tags on each sickbed in the ward.
2. The ward round service robot system based on Bayesian theory as claimed in claim 1, characterized in that the infrared positioning system consists of an infrared transmitter/receiver and passive infrared tags; the passive infrared tags are attached to the ceiling of the ward, and the infrared transmitter/receiver is mounted on top of the robot platform.
CN2011200488197U 2011-02-26 2011-02-26 Ward round robot system based on Bayesian theory Expired - Fee Related CN202010257U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011200488197U CN202010257U (en) 2011-02-26 2011-02-26 Ward round robot system based on Bayesian theory


Publications (1)

Publication Number Publication Date
CN202010257U true CN202010257U (en) 2011-10-19

Family

ID=44780862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011200488197U Expired - Fee Related CN202010257U (en) 2011-02-26 2011-02-26 Ward round robot system based on Bayesian theory

Country Status (1)

Country Link
CN (1) CN202010257U (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103663A (en) * 2011-02-26 2011-06-22 山东大学 Ward visit service robot system and target searching method thereof
CN104399139A (en) * 2014-11-25 2015-03-11 苏州贝多环保技术有限公司 Intelligent infusion support positioning system based on target searching and intelligent infusion support movement controlling method
CN106393143A (en) * 2016-11-18 2017-02-15 上海木爷机器人技术有限公司 Mode switching system and method
CN107194395A (en) * 2017-05-02 2017-09-22 华中科技大学 A kind of object dynamic positioning method based on colour recognition and contours extract
CN108403370A (en) * 2018-04-11 2018-08-17 广东铱鸣智能医疗科技有限公司 A kind of novel intelligent nursing robot
CN108524114A (en) * 2018-02-07 2018-09-14 成都中友启明科技有限公司 A kind of control system and control method of medicinal intelligent stretcher
CN109429003A (en) * 2017-08-29 2019-03-05 杭州海康机器人技术有限公司 A kind of image-pickup method, device, unmanned plane and computer readable storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103663A (en) * 2011-02-26 2011-06-22 山东大学 Ward visit service robot system and target searching method thereof
CN104399139A (en) * 2014-11-25 2015-03-11 苏州贝多环保技术有限公司 Intelligent infusion support positioning system based on target searching and intelligent infusion support movement controlling method
CN104399139B (en) * 2014-11-25 2017-04-19 苏州贝多环保技术有限公司 Intelligent infusion support positioning system based on target searching and intelligent infusion support movement controlling method
CN106393143A (en) * 2016-11-18 2017-02-15 上海木爷机器人技术有限公司 Mode switching system and method
CN107194395A (en) * 2017-05-02 2017-09-22 华中科技大学 A kind of object dynamic positioning method based on colour recognition and contours extract
CN107194395B (en) * 2017-05-02 2020-02-14 华中科技大学 Object dynamic positioning method based on color identification and contour extraction
CN109429003A (en) * 2017-08-29 2019-03-05 杭州海康机器人技术有限公司 A kind of image-pickup method, device, unmanned plane and computer readable storage medium
CN109429003B (en) * 2017-08-29 2021-04-30 杭州海康机器人技术有限公司 Image acquisition method and device, unmanned aerial vehicle and computer readable storage medium
CN108524114A (en) * 2018-02-07 2018-09-14 成都中友启明科技有限公司 A kind of control system and control method of medicinal intelligent stretcher
CN108403370A (en) * 2018-04-11 2018-08-17 广东铱鸣智能医疗科技有限公司 A kind of novel intelligent nursing robot

Similar Documents

Publication Publication Date Title
CN102103663B (en) Ward visit service robot system and target searching method thereof
CN202010257U (en) Ward round robot system based on Bayesian theory
CN111932588B (en) Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN109887040B (en) Moving target active sensing method and system for video monitoring
DE112017002154B4 (en) Mobile robot and control method for a mobile robot
Kalinov et al. Warevision: Cnn barcode detection-based uav trajectory optimization for autonomous warehouse stocktaking
JP3781370B2 (en) Mobile device
Bargoti et al. A pipeline for trunk detection in trellis structured apple orchards
Merino et al. Vision-based multi-UAV position estimation
CN103941746A (en) System and method for processing unmanned aerial vehicle polling image
CN109255568A (en) A kind of intelligent warehousing system based on image recognition
CN110084842B (en) Servo secondary alignment method and device for robot holder
EP3474188A1 (en) Methods and systems for object search in a video stream
Lin et al. Automatic detection of plant rows for a transplanter in paddy field using faster r-cnn
CN205229809U (en) Utilize unmanned vehicles's pasture intelligent management system and unmanned vehicles thereof
CN108919811A (en) A kind of indoor mobile robot SLAM method based on tag label
WO2019151876A1 (en) Cargo detection and tracking
CN106647738A (en) Method and system for determining docking path of automated guided vehicle, and automated guided vehicle
CN107392162A (en) Dangerous person's recognition methods and device
CN109376584A (en) A kind of poultry quantity statistics system and method for animal husbandry
CN115346256A (en) Robot searching method and system
EP4071684A1 (en) Warehouse monitoring system
CN108376237A (en) A kind of house visiting management system and management method based on 3D identifications
Ma et al. An intelligent object detection and measurement system based on trinocular vision
CN108470165B (en) Fruit visual collaborative search method for picking robot

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111019

Termination date: 20140226