CN110103223A - Vision-recognition-based automatic following and obstacle-avoidance method and robot - Google Patents

Vision-recognition-based automatic following and obstacle-avoidance method and robot

Info

Publication number
CN110103223A
Authority
CN
China
Prior art keywords
target
color
color image
image
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910444843.3A
Other languages
Chinese (zh)
Inventor
杨立娟
苏文杰
刘剑坤
曾钰文
张寅喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201910444843.3A priority Critical patent/CN110103223A/en
Publication of CN110103223A publication Critical patent/CN110103223A/en
Priority to CN201910757661.1A priority patent/CN110355765B/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a vision-recognition-based automatic following and obstacle-avoidance method, comprising: acquiring an image of a target and recognizing and locating the target according to the color of the target's clothing; planning a following path according to the azimuth and distance information of the target; and, during following, scanning the surrounding environment in real time and adjusting the following path when necessary. The disclosure further provides a vision-recognition-based automatic following and obstacle-avoidance robot, comprising: a robot body, a power module, a vision positioning module, a control module and an obstacle-avoidance module. Because the target is recognized and located by capturing the color of its clothing, the followed person does not need to carry a transmitter or receiver, electromagnetic interference is reduced, and following accuracy is improved; taking the clothing color of the target as the recognition object reduces the computational load of visual recognition and improves response speed; and screening candidate targets by color-block area reduces the probability of following the wrong person.

Description

Vision-recognition-based automatic following and obstacle-avoidance method and robot
Technical field
The disclosure belongs to the field of mechatronic control, and in particular relates to a vision-recognition-based automatic following and obstacle-avoidance method and robot.
Background art
When a person needs to carry many or heavy items, for example heavy luggage or shopping, the task is very tiring. A robot that follows its owner automatically can carry these items, so such robots have great development prospects and strong market demand. However, current domestic automatic following robots rely on a matched remote-control device: the followed person must carry a signal transmitter or receiver, the robot cannot keep tracking the target while avoiding obstacles, and it cannot adapt to complex and varied road conditions.
Summary of the invention
In view of the above deficiencies, the disclosure aims to provide a vision-recognition-based automatic following and obstacle-avoidance robot that can follow a target by recognizing the color of the target's clothing, detect the surrounding environment in real time to avoid obstacles, and adapt to different road conditions.
The purpose of the disclosure is achieved by the following technical solutions:
A vision-recognition-based automatic following and obstacle-avoidance method, comprising the following steps:
S100: acquiring an image of the target, recognizing the target according to the color of its clothing, calculating the centroid coordinate of a color block from the color-block shape extracted from the clothing color, determining the azimuth of the target relative to the robot from the abscissa of the centroid coordinate and the distance between the target and the robot from the ordinate of the centroid coordinate, thereby completing target positioning;
S200: with the current position known, scanning surrounding obstacle information, and planning a following path according to the azimuth and distance information of the target;
S300: during following, continuously scanning surrounding obstacle information in real time and adjusting the following path when an obstacle is found.
Preferably, step S100 includes the following steps:
S101: obtaining an original YUV color image of the target's clothing;
S102: preprocessing the original YUV color image of the target's clothing;
S103: converting the preprocessed original YUV color image to an RGB color image, then converting the RGB color image to a new YUV color image by separating the luminance and chrominance values, and applying brightness-weakening processing to the new YUV color image;
S104: extracting the color-block colors, the number of color blocks and the color-block shapes from the brightness-weakened new YUV color image;
S105: recognizing the target according to the color-block colors and the number of color blocks;
S106: calculating the centroid coordinate of a color block from its shape, and locating the target according to the centroid coordinate.
Preferably, in step S102, the preprocessing includes performing histogram equalization on the original YUV color image of the target's clothing.
Preferably, the histogram equalization is completed by mapping with the cumulative distribution function, specifically:

s_k = ((L - 1)/n) · Σ_{j=0..k} n_j

where n is the total number of pixels in the image, n_j is the number of pixels at gray level j, and L is the number of possible gray levels in the image.
Preferably, in step S103, the preprocessed original YUV color image is converted to an RGB color image by the following formulas:
R = Y + 1.14V
G = Y - 0.39U - 0.58V
B = Y + 2.03U
where R, G, B are the per-channel pixel values in RGB color space, Y, U, V are the corresponding pixel values in YUV color space, and R, G, B, Y, U, V all take values between 0 and 255;
The RGB color image is then converted by the following formulas to a new YUV color image in which the influence of brightness is weakened:
UU = R - G = 0.39U + 1.72V
VV = B - G = 2.42U + 0.58V
CC = R + G + B = 3Y + 0.56V + 1.64U
where UU, VV, CC are the per-channel pixel values in the new brightness-weakened YUV color space.
Preferably, in step S103, the brightness-weakening processing of the new YUV color image is completed by the following formulas:
U′ = (15UU)/CC
V′ = (15VV)/CC
where U′ and V′ denote the chrominance values of the new YUV color image after brightness weakening.
Preferably, in step S106, calculating the centroid coordinate of a color block from its shape includes: obtaining the left boundary coordinate x1, right boundary coordinate x2, upper boundary coordinate y2 and lower boundary coordinate y1 of the color-block contour; the centroid coordinate is then ((x1+x2)/2, (y1+y2)/2).
The disclosure further provides a vision-recognition-based automatic following and obstacle-avoidance robot, comprising:
a robot body, including a housing, several support plates and an expansion plate, the housing being bolted to the support plates and the expansion plate being hinged to the housing;
a power module, for driving the robot to move in different directions;
a vision positioning module, for acquiring an image of the target, recognizing the target according to the clothing color in the image, calculating the centroid coordinate of a color block from the color-block shape extracted from the clothing color, and locating the target according to the centroid coordinate;
an obstacle-avoidance module, for obtaining information about obstacles around the robot and feeding it back to the control module;
a control module, for planning a following path according to the target position provided by the vision positioning module and the environment information provided by the obstacle-avoidance module.
Preferably, the robot further includes a storage module for storing the clothing color parameters of the target.
Preferably, the vision positioning module includes:
an acquiring unit, for obtaining an original YUV color image of the target's clothing;
a preprocessing unit, for preprocessing the original YUV color image of the target's clothing;
a converting unit, for converting the preprocessed original YUV color image to an RGB color image, then converting the RGB color image to a new YUV color image and applying brightness-weakening processing to it;
a feature acquiring unit, for extracting the color-block colors, the number of color blocks and the color-block shapes from the brightness-weakened new YUV color image;
a recognition unit, for recognizing the target according to the color-block colors and the number of color blocks;
a positioning unit, for calculating the centroid coordinate of a color block from its shape and locating the target according to the centroid coordinate.
Compared with the prior art, the disclosure brings the following beneficial effects:
1. The target is recognized and located visually, so the followed person does not need to carry a transmitter or receiver; electromagnetic interference is reduced and following accuracy is improved.
2. Automatic positioning and obstacle avoidance are realized, so the robot can be used in more scenarios.
Detailed description of the invention
Fig. 1 is a flowchart of a vision-recognition-based automatic following and obstacle-avoidance method provided by an embodiment of the disclosure;
Fig. 2 is a flowchart of a method, provided by another embodiment of the disclosure, for recognizing and locating a target according to the color of the target's clothing;
Fig. 3 is a front view of a vision-recognition-based automatic following and obstacle-avoidance robot provided by the disclosure;
Fig. 4 is a side view of the robot;
Fig. 5 is a structural schematic diagram of the vision positioning module of the robot.
Detailed description
The technical solutions of the disclosure are described in detail below with reference to Fig. 1 to Fig. 5 and the embodiments. The disclosure can be embodied in many different forms, and its protection scope is not limited to the embodiments mentioned herein.
Referring to Fig. 1, the disclosure provides a vision-recognition-based automatic following and obstacle-avoidance method, comprising the following steps:
S100: acquiring an image of the target, recognizing the target according to the color of its clothing, calculating the centroid coordinate of a color block from the color-block shape extracted from the clothing color, determining the azimuth of the target relative to the robot from the abscissa of the centroid coordinate and the distance between the target and the robot from the ordinate of the centroid coordinate, thereby completing target positioning;
S200: with the current position known, scanning surrounding obstacle information, and planning a following path according to the azimuth and distance information of the target;
S300: during following, continuously scanning surrounding obstacle information in real time and adjusting the following path when an obstacle is found.
The above embodiment constitutes the complete technical solution of the disclosure: the target is recognized and located by the clothing color captured in the image. Unlike the prior art, the disclosure does not require the followed person to carry a transmitter or receiver, which reduces electromagnetic interference and improves following accuracy; in addition, taking the clothing color of the target as the recognition object reduces the computational load of visual recognition and improves response speed.
In another embodiment, as shown in Fig. 2, step S100 includes the following steps:
S101: obtaining an original YUV color image of the target's clothing;
S102: preprocessing the original YUV color image of the target's clothing;
S103: converting the preprocessed original YUV color image to an RGB color image, then converting the RGB color image to a new YUV color image by separating the luminance and chrominance values, and applying brightness-weakening processing to the new YUV color image;
S104: extracting the color-block colors, the number of color blocks and the color-block shapes from the brightness-weakened new YUV color image;
S105: recognizing the target according to the color-block colors and the number of color blocks;
S106: calculating the centroid coordinate of a color block from its shape, and locating the target according to the centroid coordinate.
In this embodiment, the color blocks obtained from the YUV color image are combined according to a predefined color set of the color space, where a color block is a region of uniform color in the image. By counting the colors present in the blocks and the number of blocks of each color, the threshold range of the target clothing image is obtained. The purpose of the predefined color set is to discretize color: several colors can be defined from the color space. Illustratively, the color set may be red, yellow and blue, meaning that only these three colors are recognized on the target's clothing. Those skilled in the art will appreciate that if the predefined color set contains too many colors, the computational load of target recognition increases and recognition efficiency drops.
Further, color blocks of the same color within the threshold range are combined to form color-block regions of different sizes; by calculating the geometry of each region and its proportion within the threshold range, the geometric shape of the target clothing image is obtained. By comparing the color-block colors and the number of color blocks with the stored clothing color parameters, the followed target can be recognized.
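The grouping of same-color pixels into color blocks described in this embodiment can be sketched as a flood fill over a small label grid. The grid representation and the toy three-color palette are illustrative assumptions, not the patented implementation:

```python
from collections import deque

def color_blocks(grid):
    """Group same-color cells of a 2-D label grid into 4-connected
    color blocks; cells labelled None are background.
    Returns a dict color -> list of blocks, each block a list of (x, y)."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    blocks = {}
    for y in range(h):
        for x in range(w):
            color = grid[y][x]
            if color is None or seen[y][x]:
                continue
            # flood-fill one block of this color
            block, queue = [], deque([(x, y)])
            seen[y][x] = True
            while queue:
                cx, cy = queue.popleft()
                block.append((cx, cy))
                for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                    if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] and grid[ny][nx] == color:
                        seen[ny][nx] = True
                        queue.append((nx, ny))
            blocks.setdefault(color, []).append(block)
    return blocks

# a toy "clothing" image: two separate red blocks and one blue block
img = [
    ["R", "R", None, "B"],
    [None, None, None, "B"],
    ["R", None, None, None],
]
counts = {c: len(bs) for c, bs in color_blocks(img).items()}
print(counts)  # {'R': 2, 'B': 1}
```

The block count per color is what the embodiment compares against the stored clothing color parameters.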
Further, the centroid coordinate is calculated from the color-block shape, so that target positioning is completed and the position and azimuth information of the target are obtained.
In another embodiment, in step S102, the preprocessing includes performing histogram equalization on the original YUV color image of the target's clothing.
In this embodiment, performing histogram equalization on the original YUV color image of the target's clothing enhances the contrast between the clothing color and the surrounding image, making it easier to recognize the clothing color accurately, and at the same time filters out some interfering image content.
In another embodiment, the histogram equalization is completed by mapping with the cumulative distribution function, specifically:

s_k = ((L - 1)/n) · Σ_{j=0..k} n_j

where n is the total number of pixels in the image, n_j is the number of pixels at gray level j, and L is the number of possible gray levels in the image.
In this embodiment, this mapping enhances the contrast between the target clothing image and the surrounding image, which improves the accuracy of target positioning.
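The cumulative-distribution mapping above can be sketched as follows. Operating on a flat list of 8-bit gray values (rather than the luminance channel of a real frame) is an illustrative simplification:

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a list of gray values in [0, levels-1] using
    the cumulative distribution: s_k = (levels - 1)/n * sum_{j<=k} n_j."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [0] * levels, 0
    for k in range(levels):
        running += hist[k]
        cdf[k] = running
    # map each gray level k to round((levels-1) * cdf[k] / n)
    lut = [round((levels - 1) * c / n) for c in cdf]
    return [lut[p] for p in pixels]

# a low-contrast patch (values 100..103) is spread across the full range
flat = [100, 100, 101, 101, 102, 102, 103, 103]
print(equalize(flat))  # [64, 64, 128, 128, 191, 191, 255, 255]
```

The spread-out output illustrates why the mapping raises the contrast between the clothing and its surroundings.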
In another embodiment, in step S103, the preprocessed original YUV color image is converted to an RGB color image by the following formulas:

R = Y + 1.14V
G = Y - 0.39U - 0.58V
B = Y + 2.03U

where R, G, B are the per-channel pixel values in RGB color space, Y, U, V are the corresponding pixel values in YUV color space, and R, G, B, Y, U, V all take values between 0 and 255.
The RGB color image is then converted back to a new YUV color image by the following formulas:

UU = R - G = 0.39U + 1.72V
VV = B - G = 2.42U + 0.58V
CC = R + G + B = 3Y + 0.56V + 1.64U

where UU, VV, CC are the per-channel pixel values in the new YUV color space.
In another embodiment, the brightness-weakening processing of the new YUV color image is completed by the following formulas:
U′ = (15UU)/CC (3)
V′ = (15VV)/CC (4)
where U′ and V′ denote the chrominance values of the new YUV color image after brightness weakening.
In this embodiment, the clothing color captured by the camera can vary considerably under different brightness levels, so the influence of brightness on the target image needs to be weakened by formulas (3) and (4). Since brightness cannot be ignored entirely when judging clothing color, the UU and VV values are amplified and divided by CC, which contains the brightness term 3Y; the resulting U′ and V′ both account for the influence of brightness Y on the target color and weaken its interference.
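The per-pixel conversion chain of this and the preceding embodiment (YUV to RGB, then to UU, VV, CC, then to the brightness-weakened U′, V′) can be transcribed as follows. The formulas are taken literally from the disclosure; the sample pixel values are arbitrary assumptions used only to show the effect of dividing by CC:

```python
def weakened_chroma(y, u, v):
    """Transcription of the disclosure's formulas: YUV -> RGB ->
    (UU, VV, CC) -> brightness-weakened chrominance (U', V')."""
    r = y + 1.14 * v
    g = y - 0.39 * u - 0.58 * v
    b = y + 2.03 * u
    uu = r - g          # = 0.39*u + 1.72*v
    vv = b - g          # = 2.42*u + 0.58*v
    cc = r + g + b      # = 3*y + 1.64*u + 0.56*v, carries the brightness term 3y
    return 15 * uu / cc, 15 * vv / cc

# the same chroma (u, v) under low and high brightness: dividing by CC
# shrinks the brightness dependence that raw UU, VV would not shrink
lo = weakened_chroma(60, 40, 90)
hi = weakened_chroma(200, 40, 90)
print(lo, hi)
```

Because UU and VV themselves do not depend on Y at all while CC grows with Y, higher brightness yields smaller but still positive U′ and V′ for the same chroma, which is the "account for yet weaken" behavior the embodiment describes.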
In another embodiment, in step S106, calculating the centroid coordinate of a color block from its shape includes: obtaining the left boundary coordinate x1, right boundary coordinate x2, upper boundary coordinate y2 and lower boundary coordinate y1 of the color-block contour; the centroid coordinate is then ((x1+x2)/2, (y1+y2)/2).
In this embodiment, positive and negative values of the abscissa indicate whether the followed target lies to one side or the other of the robot, and the absolute value of the abscissa indicates the degree of deviation; positive and negative values of the ordinate indicate whether the target is far from or close to the robot, and the absolute value of the ordinate indicates the magnitude of the distance.
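The centroid computation of step S106 and this embodiment's reading of its coordinates can be sketched as follows. The mapping of positive abscissa to "left" and positive ordinate to "far" is an assumption, since the disclosure states only that the signs encode side and range:

```python
def centroid(x1, x2, y1, y2):
    """Centroid of a color block from its bounding coordinates,
    as in step S106: ((x1 + x2)/2, (y1 + y2)/2)."""
    return (x1 + x2) / 2, (y1 + y2) / 2

def interpret(cx, cy):
    """Read a centroid the way this embodiment does: the sign of the
    abscissa gives the side, its magnitude the deviation; the sign of
    the ordinate gives far/near, its magnitude the distance.
    The string labels are illustrative, not part of the disclosure."""
    side = "left" if cx > 0 else "right" if cx < 0 else "centered"
    rng = "far" if cy > 0 else "near" if cy < 0 else "at reference distance"
    return side, abs(cx), rng, abs(cy)

cx, cy = centroid(-30, 10, -5, 45)  # x1=-30, x2=10, y1=-5, y2=45
print((cx, cy))                     # (-10.0, 20.0)
print(interpret(cx, cy))            # ('right', 10.0, 'far', 20.0)
```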
In another embodiment, the disclosure also provides a vision-recognition-based automatic following and obstacle-avoidance robot, comprising:
a robot body 6, including a housing, several support plates and an expansion plate, the housing being bolted to the support plates and the expansion plate being hinged to the housing;
a power module, for driving the robot to move in different directions;
a vision positioning module 7, for acquiring an image of the target, recognizing the target according to the clothing color in the image, calculating the centroid coordinate of a color block from the color-block shape extracted from the clothing color, and locating the target according to the centroid coordinate;
an obstacle-avoidance module 8, for obtaining information about obstacles around the robot and feeding it back to the control module;
a control module 9, for planning a following path according to the target position provided by the vision positioning module and the surrounding obstacle information provided by the obstacle-avoidance module.
In another embodiment, the robot further includes a storage module for storing the clothing color parameters of the target.
In this embodiment, the clothing color parameters of the target need to be stored before following starts, so that after following begins the robot can match the captured clothing color against the stored parameters as a reference.
In another embodiment, the vision positioning module 7 includes:
an acquiring unit, for obtaining an original YUV color image of the target's clothing;
a preprocessing unit, for preprocessing the original YUV color image of the target's clothing;
a converting unit, for converting the preprocessed original YUV color image to an RGB color image, then converting the RGB color image to a new YUV color image and applying brightness-weakening processing to it;
a feature acquiring unit, for extracting the color-block colors, the number of color blocks and the color-block shapes from the brightness-weakened new YUV color image;
a recognition unit, for recognizing the target according to the color-block colors and the number of color blocks;
a positioning unit, for calculating the centroid coordinate of a color block from its shape and locating the target according to the centroid coordinate.
In another embodiment, as shown in Fig. 3, the support plates include a first support plate 1, a second support plate 2 and a third support plate 3; the first support plate 1 is connected in sequence with the second support plate 2 and the third support plate 3 by copper posts.
In this embodiment, the housing is made of plastic to reduce weight and improve the robot's agility. The two sides and front of the housing are slotted so that the internal sensors can interact with the outside world, and cushioning material (such as sponge, polystyrene foam or bubble wrap) is attached around the housing to absorb the impact of sudden collisions. The support plates are geometric in shape, preferably square or rectangular, with a triangular front edge that reduces the chance of the robot body striking an obstacle when turning. The first support plate 1 is made of aluminum alloy, which adds weight to the robot's base and improves stability; it supports the power module and the control module. The second support plate 2 and third support plate 3 are made of acrylic, which reduces the robot's internal weight; they support the obstacle-avoidance module and the vision positioning module respectively.
In another embodiment, the power module includes wheels 4, DC gear motors 5 and a driving unit; the wheels 4 are connected to the output shafts of the DC gear motors 5, and the DC gear motors 5 are connected to the driving unit by leads.
In this embodiment, the wheels 4 are preferably Mecanum wheels, which allow forward, lateral, diagonal and rotational motion and combinations thereof, so that obstacles encountered while moving can be avoided flexibly. The driving unit uses a single driver chip embedded in the expansion plate, and can drive the DC gear motors 5 in both directions and control their speed.
In another embodiment, the obstacle-avoidance module 8 includes: a detection unit, for obtaining obstacle information within a 360-degree range around the robot; and a ranging unit, for obtaining the distance between the robot and surrounding obstacles.
In this embodiment, the detection unit obtains obstacle information within the robot's scanning range. Illustratively, if the robot performs a following task outdoors, the detection unit can detect dynamic obstacles such as pedestrians and vehicles as well as static obstacles such as trees, utility poles and steps; indoors, it can detect dynamic obstacles such as children and pets as well as static obstacles such as tables, chairs and household appliances. The detection unit sends the collected obstacle information to the control module, which generates a planar map and plans a path around the obstacle positions given by the map, obtaining the shortest obstacle-avoiding route to the target.
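The planar-map path planning described in this embodiment can be sketched as a breadth-first search over an occupancy grid. The grid representation and 4-connectivity are assumptions; the disclosure does not name a specific planner:

```python
from collections import deque

def shortest_route(grid, start, goal):
    """Shortest 4-connected route on an occupancy grid (1 = obstacle).
    Returns the list of (x, y) cells from start to goal, or None if the
    goal is unreachable (the case in which the robot's alarm would sound)."""
    w, h = len(grid[0]), len(grid)
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 and (nx, ny) not in prev:
                prev[(nx, ny)] = cell
                queue.append((nx, ny))
    return None

grid = [
    [0, 0, 0],
    [1, 1, 0],  # a wall forces a detour on the right
    [0, 0, 0],
]
route = shortest_route(grid, (0, 0), (0, 2))
print(route)            # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
print(len(route) - 1)   # 6 moves
```

Because breadth-first search explores cells in order of distance from the start, the first time the goal is dequeued the reconstructed path is a shortest obstacle-avoiding route.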
In another embodiment, the detection unit includes a lidar sensor for scanning, in real time, the angle and distance information of obstacles within a 360-degree range around the robot.
In this embodiment, laser pulses are emitted by a laser diode and are scattered in all directions after reflecting off a target. Part of the scattered light returns to the sensor; by recording and processing the time elapsed from emission of the light pulse to reception of its return, the distance to the target can be measured.
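The time-of-flight range computation described above amounts to one line. The constant is the speed of light in vacuum; the sample pulse time is illustrative:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    """Target distance from a lidar pulse's round-trip time: the pulse
    travels out and back, so distance = c * t / 2."""
    return C * round_trip_seconds / 2

# a 20 ns round trip corresponds to roughly 3 m
d = tof_distance(20e-9)
print(round(d, 3))  # 2.998
```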
In another embodiment, the ranging unit includes an ultrasonic rangefinder, a first infrared rangefinder and a second infrared rangefinder; the ultrasonic rangefinder obtains the distance to obstacles in front of the robot in real time, while the first and second infrared rangefinders are arranged on the two sides of the robot to obtain the distance to obstacles on either side.
In another embodiment, the control module 9 includes: a data transceiver unit, which includes a wireless transceiver for receiving and sending, in real time, the feedback information of the obstacle-avoidance module and the vision positioning module; a data processing unit, which includes a miniature PC or microcontroller for analyzing the feedback information received by the data transceiver unit and issuing commands; and an alarm unit, which includes a buzzer for alerting the target when the robot enters an uncontrollable state.
In this embodiment, the wireless transceiver is preferably an NRF24L01, which has a large data buffer and a fast refresh rate; the miniature PC or microcontroller is preferably one with strong data-processing capability, such as a Raspberry Pi or an STM32. The miniature PC receives, in real time through the data transceiver unit, the information sent by the obstacle-avoidance module and the vision positioning module, analyzes the received data in the data processing unit and issues the required motion commands, which are then sent through the data transceiver unit to the power module for execution.
It should be noted that the robot has two modes: automatic following and remote manual control. In automatic following mode, the robot navigates fully automatically based on vision; in remote manual control mode, the target can control the robot through an application on a mobile terminal, for example by sending mode-selection commands over Bluetooth or WiFi. In general, for convenience, the robot is set to automatic following mode; when the robot encounters road conditions so complex that automatic following is impossible (for example, the crowd is too dense or a passage is too narrow), it can be switched to manual mode and controlled directly.
Further, the alarm unit reminds the followed person in the following two situations: first, when the distance between the robot and the followed person exceeds a preset safety-distance threshold, the alarm unit starts and the buzzer sounds continuously; second, when the robot cannot resolve an optimal path, that is, there is no directly connected region between the robot and the target (for example, a gully lies between them) or the region is too narrow for the robot to pass, the alarm unit starts and sounds an alarm.
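The two alarm conditions of this embodiment can be sketched as a simple predicate. The threshold value and the use of `None` to represent "no feasible path" are assumptions:

```python
def should_alarm(distance_to_person, path, safety_threshold=5.0):
    """Return True when the robot should sound its buzzer: either the
    followed person is beyond the preset safety distance, or no feasible
    path to the target could be resolved (represented here as None)."""
    return distance_to_person > safety_threshold or path is None

print(should_alarm(3.0, [(0, 0), (0, 1)]))  # False: close and reachable
print(should_alarm(7.5, [(0, 0), (0, 1)]))  # True: beyond the safety threshold
print(should_alarm(3.0, None))              # True: no feasible path
```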
The above are only some embodiments of the disclosure and do not limit its inventive concept. Those skilled in the art can make replacements and modifications without departing from the principle of the disclosed invention, and such variations fall within the protection scope of the disclosure.

Claims (10)

1. A vision-recognition-based automatic following and obstacle-avoidance method, comprising the following steps:
S100: acquiring an image of the target, recognizing the target according to the color of its clothing, calculating the centroid coordinate of a color block from the color-block shape extracted from the clothing color, determining the azimuth of the target relative to the robot from the abscissa of the centroid coordinate and the distance between the target and the robot from the ordinate of the centroid coordinate, thereby completing target positioning;
S200: with the current position known, scanning surrounding obstacle information, and planning a following path according to the azimuth and distance information of the target;
S300: during following, continuously scanning surrounding obstacle information in real time and adjusting the following path when an obstacle is found.
2. The method according to claim 1, wherein step S100 includes the following steps:
S101: obtaining an original YUV color image of the target's clothing;
S102: preprocessing the original YUV color image of the target's clothing;
S103: converting the preprocessed original YUV color image to an RGB color image, then converting the RGB color image to a new YUV color image by separating the luminance and chrominance values, and applying brightness-weakening processing to the new YUV color image;
S104: extracting the color-block colors, the number of color blocks and the color-block shapes from the brightness-weakened new YUV color image;
S105: recognizing the target according to the color-block colors and the number of color blocks;
S106: calculating the centroid coordinate of a color block from its shape, and locating the target according to the centroid coordinate.
3. The method according to claim 2, wherein in step S102 the preprocessing includes performing histogram equalization on the original YUV color image of the target's clothing.
4. The method according to claim 3, wherein the histogram equalization is completed by mapping with the cumulative distribution function, specifically:

s_k = ((L - 1)/n) · Σ_{j=0..k} n_j

where n is the total number of pixels in the image, n_j is the number of pixels at gray level j, and L is the number of possible gray levels in the image.
5. The method according to claim 2, characterized in that in step S103 the preprocessed original YUV color image is converted to an RGB color image by the following formulas:
R = Y + 1.14V
G = Y − 0.39U − 0.58V
B = Y + 2.03U
where R, G, B are the per-channel pixel values in the RGB color space, Y, U, V are the corresponding pixel values in the YUV color space, and R, G, B, Y, U, V all take values in the range 0–255;
the RGB color image is then converted to the new YUV color image by the following formulas:
UU = R − G = 1.72V + 0.39U
VV = B − G = 2.42U + 0.58V
CC = R + G + B = 3Y + 0.56V + 1.64U
where UU, VV, CC are the per-channel pixel values in the new YUV color space.
6. The method according to claim 2, characterized in that in step S103 the brightness weakening of the new YUV color image is completed by the following formulas:
U′ = 15UU / CC
V′ = 15VV / CC
where U′ and V′ denote the chrominance values of the new YUV color image after brightness weakening.
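Claims 5 and 6 together define the color conversion and brightness weakening; a minimal per-pixel sketch in Python, assuming scalar channel values and that U, V are already centered as the formulas imply (function names are illustrative):

```python
def yuv_to_rgb(y, u, v):
    """YUV -> RGB conversion of claim 5 (values nominally 0-255)."""
    r = y + 1.14 * v
    g = y - 0.39 * u - 0.58 * v
    b = y + 2.03 * u
    return r, g, b

def new_yuv(r, g, b):
    """Difference-based channels of claim 5:
    UU = R - G, VV = B - G, CC = R + G + B."""
    return r - g, b - g, r + g + b

def weaken_brightness(uu, vv, cc):
    """Brightness weakening of claim 6: normalize the chroma
    differences by the brightness sum CC, so the result varies
    less with overall illumination."""
    return 15 * uu / cc, 15 * vv / cc
```

Note that UU and VV contain no Y term at all (it cancels in the subtraction), while CC grows with Y; dividing by CC is what shrinks the chroma response under bright illumination.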
7. The method according to claim 2, characterized in that in step S106 calculating the centroid coordinates of the color block according to the color-block shape comprises: obtaining the left-boundary coordinate x1, right-boundary coordinate x2, upper-boundary coordinate y2, and lower-boundary coordinate y1 of the color-block shape; the centroid coordinates are then ((x1+x2)/2, (y1+y2)/2).
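The bounding-box midpoint of claim 7 is a one-liner; `centroid_from_bbox` is an illustrative name:

```python
def centroid_from_bbox(x1, x2, y1, y2):
    """Centroid of a color block approximated by its bounding box,
    as in claim 7: the midpoint of the left/right boundaries and of
    the lower/upper boundaries."""
    return ((x1 + x2) / 2, (y1 + y2) / 2)
```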
8. A robot for automatic following and obstacle avoidance based on visual identification, comprising:
a robot body comprising a housing, several support plates, and an expansion plate, the housing being bolted to the support plates and the expansion plate being connected to the housing by a hinge;
a power module for driving the robot in different directions;
a vision positioning module for acquiring a target image, identifying the target according to the clothing color in the image, calculating the centroid coordinates of the color block from the color-block shape extracted from the clothing color, and positioning the target according to the centroid coordinates;
an obstacle avoidance module for obtaining obstacle information around the robot and feeding it back to the control module;
a control module for planning a following path according to the target position obtained by the vision positioning module and the obstacle information obtained by the obstacle avoidance module.
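The module split of claim 8 can be sketched as a simple control loop: the vision module yields a target fix, the obstacle avoidance module yields obstacle ranges, and the control module plans the heading and commands the power module. All class and method names below are illustrative, not from the patent, and the avoidance rule is a deliberately simple placeholder:

```python
class FollowController:
    """Hypothetical control module tying together the claim-8 modules."""

    def __init__(self, vision, avoider, power):
        self.vision = vision     # vision positioning module
        self.avoider = avoider   # obstacle avoidance module
        self.power = power       # drive/power module

    def step(self):
        fix = self.vision.locate_target()   # (azimuth_deg, distance)
        obstacles = self.avoider.scan()     # list of (angle_deg, range)
        heading = self.plan(fix, obstacles)
        self.power.drive(heading)

    def plan(self, fix, obstacles, safe_range=0.5):
        """Steer toward the target unless an obstacle lies near the
        desired heading; then bias the heading away from it."""
        azimuth, _ = fix
        for angle, rng in obstacles:
            if rng < safe_range and abs(angle - azimuth) < 15:
                return azimuth + (30 if angle <= azimuth else -30)
        return azimuth
```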
9. The robot according to claim 8, characterized in that the robot further comprises a storage module for storing the clothing color parameters of the target.
10. The robot according to claim 8, characterized in that the vision positioning module comprises:
an acquisition unit for obtaining an original YUV color image of the target's clothing;
a preprocessing unit for preprocessing the original YUV color image of the target's clothing;
a conversion unit for converting the preprocessed original YUV color image to an RGB color image, then converting the RGB color image to a new YUV color image, and applying brightness weakening to the new YUV color image;
a feature acquisition unit for extracting the color-block color, color-block count, and color-block shape of the new YUV color image after brightness weakening;
a recognition unit for identifying the target according to the color-block color and color-block count;
a positioning unit for calculating the centroid coordinates of the color block according to the color-block shape and positioning the target according to the centroid coordinates.
CN201910444843.3A 2019-05-27 2019-05-27 Automatic following and obstacle avoidance method and robot based on visual identification Pending CN110103223A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910444843.3A CN110103223A (en) 2019-05-27 2019-05-27 Automatic following and obstacle avoidance method and robot based on visual identification
CN201910757661.1A CN110355765B (en) 2019-05-27 2019-08-16 Automatic following obstacle avoidance method based on visual identification and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910444843.3A CN110103223A (en) 2019-05-27 2019-05-27 Automatic following and obstacle avoidance method and robot based on visual identification

Publications (1)

Publication Number Publication Date
CN110103223A true CN110103223A (en) 2019-08-09

Family

ID=67492264

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910444843.3A Pending CN110103223A (en) 2019-05-27 2019-05-27 Automatic following and obstacle avoidance method and robot based on visual identification
CN201910757661.1A Active CN110355765B (en) 2019-05-27 2019-08-16 Automatic following obstacle avoidance method based on visual identification and robot

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910757661.1A Active CN110355765B (en) 2019-05-27 2019-08-16 Automatic following obstacle avoidance method based on visual identification and robot

Country Status (1)

Country Link
CN (2) CN110103223A (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586363B (en) * 2020-05-22 2021-06-25 深圳市睿联技术股份有限公司 Video file viewing method and system based on object
CN113814952A (en) * 2021-09-30 2021-12-21 西南石油大学 Intelligent logistics trolley

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2715931Y (en) * 2004-07-13 2005-08-10 中国科学院自动化研究所 Apparatus for quick tracing based on object surface color
CN101986348A (en) * 2010-11-09 2011-03-16 上海电机学院 Visual target identification and tracking method
CN102431034B (en) * 2011-09-05 2013-11-20 天津理工大学 Color recognition-based robot tracking method
WO2014093144A1 (en) * 2012-12-10 2014-06-19 Abb Technology Ag Robot program generation for robotic processes
CN103177259B (en) * 2013-04-11 2016-05-18 中国科学院深圳先进技术研究院 Color lump recognition methods
CN106945037A (en) * 2017-03-22 2017-07-14 北京建筑大学 A kind of target grasping means and system applied to small scale robot
CN108829137A (en) * 2018-05-23 2018-11-16 中国科学院深圳先进技术研究院 A kind of barrier-avoiding method and device of robot target tracking

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110712187A (en) * 2019-09-11 2020-01-21 珠海市众创芯慧科技有限公司 Intelligent walking robot based on integration of multiple sensing technologies
CN110619298A (en) * 2019-09-12 2019-12-27 炬佑智能科技(苏州)有限公司 Mobile robot, specific object detection method and device thereof and electronic equipment
CN112784676A (en) * 2020-12-04 2021-05-11 中国科学院深圳先进技术研究院 Image processing method, robot, and computer-readable storage medium
CN112907625A (en) * 2021-02-05 2021-06-04 齐鲁工业大学 Target following method and system applied to four-footed bionic robot
CN113959432A (en) * 2021-10-20 2022-01-21 上海擎朗智能科技有限公司 Method and device for determining following path of mobile equipment and storage medium
CN113959432B (en) * 2021-10-20 2024-05-17 上海擎朗智能科技有限公司 Method, device and storage medium for determining following path of mobile equipment
CN118229260A (en) * 2024-03-06 2024-06-21 广东今程光一电力科技有限责任公司 Rail transit operation and maintenance data processing system based on unmanned inspection

Also Published As

Publication number Publication date
CN110355765A (en) 2019-10-22
CN110355765B (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN110103223A (en) Automatic following and obstacle avoidance method and robot based on visual identification
CN206650757U (en) A kind of device
CN110344621A (en) A kind of wheel points cloud detection method of optic towards intelligent garage
US10180683B1 (en) Robotic platform configured to identify obstacles and follow a user device
CN107618396A (en) Automatic charging system and method
CN110928301A (en) Method, device and medium for detecting tiny obstacles
JP2011129126A (en) Automatic tagging for landmark identification
CN102183250B (en) Automatic navigation and positioning device and method for field road of agricultural machinery
US20220237533A1 (en) Work analyzing system, work analyzing apparatus, and work analyzing program
CN213424010U (en) Mowing range recognition device of mowing robot
CN106162144A (en) A kind of visual pattern processing equipment, system and intelligent machine for overnight sight
CN108646727A (en) A kind of vision cradle and its localization method and recharging method
WO2023173950A1 (en) Obstacle detection method, mobile robot, and machine readable storage medium
CN106863259A (en) A kind of wheeled multi-robot intelligent ball collecting robot
CN107065871A (en) It is a kind of that dining car identification alignment system and method are walked based on machine vision certainly
CN108459596A (en) A kind of method in mobile electronic device and the mobile electronic device
CN110456791A (en) A kind of leg type mobile robot object ranging and identifying system based on monocular vision
KR20220035894A (en) Object recognition method and object recognition device performing the same
KR20200020465A (en) Apparatus and method for acquiring conversion information of coordinate system
Hochdorfer et al. 6 DoF SLAM using a ToF camera: The challenge of a continuously growing number of landmarks
CN111109786A (en) Intelligent obstacle early warning crutch based on deep learning and early warning method thereof
CN206643905U (en) A kind of wheeled multi-robot intelligent ball collecting robot
CN108876798A (en) A kind of stair detection system and method
KR20220049851A (en) Wearable device for putting guide using augmented reality and method therefor
CN117353410A (en) Mobile robot autonomous charging method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190809

WD01 Invention patent application deemed withdrawn after publication