CN110355765B - Automatic following obstacle avoidance method based on visual identification and robot - Google Patents

Automatic following obstacle avoidance method based on visual identification and robot

Info

Publication number
CN110355765B
CN110355765B (application CN201910757661.1A)
Authority
CN
China
Prior art keywords
target
color
color image
robot
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910757661.1A
Other languages
Chinese (zh)
Other versions
CN110355765A (en)
Inventor
杨立娟
徐劲虎
罗杨
王红
郭艳婕
刘金鑫
田绍华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University
Publication of CN110355765A
Application granted
Publication of CN110355765B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic following obstacle avoidance method based on visual identification, comprising the following steps: collecting a target image and identifying and locating the target according to the color of the target's clothes; planning a following path according to the direction and distance information of the target; and scanning the surrounding environment in real time during following, adjusting the following path when necessary. The present disclosure also discloses an automatic following obstacle avoidance robot based on visual identification, comprising a robot body, a power module, a visual positioning module, a control module and an obstacle avoidance module. Because the target is identified and located through the color of its clothes, the followed person does not need to carry a transmitting source or receiving source, electromagnetic interference is reduced, and the following precision is improved; taking only the clothes color information of the target as the object of visual identification reduces the computation required and improves the response speed; and candidate targets can be screened by color block area to reduce the probability of tracking the wrong target.

Description

Automatic following obstacle avoidance method based on visual identification and robot
Technical Field
The disclosure belongs to the field of mechatronic control, and particularly relates to an automatic following obstacle avoidance method based on visual identification and a robot.
Background
When people must carry many objects or heavy objects, they tire quickly, for example when hauling large luggage or shopping in a mall. An automatic following robot can carry these objects while staying with its owner at all times, so it has great development prospects and strong market appeal. However, current domestic automatic following robots require the followed person to carry a signal transmitting source or a signal receiving source paired with the robot's remote control, cannot keep the followed target continuously detected while avoiding obstacles, and cannot adapt to varied and complex road conditions.
Disclosure of Invention
In view of the above, an object of the present disclosure is to provide an automatic following obstacle avoidance robot based on visual recognition that follows a target by recognizing the color of the target's clothes, discriminates the surrounding environment in real time to avoid obstacles, and adapts to different road conditions.
The purpose of the disclosure is realized by the following technical scheme:
an automatic following obstacle avoidance method based on visual identification comprises the following steps:
s100: collecting a target image, identifying the target according to the clothes color of the target, calculating the centroid coordinate of a color block according to the shape of the color block extracted from the clothes color, determining the direction of the target relative to the robot according to the abscissa of the centroid coordinate, determining the distance between the target and the robot according to the ordinate of the centroid coordinate, and completing target positioning;
s200: scanning peripheral obstacle information on the premise of acquiring the current position, and planning a following path according to the direction and distance information of the target;
s300: in the following process, peripheral obstacle information is continuously scanned in real time, and a following path is adjusted when an obstacle is found.
Preferably, the step S100 includes the steps of:
s101: acquiring an original YUV color image of a target garment;
s102: preprocessing the original YUV color image of the target clothes;
s103: converting the preprocessed original YUV color image into an RGB color image, separating the brightness value and the chromatic value in the RGB color image to convert the RGB color image into a new YUV color image, and performing brightness weakening processing on the new YUV color image;
s104: extracting color block colors, the number of color blocks and the shapes of the color blocks of the new YUV color image after the brightness weakening treatment;
s105: identifying a target according to the color block colors and the number of the color blocks;
s106: and calculating the centroid coordinate of the color block according to the shape of the color block, and positioning the target according to the centroid coordinate.
Preferably, in step S102, the preprocessing includes performing histogram equalization on the original YUV color image of the target garment.
Preferably, the histogram equalization is performed by mapping using a cumulative distribution function, and specifically includes:
s_k = ((L-1)/n) × Σ_{j=0}^{k} n_j
where n is the total number of pixels in the image, n_j is the number of pixels at gray level j, k is the current gray level, s_k is the equalized output level, and L is the total number of possible gray levels in the current image.
Preferably, in step S103, the preprocessed raw YUV color image is converted into an RGB color image by the following formula,
R=Y+1.14V
G=Y-0.39U-0.58V
B=Y+2.03U
where R, G, B are the pixel values of each channel of a pixel point in the RGB color space, Y, U, V are the pixel values in the corresponding YUV color space, and R, G, B, Y, U and V all range between 0 and 255;
the RGB color image is then converted into a new YUV color image with reduced luminance effects by:
UU=R-G=1.72V+0.39U
VV=B-G=2.42U+0.58V
CC=R+G+B=3Y+0.56V+1.64U
where UU, VV and CC are the pixel values of each channel of the pixel point in the new YUV color space that weakens the influence of brightness.
Preferably, in step S103, the brightness weakening processing on the new YUV color image is performed by the following formula:
U’=(15UU)/CC
V’=(15VV)/CC
where U′ and V′ represent the chroma values of the new YUV color image after brightness weakening.
Preferably, in step S106, calculating the centroid coordinates of the color block according to the color block shape includes: acquiring the left boundary coordinate x1, the right boundary coordinate x2, the upper boundary coordinate y2 and the lower boundary coordinate y1 of the color block outline, respectively; the centroid coordinate is then ((x1+x2)/2, (y1+y2)/2).
The present disclosure also provides a robot that follows and keeps away barrier automatically based on visual identification, include:
the robot body, comprising a shell, a plurality of supporting plates and an expansion plate;
the power module is used for driving the robot to move along different directions;
the visual positioning module is used for acquiring a target image, identifying the target according to the clothes color of the image, calculating the centroid coordinate of a color block according to the shape of the color block extracted from the clothes color and positioning the target according to the centroid coordinate;
the obstacle avoidance module is used for acquiring the information of obstacles around the robot and feeding the information back to the control module;
and the control module is used for planning a following path according to the positioning of the visual positioning module on the target and the environment information acquired by the obstacle avoidance module.
Preferably, the robot further comprises a storage module for storing the target clothes color parameters.
Preferably, the visual positioning module comprises:
the acquiring unit is used for acquiring an original YUV color image of the target clothes;
the preprocessing unit is used for preprocessing the original YUV color image of the target clothes;
the conversion unit is used for converting the preprocessed original YUV color image into an RGB color image, converting the RGB color image into a new YUV color image and performing brightness weakening processing on the new YUV color image;
the feature acquisition unit is used for extracting color block colors, the number of color blocks and the shapes of the color blocks of the new YUV color image after the brightness weakening processing;
the identification unit is used for identifying a target according to the color block color and the number of the color blocks;
and the positioning unit is used for calculating the mass center coordinate of the color block according to the shape of the color block and positioning the target according to the mass center coordinate.
Compared with the prior art, the present disclosure brings the following beneficial effects:
1. The target is identified and located visually, so the followed person does not need to carry a transmitting source or receiving source; electromagnetic interference is reduced and the following precision is improved;
2. The system achieves automatic positioning and obstacle avoidance and can be applied in many scenes.
Drawings
Fig. 1 is a flowchart of an automatic following obstacle avoidance method based on visual recognition according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for identifying and locating an object based on a color of the target garment according to another embodiment of the present disclosure;
fig. 3 is a front view of an automatic following obstacle avoidance robot based on visual recognition provided by the present disclosure;
fig. 4 is a side view of an automatic following obstacle avoidance robot based on visual recognition provided by the present disclosure;
fig. 5 is a schematic structural diagram of a visual positioning module in a robot provided by the present disclosure.
Detailed Description
The technical solutions of the present disclosure are described in detail below with reference to the accompanying drawings and embodiments; the present disclosure may be embodied in many different forms.
Referring to fig. 1, the present disclosure provides an automatic following obstacle avoidance method based on visual identification, including the following steps:
s100: collecting a target image, identifying the target according to the clothes color of the target, calculating the centroid coordinate of a color block according to the shape of the color block extracted from the clothes color, determining the direction of the target relative to the robot according to the abscissa of the centroid coordinate, determining the distance between the target and the robot according to the ordinate of the centroid coordinate, and completing target positioning;
s200: scanning peripheral obstacle information on the premise of acquiring the current position, and planning a following path according to the direction and distance information of the target;
s300: in the following process, the surrounding obstacle environment is continuously scanned in real time, and the following path is adjusted when the obstacle is found.
The above steps constitute the complete technical solution of the present disclosure: the target is identified and located by collecting the color of the clothes in the image. Unlike the prior art, the followed person does not need to carry a transmitting source or receiving source, which reduces electromagnetic interference and improves the following precision; moreover, restricting visual recognition to the target's clothes color information reduces the computation required and improves the response speed.
In another embodiment, as shown in fig. 2, the step S100 includes the following steps:
s101: acquiring an original YUV color image of a target garment;
s102: preprocessing the original YUV color image of the target clothes;
s103: converting the preprocessed original YUV color image into an RGB color image, separating the brightness value and the chromatic value in the RGB color image to convert the RGB color image into a new YUV color image, and performing brightness weakening processing on the new YUV color image;
s104: extracting color block colors, the number of color blocks and the shapes of the color blocks of the new YUV color image after the brightness weakening treatment;
s105: identifying a target according to the color block colors and the number of the color blocks;
s106: and calculating the centroid coordinate of the color block according to the shape of the color block, and positioning the target according to the centroid coordinate.
In this embodiment, a plurality of color blocks in the YUV color image are obtained according to a predetermined color combination in the color space; a color block is a region of uniform color in the image, and the threshold range of the target clothes image is obtained by counting the block colors and the number of blocks of each color. The predetermined color combination serves to distinguish colors, and several colors may be defined over the color space; for example, the combination may be defined as red, yellow and blue, meaning that only those three colors on the target's clothes are recognized. Those skilled in the art will understand that too many colors in the predetermined combination increase the computation of the target recognition process and thereby reduce recognition efficiency.
Furthermore, color blocks of the same color within the threshold range are merged into distinct block regions, and the geometry of the target clothes image is obtained by calculating the shape of each region and its proportion within the threshold range. The followed target is identified by comparing the color block colors and the number of color blocks with the stored clothes color parameters, as shown in the sketch below.
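As an illustrative sketch only (the patent gives no code), matching the extracted color block features against the stored clothes color parameters might look as follows; the parameter names, the exact-count matching rule and the area threshold used for screening are assumptions:

    from collections import Counter

    STORED_CLOTHES_PARAMS = {"red": 2, "blue": 1}  # assumed stored reference
    MIN_BLOCK_AREA = 400                           # assumed screening threshold, in pixels

    def identify_target(blocks):
        """blocks: list of (color_name, pixel_area) tuples extracted from
        the brightness-weakened image. Blocks below the area threshold are
        screened out, then block colors and counts are compared with the
        stored clothes color parameters."""
        kept = [color for color, area in blocks if area >= MIN_BLOCK_AREA]
        counts = Counter(kept)
        return all(counts.get(color, 0) == n
                   for color, n in STORED_CLOTHES_PARAMS.items())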
Furthermore, the centroid coordinates are calculated precisely from the color block shapes, which completes target positioning and yields the position and direction information of the target.
In another embodiment, in step S102, the preprocessing includes histogram equalization of the raw YUV color image of the target garment.
In this embodiment, histogram equalization of the original YUV color image of the target clothes enhances the contrast between the clothes color and the surrounding image, so the target clothes color is identified more accurately and some interfering imagery is filtered out.
In another embodiment, the histogram equalization is performed by mapping using a cumulative distribution function, which specifically includes:
s_k = ((L-1)/n) × Σ_{j=0}^{k} n_j (1)
where n is the total number of pixels in the image, n_j is the number of pixels at gray level j, k is the current gray level, s_k is the equalized output level, and L is the total number of possible gray levels in the current image.
In this embodiment, this mapping enhances the contrast between the target clothes image and the surrounding image, improving the positioning precision so that the target is located accurately.
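A minimal numpy sketch of this cumulative-distribution mapping, applied to one 8-bit channel, might look as follows; the function name and the choice of 256 gray levels are assumptions:

    import numpy as np

    def equalize_histogram(channel: np.ndarray) -> np.ndarray:
        """Histogram equalization of one 8-bit channel via the CDF mapping
        s_k = ((L-1)/n) * sum_{j<=k} n_j, with L = 256 gray levels."""
        hist = np.bincount(channel.ravel(), minlength=256)  # n_j per gray level
        cdf = hist.cumsum()                                 # running sum of n_j
        n = cdf[-1]                                         # total pixel count
        mapping = np.round(255 * cdf / n).astype(np.uint8)  # level k -> s_k
        return mapping[channel]                             # remap every pixel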
In another embodiment, in step S103, the preprocessed raw YUV color image is converted into an RGB color image by the following formula,
R=Y+1.14V
G=Y-0.39U-0.58V (2)
B=Y+2.03U
wherein R, G, B are the pixel values of each channel of the pixel point in the RGB color space, Y, U, V are the pixel values in the corresponding YUV color space, and R, G, B, Y, U and V all range between 0 and 255.
The RGB color image is then converted into a new YUV color image by:
UU=R-G=1.72V+0.39U
VV=B-G=2.42U+0.58V (3)
CC=R+G+B=3Y+0.56V+1.64U
where UU, VV and CC are the pixel values of each channel of the pixel point in the new YUV color space.
In another embodiment, the brightness reduction of the new YUV color image is performed by the following formula:
U’=(15UU)/CC
V’=(15VV)/CC (4)
where U′ and V′ represent the chroma values of the new YUV color image after brightness weakening.
In this embodiment, the clothes color captured by the camera can differ considerably under different lighting, so the influence of brightness on the target image must be weakened. Formula (3) isolates chroma-dominated channels, and formula (4) completes the processing needed to account for the effect of brightness on the clothes color: the U′ and V′ values are obtained by amplifying UU and VV and dividing them by CC, which contains the luminance term 3Y, so the interference of brightness on the target color is weakened.
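A hedged numpy sketch chaining formulas (2), (3) and (4) on float channel arrays might look as follows; the small epsilon guarding against division by zero is an assumption not stated in the patent:

    import numpy as np

    def weaken_brightness(y, u, v, eps=1e-6):
        """Apply formulas (2)-(4): YUV -> RGB -> (UU, VV, CC) -> (U', V')."""
        # Formula (2): YUV to RGB
        r = y + 1.14 * v
        g = y - 0.39 * u - 0.58 * v
        b = y + 2.03 * u
        # Formula (3): new color space; CC carries the luminance term 3Y
        uu = r - g          # = 1.72V + 0.39U
        vv = b - g          # = 2.42U + 0.58V
        cc = r + g + b      # = 3Y + 0.56V + 1.64U
        # Formula (4): amplify chroma and normalize by CC to weaken brightness
        u_prime = 15 * uu / (cc + eps)
        v_prime = 15 * vv / (cc + eps)
        return u_prime, v_prime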
In another embodiment, in step S106, calculating the centroid coordinates of the color block according to the color block shape includes: acquiring the left boundary coordinate x1, the right boundary coordinate x2, the upper boundary coordinate y2 and the lower boundary coordinate y1 of the color block outline, respectively; the centroid coordinate is then ((x1+x2)/2, (y1+y2)/2).
In this embodiment, positive and negative values of the abscissa indicate that the robot lies to the left or the right of the followed target, respectively, and the absolute value of the abscissa indicates the degree of deviation; the sign of the ordinate indicates whether the robot is near to or far from the followed target, and the absolute value of the ordinate indicates the magnitude of the distance.
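For illustration, the bounding-box centroid and its interpretation as direction and distance could be sketched as below; expressing the centroid relative to the image center, so that its coordinates can take positive and negative values, is an assumption consistent with this paragraph:

    def locate_target(x1, x2, y1, y2, img_w, img_h):
        """Centroid per the patent: ((x1+x2)/2, (y1+y2)/2), shifted to an
        image-center origin so the sign of the abscissa encodes left/right
        deviation and the ordinate reflects near/far."""
        cx = (x1 + x2) / 2.0 - img_w / 2.0  # sign: left or right; |cx|: deviation
        cy = (y1 + y2) / 2.0 - img_h / 2.0  # sign: near or far; |cy|: distance
        return cx, cy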
In another embodiment, the present disclosure further provides a robot for automatic following and obstacle avoidance based on visual recognition, including:
the robot body, comprising a shell, a plurality of supporting plates and an expansion plate;
the power module is used for driving the robot to move along different directions;
the visual positioning module is used for acquiring a target image, identifying the target according to the clothes color of the image, calculating the centroid coordinate of a color block according to the shape of the color block extracted from the clothes color and positioning the target according to the centroid coordinate;
the obstacle avoidance module is used for acquiring the information of obstacles around the robot and feeding the information back to the control module;
and the control module is used for planning a following path according to the positioning of the visual positioning module on the target and the peripheral obstacle information acquired by the obstacle avoidance module.
In another embodiment, the robot further comprises a storage module for storing the target garment color parameters.
In this embodiment, the clothes color parameters of the target must be stored before following begins, so that once following starts the robot can match the collected clothes color against the stored parameters as a reference.
In another embodiment, the visual localization module comprises:
the acquiring unit is used for acquiring an original YUV color image of the target clothes;
the preprocessing unit is used for preprocessing the original YUV color image of the target clothes;
the conversion unit is used for converting the preprocessed original YUV color image into an RGB color image, converting the RGB color image into a new YUV color image and performing brightness weakening processing on the new YUV color image;
the feature acquisition unit is used for extracting the color block colors, the number of color blocks and the shapes of the color blocks of the new YUV color image after the brightness weakening processing;
the identification unit is used for identifying a target according to the color block color and the number of the color blocks;
and the positioning unit is used for calculating the mass center coordinate of the color block according to the shape of the color block and positioning the target according to the mass center coordinate.
In another embodiment, as shown in fig. 3, the support plates include a first support plate 1, a second support plate 2, and a third support plate 3; the first supporting plate 1, the second supporting plate 2 and the third supporting plate 3 are sequentially connected through copper columns;
in this embodiment, the housing is made of plastic to reduce the weight and improve the motion flexibility of the robot. The both sides of shell and front side fluting, the internal sensor of being convenient for is mutual with the external world, and pastes around the shell and has buffer material (for example sponge, polystyrene foam, air cushion film etc.) for reduce the impact force because of striking suddenly and bring. The supporting plate is in a geometrical shape, preferably a square shape or a rectangular shape, and the front side of the supporting plate is in a triangular shape, so that the possibility of collision between the robot body and an obstacle when the robot turns can be reduced. The first supporting plate 1 is made of aluminum alloy, so that the bottom weight of the robot is increased conveniently, the stability is improved, and the first supporting plate is used for supporting the power module and the control module; the second supporting plate 2 and the third supporting plate 3 are made of acrylic materials, can reduce the weight of the upper portion of the robot, and are respectively used for supporting the obstacle avoidance module and the vision positioning module.
In another embodiment, the power module comprises wheels 4, a DC gear motor 5 and a driving unit, where each wheel 4 is connected to an output shaft of the DC gear motor 5, and the DC gear motor 5 is wired to the driving unit.
In this embodiment, the wheels 4 are preferably Mecanum wheels, which permit forward, lateral, oblique, rotational and combined movements, so the robot can flexibly avoid obstacles encountered en route. The driving unit uses a single driver chip embedded in the expansion board; it drives the DC gear motor 5 forward and backward and controls its rotating speed.
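The patent does not specify the wheel kinematics; as a reference sketch only, the textbook inverse kinematics for a four-Mecanum-wheel base is shown below. The axis and roller-orientation sign conventions and the geometry values are assumptions:

    def mecanum_wheel_speeds(vx, vy, wz, half_length=0.10, half_width=0.10):
        """Map a desired body velocity (vx forward, vy lateral, wz yaw rate)
        to the four wheel speeds of a Mecanum base, one common convention."""
        k = half_length + half_width
        front_left  = vx - vy - k * wz
        front_right = vx + vy + k * wz
        rear_left   = vx + vy - k * wz
        rear_right  = vx - vy + k * wz
        return front_left, front_right, rear_left, rear_right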
In another embodiment, the obstacle avoidance module 8 includes: a detection unit for acquiring obstacle information within a 360-degree range around the robot; and a distance measuring unit for acquiring the distance between the robot and surrounding obstacles.
In this embodiment, the detection unit collects information about obstacles within the robot's scanning range. For example, when the robot follows outdoors, it can detect dynamic obstacles such as pedestrians and vehicles and static obstacles such as trees, utility poles and steps; indoors, it can detect dynamic obstacles such as children and pets and static obstacles such as tables, chairs and household appliances. The detection unit transmits the collected obstacle information to the control module, which generates a two-dimensional plane map and plans a path according to the obstacle positions on the map to obtain the shortest obstacle-avoiding route to the target.
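The patent only states that a two-dimensional plane map is generated; one plausible minimal realization, projecting the detection unit's (angle, distance) returns into an occupancy grid centered on the robot, is sketched below, with the grid resolution and size as assumptions:

    import numpy as np

    def scan_to_grid(angles_deg, dists_m, cell_m=0.05, size=200):
        """Project 360-degree (angle, distance) returns into a simple 2-D
        occupancy grid centered on the robot; occupied cells are set to 1."""
        grid = np.zeros((size, size), dtype=np.uint8)
        a = np.radians(np.asarray(angles_deg, dtype=float))
        d = np.asarray(dists_m, dtype=float)
        ix = (size // 2 + d * np.cos(a) / cell_m).astype(int)
        iy = (size // 2 + d * np.sin(a) / cell_m).astype(int)
        ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
        grid[iy[ok], ix[ok]] = 1   # mark cells containing obstacle returns
        return grid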
In another embodiment, the detection unit comprises a laser radar sensor for scanning the angle and distance information of the obstacles in the 360-degree range around the robot in real time.
In this embodiment, laser pulses are emitted by a laser diode, reflected by the target and scattered in all directions. Part of the scattered light returns to the sensor, and the target distance is determined by recording and processing the time elapsed from emission of the light pulse to receipt of the return.
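This is the familiar time-of-flight relation d = c·t/2, where the factor of 2 accounts for the round trip; as a one-line sketch:

    C = 299_792_458.0  # speed of light, m/s

    def tof_distance(elapsed_s: float) -> float:
        """One-way target distance from a round-trip laser time of flight."""
        return C * elapsed_s / 2.0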
In another embodiment, the distance measuring unit comprises an ultrasonic distance meter, a first infrared distance meter and a second infrared distance meter; the ultrasonic distance measuring instrument is used for acquiring the distance information of an obstacle in front of the robot in real time; the first infrared distance measuring instrument and the second infrared distance measuring instrument are respectively arranged on two sides of the robot and used for acquiring barrier distance information of the two sides.
In another embodiment, the control module 9 comprises: a data receiving and transmitting unit, comprising a wireless transceiver, for receiving and transmitting feedback from the obstacle avoidance module and the visual positioning module in real time; a data processing unit, comprising a micro PC or a single-chip microcomputer, for analyzing the feedback forwarded by the data receiving unit and issuing instructions; and an alarm unit, comprising a buzzer, for alerting the target when the robot enters an uncontrollable state.
In this embodiment, the wireless transceiver is preferably an NRF24L01, which offers a large data buffer and a high refresh rate; the micro PC or single-chip microcomputer is preferably a Raspberry Pi, an STM32 or a device with similar data processing capability. The micro PC receives information from the obstacle avoidance module and the visual positioning module in real time through the data receiving unit, analyzes the received data in the data processing unit to produce a motion instruction, and sends the instruction to the power module for execution through the data transmitting unit.
The robot supports an automatic following mode and a manual remote-control following mode. In automatic mode the robot moves based on visual navigation; in manual mode the target controls the robot through an application on a mobile terminal, sending mode-selection commands via Bluetooth, WIFI and the like. For convenience the robot is normally set to automatic following; when road conditions are too complex for automatic following (for example, dense crowds or passages too narrow to traverse), it can be switched to manual mode so that the target controls it directly.
Further, the alarm unit typically alerts the follower in two cases. First, when the distance between the robot and the followed person exceeds a preset safety threshold, the alarm unit starts and its built-in buzzer sounds continuously. Second, when the robot cannot compute a feasible path, that is, when no traversable region connects the robot and the target (for example, a gully lies between them) or the passage is too narrow for the robot, the alarm unit starts and sounds an alarm.
The above are only some embodiments of the present disclosure and do not limit its inventive concept; those skilled in the art may make substitutions and modifications without departing from the principle of the inventive concept, and all such variants fall within the scope of the present disclosure.

Claims (8)

1. A method for automatic following and obstacle avoidance based on visual identification comprises the following steps:
s100: collecting a target image, identifying the target according to the clothes color of the target, calculating the centroid coordinate of a color block according to the shape of the color block extracted from the clothes color, determining the direction of the target relative to the robot according to the abscissa of the centroid coordinate, determining the distance between the target and the robot according to the ordinate of the centroid coordinate, and completing target positioning;
the step S100 includes the steps of:
s101: acquiring an original YUV color image of a target garment;
s102: preprocessing the original YUV color image of the target clothes;
s103: converting the preprocessed original YUV color image into an RGB color image, separating the brightness value and the chromatic value in the RGB color image to convert the RGB color image into a new YUV color image, and performing brightness weakening processing on the new YUV color image;
s104: extracting color block colors, the number of color blocks and the shapes of the color blocks of the new YUV color image after the brightness weakening treatment;
s105: identifying a target according to the color block colors and the number of the color blocks;
s106: calculating the centroid coordinate of the color block according to the shape of the color block, and positioning the target according to the centroid coordinate;
s200: scanning peripheral obstacle information on the premise of acquiring the current position, and planning a following path according to the direction and the distance of the target;
s300: in the following process, peripheral obstacle information is continuously scanned in real time, and a following path is adjusted when an obstacle is found.
2. The method according to claim 1, wherein in step S102, the preprocessing comprises histogram equalization of raw YUV color images of the target garment.
3. The method according to claim 2, wherein the histogram equalization is performed by mapping using a cumulative distribution function, specifically comprising:
s_k = ((L-1)/n) × Σ_{j=0}^{k} n_j
where n is the total number of pixels in the image, n_j is the number of pixels at gray level j, k is the current gray level, s_k is the equalized output level, and L is the total number of possible gray levels in the current image.
4. The method according to claim 1, wherein in step S103, the preprocessed original YUV color image is converted into an RGB color image by the following formula,
R=Y+1.14V
G=Y-0.39U-0.58V
B=Y+2.03U
wherein R, G, B are the pixel values of each channel of a pixel point in the RGB color space, Y, U, V are the pixel values in the corresponding YUV color space, and R, G, B, Y, U and V all range between 0 and 255;
the RGB color image is then converted into a new YUV color image by:
UU=R-G=1.72V+0.39U
VV=B-G=2.42U+0.58V
CC=R+G+B=3Y+0.56V+1.64U
wherein UU, VV and CC are the pixel values of each channel of the pixel point in the new YUV color space.
5. The method according to claim 1, wherein in step S103, the brightness reduction process for the new YUV color image is performed according to the following formula:
U’=(15UU)/CC
V’=(15VV)/CC
wherein U′ and V′ represent the chroma values of the new YUV color image after brightness weakening.
6. The method of claim 1, wherein in step S106, calculating the centroid coordinates of a color block according to the color block shape comprises: acquiring the left boundary coordinate x1, the right boundary coordinate x2, the upper boundary coordinate y2 and the lower boundary coordinate y1 of the color block shape, respectively, the centroid coordinate being ((x1+x2)/2, (y1+y2)/2).
7. A robot capable of automatically following and avoiding obstacles based on visual recognition comprises: the robot comprises a robot body, a power module, a visual positioning module, an obstacle avoidance module and a control module; wherein the content of the first and second substances,
the robot body comprises a shell, a plurality of supporting plates and expansion plates, wherein the shell is connected with the supporting plates through bolts, and the expansion plates are connected with the shell through hinges;
the power module is used for driving the robot to move along different directions;
the visual positioning module is used for acquiring a target image, identifying the target according to the clothes color of the image, calculating the centroid coordinate of a color block according to the shape of the color block extracted from the clothes color, and positioning the target according to the centroid coordinate;
the visual positioning module comprises:
the acquiring unit is used for acquiring an original YUV color image of the target clothes;
the preprocessing unit is used for preprocessing the original YUV color image of the target clothes;
the conversion unit is used for converting the preprocessed original YUV color image into an RGB color image, converting the RGB color image into a new YUV color image and performing brightness weakening processing on the new YUV color image;
the feature acquisition unit is used for extracting color block colors, the number of color blocks and the shapes of the color blocks of the new YUV color image after the brightness weakening processing;
the identification unit is used for identifying a target according to the color block color and the number of the color blocks;
the positioning unit is used for calculating the mass center coordinate of the color block according to the shape of the color block and positioning the target according to the mass center coordinate;
the obstacle avoidance module is used for acquiring the information of obstacles around the robot and feeding the information back to the control module;
the control module is used for planning a following path according to the positioning of the visual positioning module on the target and the peripheral obstacle information acquired by the obstacle avoidance module.
8. A robot as claimed in claim 7, further comprising a storage module for storing the target garment colour parameters.
CN201910757661.1A 2019-05-27 2019-08-16 Automatic following obstacle avoidance method based on visual identification and robot Active CN110355765B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910444843.3
CN201910444843.3A 2019-05-27 2019-05-27 A visual-recognition-based automatic following obstacle avoidance method and robot

Publications (2)

Publication Number Publication Date
CN110355765A CN110355765A (en) 2019-10-22
CN110355765B (en) 2020-12-25

Family

ID=67492264

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910444843.3A Pending CN110103223A (en) 2019-05-27 2019-05-27 A visual-recognition-based automatic following obstacle avoidance method and robot
CN201910757661.1A Active CN110355765B (en) 2019-05-27 2019-08-16 Automatic following obstacle avoidance method based on visual identification and robot

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910444843.3A Pending CN110103223A (en) 2019-05-27 2019-05-27 A visual-recognition-based automatic following obstacle avoidance method and robot

Country Status (1)

Country Link
CN (2) CN110103223A (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110712187A (en) * 2019-09-11 2020-01-21 珠海市众创芯慧科技有限公司 Intelligent walking robot based on integration of multiple sensing technologies
CN110619298A (en) * 2019-09-12 2019-12-27 炬佑智能科技(苏州)有限公司 Mobile robot, specific object detection method and device thereof and electronic equipment
CN111586363B (en) * 2020-05-22 2021-06-25 深圳市睿联技术股份有限公司 Video file viewing method and system based on object
CN112784676A (en) * 2020-12-04 2021-05-11 中国科学院深圳先进技术研究院 Image processing method, robot, and computer-readable storage medium
CN112907625B (en) * 2021-02-05 2023-04-28 齐鲁工业大学 Target following method and system applied to quadruped bionic robot
CN113814952A (en) * 2021-09-30 2021-12-21 西南石油大学 Intelligent logistics trolley
CN113959432B (en) * 2021-10-20 2024-05-17 上海擎朗智能科技有限公司 Method, device and storage medium for determining following path of mobile equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2715931Y (en) * 2004-07-13 2005-08-10 中国科学院自动化研究所 Apparatus for quick tracing based on object surface color
CN101986348A (en) * 2010-11-09 2011-03-16 上海电机学院 Visual target identification and tracking method
CN102431034B (en) * 2011-09-05 2013-11-20 天津理工大学 Color recognition-based robot tracking method
WO2014093144A1 (en) * 2012-12-10 2014-06-19 Abb Technology Ag Robot program generation for robotic processes
CN103177259B (en) * 2013-04-11 2016-05-18 中国科学院深圳先进技术研究院 Color lump recognition methods
CN106945037A (en) * 2017-03-22 2017-07-14 北京建筑大学 A kind of target grasping means and system applied to small scale robot
CN108829137A (en) * 2018-05-23 2018-11-16 中国科学院深圳先进技术研究院 A kind of barrier-avoiding method and device of robot target tracking

Also Published As

Publication number Publication date
CN110103223A (en) 2019-08-09
CN110355765A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110355765B (en) Automatic following obstacle avoidance method based on visual identification and robot
US11620835B2 (en) Obstacle recognition method and apparatus, storage medium, and electronic device
CN102944224B (en) Work method for automatic environmental perception systemfor remotely piloted vehicle
WO2020199589A1 (en) Recharging control method for desktop robot
CN207488823U (en) A kind of mobile electronic device
US20210295060A1 (en) Apparatus and method for acquiring coordinate conversion information
CN105014675B (en) A kind of narrow space intelligent mobile robot vision navigation system and method
CN205843680U (en) A kind of orchard robotic vision navigation system
CN213424010U (en) Mowing range recognition device of mowing robot
CN113085896A (en) Auxiliary automatic driving system and method for modern rail cleaning vehicle
WO2018228254A1 (en) Mobile electronic device and method for use in mobile electronic device
Yamauchi Autonomous urban reconnaissance using man-portable UGVs
CN111968132A (en) Panoramic vision-based relative pose calculation method for wireless charging alignment
US20220237533A1 (en) Work analyzing system, work analyzing apparatus, and work analyzing program
WO2022004494A1 (en) Industrial vehicle
CN113081525A (en) Intelligent walking aid equipment and control method thereof
CN209560367U (en) A kind of automatic obstacle-avoiding intelligence weed-eradicating robot
Hudecek et al. A system for precise positioning of vehicles aiming at increased inductive charging efficiency
CN116339326A (en) Autonomous charging positioning method and system based on stereoscopic camera
Garcia-Alegre et al. Real-time fusion of visual images and laser data images for safe navigation in outdoor environments
CN112486173B (en) Self-walking equipment operation boundary acquisition method and self-walking equipment
CN107672586A (en) A kind of electric automobile intelligent car-backing accessory system and method
Yang et al. A new algorithm for obstacle segmentation in dynamic environments using a RGB-D sensor
Aggarwal et al. Vision based collision avoidance by plotting a virtual obstacle on depth map
CN217039972U (en) Outdoor independent work's rubbish cleans machine people

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant