CN108789421B - Cloud robot interaction method based on cloud platform, cloud robot and cloud platform - Google Patents

Cloud robot interaction method based on cloud platform, cloud robot and cloud platform

Info

Publication number
CN108789421B
Authority
CN
China
Prior art keywords
cloud
robot
cloud robot
surrounding environment
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811032020.1A
Other languages
Chinese (zh)
Other versions
CN108789421A (en)
Inventor
庄礼鸿
王宇环
徐敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University of Technology
Original Assignee
Xiamen University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University of Technology
Priority to CN201811032020.1A
Publication of CN108789421A
Application granted
Publication of CN108789421B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: characterised by motion, path, trajectory planning
    • B25J9/1666: Avoiding collision or forbidden zones
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a cloud robot interaction method based on a cloud platform, a cloud robot and the cloud platform. In the method, the cloud robot processes the acquired images of the surrounding environment, judges whether an obstacle exists in the surrounding environment, receives planned obstacle avoidance path information and completes obstacle avoidance according to it, and interacts with the cloud platform through the Transmission Control Protocol/Internet Protocol (TCP/IP), both sending information to the cloud platform and receiving information sent by the cloud platform. In this way, the cloud robot can effectively identify and avoid obstacles, data transmission is stable, and the probability of packet loss during data transmission is low.

Description

Cloud robot interaction method based on cloud platform, cloud robot and cloud platform
Technical Field
The invention relates to the technical field of robots, in particular to a cloud robot interaction method based on a cloud platform, a cloud robot and the cloud platform.
Background
The cloud robot is the combination of cloud computing and robotics. Like other network terminals, the robot itself neither needs to store all data nor possess powerful computing capability; it only needs to submit its requirements to a cloud platform, which responds correspondingly and satisfies them.
The cloud robot does not refer to a particular robot or class of robots, but to an academic concept describing how robots store and acquire information. This mode of information access has obvious advantages: for example, a robot can capture photos of its surroundings with its camera and upload them to the cloud platform; the cloud platform can retrieve similar photos, compute a travel path for the robot that avoids obstacles, and store the information so that other robots can retrieve it. All robots can share the database, which reduces developers' development time.
The idea of connecting a robot to an external computer appeared in the 1990s, when Inaba of the University of Tokyo proposed the concept of the "remote brain". The cloud robot concept deepens this idea further, exploring and realizing cheaper computation and interconnecting with the ubiquitous network.
At the Humanoids 2010 conference, Dr. Kuffner of Carnegie Mellon University (now at Google) first proposed the concept of the cloud robot, prompting extensive discussion. In Kuffner's view, the cloud robot is simply the combination of cloud computing and robotics: like other network terminals, the robot itself does not need to store all information or possess powerful computing capability, and it connects to the relevant servers and obtains the required information only when needed.
The super computing and mass storage capacity of cloud computing is gradually overturning traditional application modes. In 2010, at the International Symposium on Service-Oriented System Engineering, Arizona State University and Tsinghua University proposed a RaaS (Robot as a Service) model based on service-oriented architecture (SOA) and oriented to service robots, in which each robot serves as a RaaS unit with a certain degree of autonomy and provides corresponding services to users. The SOA-based robot system extends the service mode of cloud computing and brings robots into the cloud computing era.
In 2012, Kamei et al. proposed a shopping-mall wheelchair robot that shares map information through a cloud platform and uses a cloud framework for positioning and navigation, helping people with limited mobility to visit the mall.
In 2013, the ASORO laboratory in Singapore established a cloud computing architecture with which a robot can construct a 3D map of its current environment. The University of California, Berkeley, based on a cloud platform, used Willow Garage's PR2 robot and the Google object recognition engine to complete a 3D robot grasping task.
In 2014, Eindhoven University of Technology in the Netherlands demonstrated the RoboEarth project, in which four robots cooperated in a simulated hospital environment to take care of patients, sharing information and learning from one another through interaction with a cloud server.
In 2014, a "good (KeJia)" robot at china science and technology university located in pittsburgh in china and a "treasure" (CoBot) robot at canary-mellon university located in pittsburgh in the united states realize remote cooperation and resource sharing tests by means of a cloud platform. In the experiment, the cloud provides various knowledge sources and data sources for the two robots, semantic understanding and automatic planning services are transmitted to 'good' and big data analysis services are transmitted to 'good'. With the help of the knowledge sharing and the remote cooperation, "good" and "good" complete the testing task which cannot be completed by the independent work.
However, when existing cloud robots interact, they cannot effectively identify and avoid obstacles, data transmission is unstable, and the probability of packet loss during data transmission is high.
Disclosure of Invention
In view of this, the invention aims to provide a cloud robot interaction method based on a cloud platform, a cloud robot and a cloud platform, which enable the cloud robot to effectively identify and avoid obstacles, with stable data transmission and a low probability of packet loss during data transmission.
According to an aspect of the present invention, there is provided a cloud robot interaction method based on a cloud platform, including: the first cloud robot acquires an image of the surrounding environment with its camera; the first cloud robot performs image processing on the acquired image of the surrounding environment and judges whether an obstacle exists in the surrounding environment; when it judges that an obstacle exists, the first cloud robot acquires the position information of the obstacle and sends the position information of the obstacle to the cloud platform through the Transmission Control Protocol/Internet Protocol (TCP/IP); the cloud platform receives the position information of the obstacle, plans an obstacle avoidance path for the first cloud robot according to it, and sends the planned obstacle avoidance path information to the first cloud robot through TCP/IP; the first cloud robot receives the planned obstacle avoidance path information and completes obstacle avoidance according to it; after obstacle avoidance is finished, the first cloud robot acquires an image of the surrounding environment again with its camera; the first cloud robot performs image processing on the acquired image and judges whether the surrounding environment contains an identification tag identifying the second cloud robot; when it judges that such an identification tag exists, the first cloud robot acquires the position information of the identification tag and sends it to the cloud platform through TCP/IP; the cloud platform receives the position information of the identification tag, plans a path to a designated position for the first cloud robot according to it, and sends the planned path information for reaching the designated position to the first cloud robot through TCP/IP; the first cloud robot receives the planned path information and reaches the designated position accordingly; at the designated position, the first cloud robot wakes up the second cloud robot through voice information and sends the action information of the action to be performed next to the cloud platform through TCP/IP; the cloud platform receives the action information and sends, through TCP/IP, an instruction causing the first cloud robot and the second cloud robot to perform the same action simultaneously to both robots; and the first cloud robot and the second cloud robot each receive the instruction and perform the same action at the same time according to it.
According to another aspect of the present invention, there is provided a cloud robot, including: a sensing system, an information processing system, a control system and an execution system; the sensing system comprises a camera and a distance sensor. The camera is used to acquire images of the surrounding environment by shooting. The information processing system is used to process the acquired image of the surrounding environment, judge whether an obstacle exists in the surrounding environment, and judge whether an identification tag identifying the second cloud robot exists in the surrounding environment. The distance sensor is used to acquire the position information of the obstacle when it is judged that an obstacle exists in the surrounding environment, and to acquire the position information of the identification tag when it is judged that an identification tag identifying the second cloud robot exists in the surrounding environment. The control system is used to send the position information of the obstacle to the cloud platform through the Transmission Control Protocol/Internet Protocol, to send the position information of the identification tag to the cloud platform through the same protocol, to send the action information of the action to be performed next to the cloud platform through the same protocol, to receive the obstacle avoidance path information planned by the cloud platform and complete obstacle avoidance according to it, to receive the path information planned by the cloud platform for reaching the designated position and reach the designated position according to it, and to wake up the second cloud robot at the designated position through voice information;
the execution system is used to receive the instruction, sent by the cloud platform, for performing the same action at the same time, and to perform the same action simultaneously with the second cloud robot according to that instruction.
According to still another aspect of the present invention, there is provided a cloud platform, including: a receiving system, a planning system and a sending system. The receiving system is used to receive the position information of the obstacle sent by the first cloud robot, the position information of the identification tag identifying the second cloud robot sent by the first cloud robot, and the action information, sent by the first cloud robot, of the action the first cloud robot will perform next. The planning system is used to plan an obstacle avoidance path for the first cloud robot according to the position information of the obstacle, and to plan a path to the designated position for the first cloud robot according to the position information of the identification tag. The sending system is used to send the planned obstacle avoidance path information to the first cloud robot through the Transmission Control Protocol/Internet Protocol, to send the planned path information for reaching the designated position to the first cloud robot through the same protocol, and to send, through the same protocol, an instruction causing the first cloud robot and the second cloud robot to perform the same action simultaneously to both robots.
According to the above scheme, the first cloud robot can process the acquired image of the surrounding environment and judge whether an obstacle exists in the surrounding environment; it can receive the planned obstacle avoidance path information and complete obstacle avoidance according to it; and it can interact with the cloud platform through the Transmission Control Protocol/Internet Protocol, both sending information to the cloud platform and receiving information sent by the cloud platform. In this way the cloud robot can effectively identify and avoid obstacles, data transmission is stable, and the probability of packet loss during data transmission is low.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of a cloud robot interaction method based on a cloud platform according to the present invention;
fig. 2 is an exemplary illustration of the image processing in which the cloud robot binarizes the grey-converted image using the open source computer vision library, in an embodiment of the cloud platform-based cloud robot interaction method of the present invention;
fig. 3 is a schematic diagram of the principle of monocular distance measurement in an embodiment of the cloud platform-based cloud robot interaction method of the present invention;
fig. 4 is an exemplary illustration of the first cloud robot acquiring the position information of an obstacle by monocular distance measurement in an embodiment of the cloud platform-based cloud robot interaction method of the present invention;
fig. 5 is an exemplary illustration of the first cloud robot performing image processing on the acquired image of the surrounding environment and judging whether the surrounding environment contains an identification tag identifying the second cloud robot, in an embodiment of the cloud platform-based cloud robot interaction method of the present invention;
fig. 6 is an exemplary illustration of the first cloud robot and the second cloud robot interacting to perform the same action in an embodiment of the cloud platform-based cloud robot interaction method of the present invention;
FIG. 7 is a schematic structural diagram of a cloud robot according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an embodiment of a cloud platform of the present invention;
FIG. 9 is an exemplary illustration of a display interface of a robot query function of a cloud platform according to an embodiment of the present invention;
FIG. 10 is an exemplary illustration of a display interface of a robot task list function of a cloud platform according to an embodiment of the present invention;
fig. 11 is an exemplary illustration of a display interface of task assignment performed by the cloud platform on the robot according to an embodiment of the cloud platform of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be noted that the following examples are only illustrative of the present invention, and do not limit the scope of the present invention. Similarly, the following examples are only some but not all examples of the present invention, and all other examples obtained by those skilled in the art without any inventive work are within the scope of the present invention.
The invention provides a cloud robot interaction method based on a cloud platform, which can realize effective obstacle identification and obstacle avoidance of the cloud robot, is stable in data transmission and has a low probability of packet loss in the data transmission process.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a cloud robot interaction method based on a cloud platform according to the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 1 if the results are substantially the same. As shown in fig. 1, the method comprises the steps of:
s101: the first cloud robot acquires images of the surrounding environment in a camera shooting mode.
S102: and the first cloud robot performs image processing according to the acquired image of the surrounding environment and judges whether the surrounding environment has obstacles or not.
The first cloud robot performs image processing according to the acquired image of the surrounding environment, and determines whether the surrounding environment has an obstacle, which may include:
the first cloud robot performs image processing of converting the gray level of the acquired image of the surrounding environment by using an OpenCV (open source computer vision library) method according to the acquired image of the surrounding environment, performs image processing of binarizing the image of the gray level converted image, and determines whether the surrounding environment has an obstacle or not according to whether the image of the binarized image has a pixel position or not.
Referring to fig. 2, fig. 2 is an exemplary illustration of the image processing in which the cloud robot binarizes the grey-converted image using the open source computer vision library, in an embodiment of the cloud platform-based cloud robot interaction method of the present invention. As shown in fig. 2, binarizing an image means setting the grey value of every point in the image to 0 or 255, so that the whole image presents a clear black-and-white effect; that is, an appropriate threshold is chosen for the 256-level greyscale image to obtain a binary image that still reflects the overall and local features of the original image. Binary images occupy a very important position in digital image processing; in practical image processing many systems are built around binary image processing, so obtaining and analysing the binary image comes first. After the greyscale image is binarized, the aggregate properties of the image depend only on the positions of the pixels whose value is 0 or 255; multi-level grey values are no longer involved, which makes processing simple and reduces the amount of data to process and compress. To obtain an ideal binary image, closed, connected boundaries are usually used to define non-overlapping regions. All pixels whose grey level is greater than or equal to the threshold are judged to belong to the specific object and are given the grey value 255; the other pixels are excluded from the object region and given the grey value 0, representing the background or another object region. If the object has uniform grey values inside and lies on a uniform background with different grey values, a threshold method gives a good segmentation result. If the difference between the object and the background is not reflected in grey values (for example, different textures), that difference can first be converted into a grey-level difference and the image then segmented by threshold selection; dynamically adjusting the threshold makes it possible to observe the segmentation result of the binary image dynamically. In this way the pixel positions of the obstacle can be obtained.
In this embodiment, the imread function of OpenCV (which reads an image from a file) supports many dynamic and static image file formats; the formats supported differ between systems, but BMP (bitmap) is always supported, and PNG (Portable Network Graphics), JPEG (Joint Photographic Experts Group), TIFF (Tag Image File Format) and so on are typically supported as well. This embodiment takes an image in JPG format as an example. The image is then handled in the BGR (Blue-Green-Red) format and colour-space conversion is performed with the cv2.cvtColor function; each pixel is represented by a three-element array whose integers represent the blue, green and red channels respectively. Other colour spaces such as HSV (Hue, Saturation, Value) represent pixels in the same way, except that the value ranges and the number of channels differ; for example, the hue range of the HSV colour space is 0-180.
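By way of illustration, the grey-level conversion and binarization described above might be sketched with OpenCV in Python as follows; the file name and the threshold value of 127 are illustrative assumptions, not values fixed by this embodiment.

    import cv2
    import numpy as np

    # Read the JPG image of the surrounding environment (BGR by default in OpenCV)
    img = cv2.imread("surrounding.jpg")
    # Grey-level conversion
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Binarization: grey values >= 127 become 255 (object), the rest become 0 (background)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    # Pixel positions of the candidate obstacle are the coordinates of the white pixels
    obstacle_pixels = np.column_stack(np.where(binary == 255))
    has_obstacle = obstacle_pixels.size > 0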
In this embodiment, the cloud robot itself provides images in the YUV422 colour coding format, which is not a common colour space. YUV is the colour coding method used by European television: Y, with values in the range 0-255, represents luminance, while U and V, also in the range 0-255, represent the chrominance components. In the YUV colour space the luminance Y is separated from the other two components, and the chrominance is sampled at a lower rate than the luminance, which has the advantage that image quality is not noticeably degraded. The YUV model is derived from the RGB (Red-Green-Blue) model, and the two can be converted into each other by formulas.
S103: the first cloud robot acquires position information of the obstacle when judging whether the obstacle exists in the surrounding environment according to the judgment result of judging whether the obstacle exists in the surrounding environment, and sends the position information of the obstacle to the cloud platform through a Transmission Control Protocol/Internet Protocol (TCP/IP) Protocol.
The first cloud robot acquiring the position information of the obstacle when it judges that an obstacle exists in the surrounding environment, and sending the position information of the obstacle to the cloud platform through the TCP/IP protocol, may include:
according to the judgment result of judging whether the surrounding environment has the obstacle or not, when the surrounding environment has the obstacle, a first cloud robot forms a similar triangle through the actual focal length of the camera and the center point pair of the camera according to the pixel position of the image subjected to image binarization processing by adopting a monocular distance measuring mode, calculates the distance information between the pixel position and the camera according to the proportion of the similar triangle, obtains the distance information between the obstacle corresponding to the pixel position and the camera, obtains the position information of the obstacle, and sends the position information of the obstacle to the cloud platform through a TCP/IP protocol.
In this embodiment, the monocular distance measurement may include:
the target object is found through a single camera, the distance between the camera and the target object can be measured by utilizing the pinhole imaging principle, wherein a series of image processing modes such as image recognition, coordinate transformation and the like can be involved.
In this embodiment, a binocular ranging mode may be further employed, and the binocular ranging mode may include:
the binocular ranging algorithm is characterized in that two cameras with repeated visual angles shoot within a period of time, due to the fact that the two cameras are different in spatial position, correction and transformation can be conducted on obtained images, designated targets in the two images are recognized, corresponding parameters are obtained respectively, and finally the actual distance between the cloud robot and the target position is calculated through mathematical derivation; the error ratio of the binocular ranging algorithm is small, but the calculation difficulty ratio is large, and for the cloud robot, the binocular ranging difficulty ratio is large due to the fact that the overlapping area between the two cameras is almost not large. Therefore, the monocular distance measuring algorithm is preferably adopted in the present embodiment.
Referring to fig. 3, fig. 3 is a schematic diagram of the principle of monocular distance measurement in an embodiment of the cloud platform-based cloud robot interaction method of the present invention. As shown in fig. 3, monocular distance measurement is based on a ranging model derived from the pinhole imaging principle, where R denotes the measured obstacle. The camera is installed at the eye of the cloud robot, its effective focal length is f, its downward pitch angle is α, and its height above the ground is h; the measured point of the obstacle is P, and the horizontal distance between P and the lens centre is d. O0 is the lens centre, o(x0, y0) is the intersection of the optical axis with the image plane and serves as the origin of the image-plane coordinate system, and p'(x, y) is the projection of the measured point P on the image plane.
From fig. 3, the following relationships hold:

β = α + γ (1)

tanβ = (h - H)/d (2)

tanγ = op'/f (3)

where H is the height of the measured point P above the ground. Combining (1), (2) and (3) according to the geometric relationship gives:

d = (h - H) / tan(α + arctan(op'/f)) (4)

where h and α are known, and op' satisfies:

op'^2 = x^2 + y^2 (5)

Let (u, v) be the coordinates in the image coordinate system in units of pixels, let o''(u0, v0) be the frame-memory coordinates of the intersection o(x0, y0) of the camera's optical axis with the image plane, and let p''(u, v) be the frame-memory coordinates of p'(x, y). Let dx and dy be the physical size, in the x-axis and y-axis directions of the image plane, of one pixel in the frame-memory coordinates; then

fx = f/dx, fy = f/dy (6)

With x = (u - u0)·dx and y = (v - v0)·dy, substituting into (5) gives:

op'^2 = [(u - u0)·dx]^2 + [(v - v0)·dy]^2 (7)

where fx, fy, u0 and v0 can be obtained directly from the camera's intrinsic parameters, and the pitch angle α of the camera is known. Combining (4) and (7), the distance d between the measured point P and the camera is obtained:

d = (h - H) / tan(α + arctan(sqrt(((u - u0)/fx)^2 + ((v - v0)/fy)^2))) (8)
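By way of illustration, equation (8) might be evaluated in Python as follows; the intrinsic parameters, mounting height, pitch angle and example pixel coordinates are illustrative assumptions rather than values fixed by this embodiment.

    import math

    def monocular_distance(u, v, fx, fy, u0, v0, h, H, alpha):
        # op'/f obtained from equations (6) and (7)
        op_over_f = math.sqrt(((u - u0) / fx) ** 2 + ((v - v0) / fy) ** 2)
        gamma = math.atan(op_over_f)          # equation (3)
        beta = alpha + gamma                  # equation (1)
        return (h - H) / math.tan(beta)       # equations (2)/(4), i.e. equation (8)

    # Example: point imaged 80 px below the principal point, camera 0.45 m above the
    # ground and pitched down 0.35 rad, measured point on the ground (H = 0)
    d = monocular_distance(u=640, v=560, fx=800.0, fy=800.0, u0=640, v0=480,
                           h=0.45, H=0.0, alpha=0.35)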
referring to fig. 4, fig. 4 is an exemplary illustration of a first cloud robot acquiring position information of an obstacle in a monocular distance measuring manner according to an embodiment of the cloud robot interaction method based on the cloud platform. As shown in fig. 4, when determining that an obstacle exists in the surrounding environment according to the determination result of determining whether the obstacle exists in the surrounding environment, the first cloud robot forms a similar triangle by using a monocular distance measurement method according to the pixel position where the image subjected to the image binarization processing exists, calculates distance information between the pixel position and the camera according to the ratio of the similar triangle, obtains distance information between the obstacle corresponding to the pixel position and the camera, obtains position information of the obstacle, and sends the position information of the obstacle to the cloud platform through a TCP/IP protocol.
S104: the cloud platform receives the position information of the obstacle, plans an obstacle avoidance path for the first cloud robot according to the position information of the obstacle, and sends the planned obstacle avoidance path information to the first cloud robot through a TCP/IP protocol.
S105: and the first cloud robot receives the planned obstacle avoidance path information and finishes obstacle avoidance according to the planned obstacle avoidance path information.
S106: after the first cloud robot finishes obstacle avoidance, a camera shooting mode is adopted, and images of the surrounding environment are obtained again.
S107: and the first cloud robot performs image processing according to the acquired image of the surrounding environment and judges whether the surrounding environment has an identification tag for identifying the second cloud robot.
The first cloud robot performing image processing on the acquired image of the surrounding environment and judging whether the surrounding environment contains an identification tag identifying the second cloud robot may include:
and the first cloud robot performs image processing by adopting an identification tag recognition mode according to the acquired image of the surrounding environment, and judges whether the surrounding environment has an identification tag for identifying the second cloud robot.
In this embodiment, because the colouring of the second cloud robot is too complex, using image processing again could introduce large errors, so identification tag recognition is adopted instead. After the first cloud robot finishes obstacle avoidance, it starts to recognise the identification tag attached in advance to the head of the second cloud robot; through the tag and the cloud robot's tag-recognition function, the position of the second cloud robot can be located indirectly.
Referring to fig. 5, fig. 5 is an exemplary illustration of the first cloud robot performing image processing on the acquired image of the surrounding environment and judging whether the surrounding environment contains an identification tag identifying the second cloud robot, in an embodiment of the cloud platform-based cloud robot interaction method of the present invention. As shown in fig. 5, the first cloud robot processes the acquired image of the surrounding environment using identification tag recognition and judges whether an identification tag identifying the second cloud robot exists in the surrounding environment.
S108: and the first cloud robot acquires the position information of the identification label according to the judgment result of whether the identification label for identifying the second cloud robot exists in the surrounding environment or not and sends the position information of the identification label to the cloud platform through a TCP/IP protocol when the identification label for identifying the second cloud robot exists in the surrounding environment is judged.
S109: the cloud platform receives the position information of the identification label, plans a path reaching the designated position for the first cloud robot according to the position information of the identification label, and sends the planned path information reaching the designated position to the first cloud robot through a TCP/IP protocol.
S110: and the first cloud robot receives the planned path information for reaching the specified position and reaches the specified position according to the planned path information for reaching the specified position.
S111: the first cloud robot wakes up the second cloud robot at the designated position through voice information, and sends action information of actions to be done next to the cloud platform through a TCP/IP protocol.
In this embodiment, when the first cloud robot reaches the designated position it starts to emit voice. The second cloud robot matches the voice emitted by the first cloud robot against its own voice library; if the utterance matches the corpus of the second cloud robot's voice library, the next step begins. If the first cloud robot does not receive feedback from the cloud platform that the voice information has been recognised, it continues to issue voice instructions until the second cloud robot responds; if the first cloud robot has emitted the voice information 5 times in succession and the second cloud robot still gives no feedback, the second cloud robot is regarded as unable to be woken up.
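By way of illustration, the wake-up retry logic described above might be sketched as follows, assuming a NAO-style robot with the NAOqi Python SDK; the wake phrase and the wait_for_platform_ack helper (which would poll the cloud platform for the "voice recognised" feedback) are hypothetical.

    import time
    from naoqi import ALProxy

    def wake_up_second_robot(robot_ip, wait_for_platform_ack, attempts=5):
        tts = ALProxy("ALTextToSpeech", robot_ip, 9559)
        for _ in range(attempts):
            tts.say("wake up")                      # voice instruction to the second robot
            if wait_for_platform_ack(timeout=3.0):  # feedback relayed by the cloud platform
                return True                         # the second cloud robot recognised the voice
            time.sleep(1.0)
        return False                                # no feedback after 5 attempts: cannot be woken up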
In this embodiment, after the second cloud robot recognises the voice, the first cloud robot reports the action it will perform next to the cloud platform; the cloud platform packages the action instruction and sends it to the second cloud robot with a preset delay of 5 seconds, so that the second cloud robot can imitate the first cloud robot and perform the same action at the same time, achieving the effect of the second cloud robot imitating the first. In this process the cloud platform acts as middleware, so the data can be stored on the cloud platform and accessed by both the first and the second cloud robot.
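By way of illustration, the middleware role of the cloud platform described above might be sketched as follows; the robot addresses, port and message format are assumptions, not part of this embodiment.

    import json
    import socket
    import time

    # Assumed addresses of the two cloud robots
    ROBOTS = {"robot1": ("192.168.1.11", 9001), "robot2": ("192.168.1.12", 9001)}

    def relay_action(action_info):
        # Package the action instruction received from the first cloud robot
        instruction = json.dumps({"cmd": "do_action", "action": action_info}).encode("utf-8")
        time.sleep(5)                            # the preset 5-second delay
        for addr in ROBOTS.values():             # send the same instruction to both robots
            with socket.create_connection(addr) as s:
                s.sendall(instruction)

    relay_action({"name": "raise_right_arm"})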
S112: the cloud platform receives the action information and sends an instruction which enables the first cloud robot and the second cloud robot to simultaneously do the same action to the first cloud robot and the second cloud robot through a TCP/IP protocol.
S113: the first cloud robot and the second cloud robot respectively receive the instruction for simultaneously doing the same action, and simultaneously do the same action according to the instruction for simultaneously doing the same action.
Referring to fig. 6, fig. 6 is an exemplary illustration of a first cloud robot and a second cloud robot interacting to perform the same action according to an embodiment of the cloud robot interaction method based on the cloud platform. As shown in fig. 6, the two cloud robots respectively receive the commands for simultaneously performing the same operation, and simultaneously perform the same operation according to the commands for simultaneously performing the same operation.
In this embodiment, the simulation experiment of the cloud robot can be performed on the Webots humanoid-robot simulation platform, which provides simulations of most robots on the market, including the cloud robot used here. The platform provides many effective predefined controllers and experiments for the cloud robot. The procedure includes:
the first step is as follows: starting the simulation cloud robot, and opening the view according to the path
\ files \ projects \ robots \ NAO \ works \ NAO. wbt (the default world provides only one cloud robot, so a second cloud robot needs to be added manually) Add/PROTO/robots/Aldebaran/NAO;
the second step: connect Choregraphe to the humanoid robot; start Choregraphe and choose to connect, or click the connect button;
the third step: test the behaviour of the cloud robot in Webots.
In this embodiment, it must first be ensured that the computer has successfully connected to the cloud robot; the written program is then imported so that the cloud robot performs the corresponding actions. Note that the cloud robot's stiffness must already be turned on, otherwise the actions may fail.
In this embodiment, the first cloud robot locates the position of the obstacle through the camera: the camera is first called, the image is acquired by calling the camera, and by processing the image captured by the camera the position of the second cloud robot is acquired and the position of the obstacle is determined.
In this embodiment, after the first cloud robot calls the camera, the camera takes a picture; through image processing and the monocular distance measurement algorithm, the actual focal length of the first cloud robot's camera and the pixel positions relative to the camera's centre point form similar triangles, and the distance from the obstacle to the robot is calculated from the proportion of the similar triangles in order to set the distance the robot will travel.
In this embodiment, before the first cloud robot starts to move, its stiffness is set to ensure that it can walk normally; the cloud platform then sends the coordinates of the obstacle to the first cloud robot, the distance the first cloud robot needs to walk is set, and a suitable obstacle-avoidance angle and the safe distance to be kept are obtained by calculation. The safe distance is set with the participation of the first cloud robot's sonar sensor: after walking a certain distance, the robot judges whether the safe distance has been reached, and the sonar detector is used to judge the distance to the obstacle accurately and to guarantee the distance between the first cloud robot and the obstacle.
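By way of illustration, the walking and sonar-based safe-distance check described above might be sketched as follows, assuming the NAOqi Python SDK; the robot IP, safe distance and avoidance angle are illustrative values, not parameters fixed by this embodiment.

    from naoqi import ALProxy

    ROBOT_IP, PORT = "192.168.1.11", 9559
    SAFE_DISTANCE = 0.40                      # metres, checked with the sonar sensor

    motion = ALProxy("ALMotion", ROBOT_IP, PORT)
    sonar = ALProxy("ALSonar", ROBOT_IP, PORT)
    memory = ALProxy("ALMemory", ROBOT_IP, PORT)

    motion.setStiffnesses("Body", 1.0)        # stiffness must be on before walking
    sonar.subscribe("obstacle_avoidance")     # start the sonar emitters/receivers

    distance_to_walk, avoid_angle = 0.60, 0.50   # values assumed to come from the cloud platform
    motion.moveTo(distance_to_walk, 0.0, 0.0)    # walk towards the obstacle

    # Read the left ultrasonic (sonar) distance from robot memory
    left = memory.getData("Device/SubDeviceList/US/Left/Sensor/Value")
    if left < SAFE_DISTANCE:
        motion.moveTo(0.0, 0.0, avoid_angle)     # safe distance reached: turn to avoid the obstacle
    else:
        motion.moveTo(0.10, 0.0, 0.0)            # keep advancing towards the designated position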
In this embodiment, after the first cloud robot completes the designated movement and reaches the designated position, it takes a second photograph and judges from the size of the obstacle in the camera's photograph whether it has reached the designated position; if so, it starts turning to avoid the obstacle, and if not, it continues to advance until it reaches the designated position and then turns to avoid the obstacle.
It can be found that, in this embodiment, the first cloud robot may perform image processing according to the acquired image of the surrounding environment, determine whether the surrounding environment has an obstacle, may receive the planned obstacle avoidance path information, and complete obstacle avoidance according to the planned obstacle avoidance path information, may interact with the cloud platform through a TCP/IP protocol, including sending information to the cloud platform and receiving information sent by the cloud platform, may implement that the cloud robot effectively identifies an obstacle and avoids an obstacle, and is stable in data transmission, and has a small probability of packet loss in the data transmission process.
The invention further provides the cloud robot, the cloud robot can effectively identify the obstacles and avoid the obstacles, data transmission is stable, and the probability of packet loss in the data transmission process is low.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a cloud robot according to an embodiment of the present invention. In this embodiment, the cloud robot 70 is the first cloud robot in the above embodiments, and the cloud robot 70 includes a sensing system 71, an information processing system 72, a control system 73, and an execution system 74.
The sensing system 71 may include a camera 711 and a distance sensor 712.
The camera 711 is configured to acquire an image of a surrounding environment by using an imaging method.
The information processing system 72 is configured to perform image processing based on the acquired image of the surrounding environment, determine whether an obstacle exists in the surrounding environment, and determine whether an identification tag identifying the second cloud robot exists in the surrounding environment.
The distance sensor 712 is configured to obtain position information of the obstacle when it is determined that the obstacle exists in the surrounding environment according to the determination result of determining whether the obstacle exists in the surrounding environment, and obtain position information of the identification tag when it is determined that the identification tag identifying the second cloud robot exists in the surrounding environment according to the determination result of determining whether the identification tag identifying the second cloud robot exists in the surrounding environment.
The control system 73 is configured to send the position information of the obstacle to the cloud platform through a TCP/IP protocol, send the position information of the identification tag to the cloud platform through the TCP/IP protocol, send the action information of the action to be performed next to the cloud platform through the TCP/IP protocol, receive obstacle avoidance path information planned by the cloud platform, complete obstacle avoidance according to the planned obstacle avoidance path information, receive path information planned by the cloud platform to reach the designated position, reach the designated position according to the planned path information to reach the designated position, and wake up the second cloud robot at the designated position through voice information.
The execution system 74 is configured to receive the instruction, sent by the cloud platform, for performing the same action at the same time, and to perform the same action simultaneously with the second cloud robot according to that instruction.
Optionally, the information processing system 72 may be specifically configured to:
performing grey-level conversion on the acquired image of the surrounding environment using the open source computer vision library, binarizing the grey-converted image, and judging whether an obstacle exists in the surrounding environment according to whether the binarized image contains obstacle pixel positions: when the binarized image contains such pixel positions, it is judged that the surrounding environment has an obstacle, and when it contains none, it is judged that the surrounding environment has no obstacle.
Optionally, the information processing system 72 may be specifically configured to:
and processing the image by adopting an identification tag identification mode according to the acquired image of the surrounding environment, and judging whether the surrounding environment has an identification tag for identifying the second cloud robot.
Optionally, the distance sensor 712 may be specifically configured to:
when it is judged, according to the result of judging whether an obstacle exists in the surrounding environment, that an obstacle does exist, applying monocular distance measurement to the pixel positions found in the binarized image: the actual focal length of the camera and the camera's pixel centre point form similar triangles with those pixel positions, the distance from the pixel position to the camera is calculated from the proportion of the similar triangles, the distance between the obstacle corresponding to the pixel position and the camera is obtained, the position information of the obstacle is obtained, and the position information of the obstacle is sent to the cloud platform through the TCP/IP protocol.
Alternatively, the distance sensor 712 may be at least one of a sonar sensor, an infrared sensor, a lidar sensor, an ultrasonic sensor, and the like, and the at least one of the sonar sensor, the infrared sensor, the lidar sensor, the ultrasonic sensor, and the like may be configured to acquire, when it is determined that an obstacle exists in the surrounding environment, position information of the obstacle according to a determination result of whether an obstacle exists in the surrounding environment, and acquire, when it is determined that an identification tag identifying the second cloud robot exists in the surrounding environment, position information of the identification tag according to a determination result of whether an identification tag identifying the second cloud robot exists in the surrounding environment.
In this embodiment, the definition of the cloud robot may differ under different conditions; for example, in a factory a single mechanical arm may constitute one cloud robot, whereas in home service a complete humanoid robot is one cloud robot.
In this embodiment, the camera 711 is, broadly speaking, the most important visual sensor of the cloud robot 70. The camera 711 of the cloud robot 70 can provide a resolution of 1280x960 at 30 frames per second, so that the cloud robot 70 can clearly identify objects such as obstacles and identification tags in its field of view. At the same time, the format and size of the images can be unified to ease management on the cloud platform, and an automatic exposure algorithm can be adopted to keep the captured images as sharp as possible; for the automatic exposure algorithm, for example, the image can be subdivided into 25 windows organised as a 5x5 grid.
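By way of illustration, the 5x5 window subdivision mentioned above for automatic exposure might be sketched as follows; how the window means are weighted into an exposure decision is not specified here and is left out of the sketch.

    import numpy as np

    def window_means(gray, rows=5, cols=5):
        # Mean brightness of each of the 25 windows of the 5x5 grid
        h, w = gray.shape
        means = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                win = gray[i * h // rows:(i + 1) * h // rows,
                           j * w // cols:(j + 1) * w // cols]
                means[i, j] = win.mean()
        return means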
In this embodiment, the basic principle of the distance sensor 712 is to emit light-wave or sound-wave signals and the like; when the signals reach the detected object and are reflected back to the cloud robot 70, the distance between the cloud robot 70 and the surrounding detected object is calculated, processed and measured. Distance sensors are mostly sonar, infrared, lidar, ultrasonic and so on, but different sensors have different detection ranges: sonar and ultrasonic sensors perform one-dimensional detection, lidar performs two-dimensional data acquisition, and another common sensor, the Kinect, performs three-dimensional data detection. For general obstacle detection only one-dimensional data points are needed, so only sonar detection needs to be set up. Left and right sonar emitters and receivers can therefore be configured for the one-dimensional data points, so that an obstacle can be fully detected from both the left and the right direction and the optimal obstacle-avoidance route can be selected.
In this embodiment, after each sensor of the cloud robot 70 is configured, the control system 73 can obtain the data of each sensor through its driver, encapsulate the data through the defined interface, and send the encapsulated data to the cloud platform using the TCP/IP protocol, thereby realising data sharing and the corresponding data transmission and service processing.
It can be found that, in this embodiment, the cloud robot may perform image processing according to the acquired image of the surrounding environment, determine whether the surrounding environment has an obstacle, may receive the planned obstacle avoidance path information, and complete obstacle avoidance according to the planned obstacle avoidance path information, may interact with the cloud platform through the TCP/IP protocol, including sending information to the cloud platform and receiving information sent by the cloud platform, may implement that the cloud robot effectively identifies an obstacle and avoids an obstacle, and is stable in data transmission, and has a small probability of packet loss in the data transmission process.
The invention further provides a cloud platform, the cloud robot can effectively identify the obstacles and avoid the obstacles, data transmission is stable, and the probability of packet loss in the data transmission process is low.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a cloud platform according to an embodiment of the present invention. In this embodiment, the cloud platform 80 is the cloud platform in the above embodiments, and the cloud platform 80 includes a receiving system 81, a planning system 82, and a sending system 83.
The receiving system 81 is configured to receive the position information of the obstacle sent by the first cloud robot, and receive the position information of the identification tag identifying the second cloud robot sent by the first cloud robot, and receive the motion information of the motion to be performed next by the first cloud robot sent by the first cloud robot.
The planning system 82 is configured to plan an obstacle avoidance path for the first cloud robot according to the position information of the obstacle, and plan a path to a specified position for the first cloud robot according to the position information of the identification tag.
The sending system 83 is configured to send the planned obstacle avoidance path information to the first cloud robot through a TCP/IP protocol, send the planned path information to the designated location to the first cloud robot through the TCP/IP protocol, and send an instruction to the first cloud robot and the second cloud robot to make the first cloud robot and the second cloud robot simultaneously perform the same action through the TCP/IP protocol.
In this embodiment, the cloud platform 80 can adopt a web-page design: the user can directly enter the website address to log in to the cloud platform 80. The cloud platform 80 can further include three other main functions, namely robot query, task list and task execution. The robot query function connects to the cloud robot and at the same time queries its state. The state of the cloud robot includes a device state, which shows whether the cloud robot is in good condition; if the cloud robot cannot be connected, an error is reported. The state of the cloud robot also includes a connection state indicating whether the cloud robot is connected, and the connection and disconnection between the cloud platform 80 and the cloud robot can be operated there.
In this embodiment, in the task list interface, different tasks may be classified, and a user may autonomously select a task that the cloud robot needs to complete.
In this embodiment, when multiple cloud robots execute a task, different cloud robots are responsible for different modules, so task allocation is required; the cloud platform 80 allocates the tasks before execution, ensuring that the modules do not become disordered.
In this embodiment, after the tasks are allocated they are executed; the cloud robot then returns image information, data information, voice information and the like to the cloud platform 80, and the cloud platform 80 displays this information directly on its display interface, so that the user can see intuitively whether the cloud robot has completed the action according to the instruction. After a task is completed, an instruction indicating that the task has been completed is returned, the user is informed of the completion, and the next operation can be performed.
Referring to fig. 9, fig. 9 is an exemplary illustration of the display interface of the robot query function of the cloud platform in an embodiment of the cloud platform of the present invention. As shown in fig. 9, the page of the robot query function can connect to the cloud robot and at the same time query its state. The state of the cloud robot includes a device state showing whether the cloud robot is in good condition (an error is reported if it cannot be connected) and a connection state indicating whether the cloud robot is connected.
Referring to fig. 10 and 11, fig. 10 is a schematic illustration of a display interface of a robot task list function of a cloud platform in an embodiment of a cloud platform of the present invention, and fig. 11 is a schematic illustration of a display interface of a robot task assignment function of a cloud platform in an embodiment of a cloud platform of the present invention. As shown in fig. 10 and 11, in the task list interface, different tasks may be classified, and a user may autonomously select a task that the cloud robot needs to complete. When a plurality of cloud robots execute tasks, different cloud robots can be responsible for different modules, so that task allocation is needed, and the cloud platform can perform task allocation before executing the tasks, so that the condition that the modules are disordered can be avoided.
It can be found that, in this embodiment, the cloud platform may plan an obstacle avoidance path for the first cloud robot according to the position information of the obstacle, plan a path to the specified position for the first cloud robot according to the position information of the identification tag, send the planned obstacle avoidance path information to the first cloud robot through the TCP/IP protocol, send the planned path to the specified position to the first cloud robot through the TCP/IP protocol, and send an instruction, which enables the first cloud robot and the second cloud robot to simultaneously perform the same action, to the first cloud robot and the second cloud robot through the TCP/IP protocol, so that the cloud robot can effectively identify the obstacle and avoid the obstacle, data transmission is stable, and the probability of packet loss is small in the data transmission process.
It should be noted that the core content of the cloud platform-based cloud robot interaction method, the cloud robot, and the cloud platform provided by the present invention is the design and construction of the cloud platform and the design of the robot actions. The main flow of executing an action may include: the first cloud robot obtains an image by calling its camera, obtains the position of an obstacle through image processing with an open source computer vision library and a monocular distance measurement algorithm, and sends the position to the cloud platform; the cloud platform designs an optimal obstacle avoidance path for the first cloud robot; after the first cloud robot finishes obstacle avoidance, it calls the camera to identify the tag of the second cloud robot, locates the position of the second cloud robot, and sends this position information to the cloud platform; the first cloud robot then walks to a point near the tag and wakes up the second cloud robot through voice information; finally, the cloud platform sends an instruction that makes the first cloud robot and the second cloud robot perform the same action at the same time.
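The pseudocode-style sketch below mirrors this flow from the first cloud robot's point of view; every helper (capture_image, detect_obstacle, request_avoidance_path, and so on) is a hypothetical placeholder standing in for the NAOqi, OpenCV, and TCP/IP calls described in the embodiments.

```python
def run_interaction(robot, platform):
    """Hypothetical end-to-end flow for the first cloud robot."""
    # 1. Obstacle detection and avoidance.
    image = robot.capture_image()                        # camera frame
    obstacle = robot.detect_obstacle(image)              # binarization + monocular ranging
    if obstacle is not None:
        path = platform.request_avoidance_path(obstacle) # sent and received over TCP/IP
        robot.follow_path(path)

    # 2. Locate the second cloud robot by its identification tag.
    image = robot.capture_image()
    tag = robot.find_identification_tag(image)
    if tag is not None:
        path = platform.request_path_to(tag.position)
        robot.follow_path(path)

    # 3. Wake the second robot by voice, then both act on the platform's instruction.
    robot.say("wake up")
    platform.send_action_to_both("raise_arms")
```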
It should be noted that, in the cloud platform-based cloud robot interaction method, the cloud robot, and the cloud platform provided by the invention, data are transmitted through a specified interface at the cloud robot end, which reduces the probability of packet loss during transmission, so that data can be transmitted effectively and relayed back to another cloud robot.
It should be noted that the cloud robot platform is, as the name implies, a robot sharing platform: a platform that realizes intercommunication between robots and between the robot and the user.
It should be noted that, in the cloud robot interaction method based on the cloud platform and the cloud robot provided by the invention, the control environment NAOqi and the simulation software environment Webots of the cloud robot may first be configured on a computer. Programming is then carried out for the tasks to be realized; some Python header file packages may need to be installed while debugging the code. A binarization algorithm is mainly applied in the image processing module, and a monocular distance measurement algorithm is used in the distance measurement module, so as to achieve an intelligent effect. The most difficult part of constructing the cloud platform is the stability of data transmission; the TCP/IP protocol of computer networks is mainly applied to ensure that data can be transmitted to the cloud platform effectively, quickly, and stably, and to support data interaction between the cloud platform and the cloud robot. The local cloud can be built with WAMP (Apache + MySQL/MariaDB + Perl/PHP/Python under Windows, a set of open source software commonly used for building a dynamic website or server), and intercommunication with other hosts can be realized through routing and switching, so that the effect of a global cloud can be achieved and the data of the local cloud can be shared with the cloud robot of another host, thereby achieving a commercial effect.
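As a concrete but non-authoritative sketch of the binarization and monocular distance measurement steps mentioned above, the snippet below uses the open source computer vision library OpenCV; the focal length, the known obstacle width, and the threshold value are assumed calibration constants, not values given in the patent.

```python
import cv2
import numpy as np

FOCAL_LENGTH_PX = 800.0   # assumed camera focal length in pixels (calibration value)
KNOWN_WIDTH_M = 0.20      # assumed real-world width of the obstacle in metres

def find_obstacle_distance(frame):
    """Grayscale -> binarize -> locate obstacle pixels -> similar-triangle ranging."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)  # dark obstacle, light floor
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # no obstacle pixels in the binarized image
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    # Similar triangles: distance = real width * focal length / width in pixels.
    distance_m = KNOWN_WIDTH_M * FOCAL_LENGTH_PX / float(w)
    return {"pixel_position": (x + w // 2, y + h // 2), "distance_m": distance_m}
```

The same proportional relationship appears in the claims: the pixel position, the camera's pixel center point, and the actual focal length form the similar triangle from which the obstacle's distance is recovered.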
It should be noted that, in the cloud platform-based cloud robot interaction method and the cloud robot provided by the invention, cloud robot technology is key content in the research of artificially intelligent robots. Artificial intelligence greatly promotes the industrialization of service robots; in the future, workers in factory workshop production will be replaced by intelligent robots, and data communication and interaction between robots are essential if humans are to control and manage the robots from the background to realize production. At the same time, the interaction of a large number of robots can lead to data explosion and blocked communication channels. The development of cloud robot technology promotes the development of artificial intelligence, and the development of intelligent robots in turn makes cloud robot technology develop more rapidly. For service robots to be popularized and capable of replacing human beings, a very high degree of intelligence is required of the robot. Judging from the maturity of current market applications and the future market space, the industrial service robot is still in its development stage, and research on the interaction of dual cloud robots through a cloud platform has a profound influence on the application and production of cloud robots.
In the several embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be substantially or partially implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a part of the embodiments of the present invention, and not intended to limit the scope of the present invention, and all equivalent devices or equivalent processes performed by the present invention through the contents of the specification and the drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A cloud robot interaction method based on a cloud platform is characterized by comprising the following steps:
the first cloud robot acquires images of the surrounding environment in a camera shooting mode;
the first cloud robot carries out image processing according to the acquired image of the surrounding environment and judges whether the surrounding environment has obstacles or not;
the first cloud robot acquires position information of the obstacle when, according to a judgment result of judging whether the obstacle exists in the surrounding environment or not, it is judged that the obstacle exists in the surrounding environment, and sends the position information of the obstacle to a cloud platform through a transmission control protocol/Internet protocol;
the cloud platform receives the position information of the obstacle, plans an obstacle avoiding path for the first cloud robot according to the position information of the obstacle, and sends the planned obstacle avoiding path information to the first cloud robot through a transmission control protocol/Internet interconnection protocol;
the first cloud robot receives the planned obstacle avoidance path information and completes obstacle avoidance according to the planned obstacle avoidance path information;
after obstacle avoidance is finished, the first cloud robot acquires the image of the surrounding environment again in a camera shooting mode;
the first cloud robot carries out image processing according to the acquired image of the surrounding environment and judges whether the surrounding environment has an identification tag for identifying the second cloud robot;
the first cloud robot acquires the position information of the identification tag when, according to the judgment result of judging whether the surrounding environment has the identification tag identifying the second cloud robot, it is judged that the identification tag identifying the second cloud robot exists in the surrounding environment, and sends the position information of the identification tag to the cloud platform through a transmission control protocol/Internet interconnection protocol;
the cloud platform receives the position information of the identification tag, plans a path to a specified position for the first cloud robot according to the position information of the identification tag, and sends the planned path information to the specified position to the first cloud robot through a transmission control protocol/Internet protocol;
the first cloud robot receives the planned path information reaching the designated position and reaches the designated position according to the planned path information reaching the designated position;
the first cloud robot wakes up the second cloud robot at the designated position through voice information, and sends action information of actions to be done next to the cloud platform through a transmission control protocol/Internet protocol;
the cloud platform receives the action information and sends an instruction enabling the first cloud robot and the second cloud robot to simultaneously do the same action to the first cloud robot and the second cloud robot through a transmission control protocol/Internet protocol;
and the first cloud robot and the second cloud robot respectively receive the instructions for simultaneously doing the same action, and simultaneously do the same action according to the instructions for simultaneously doing the same action.
2. The cloud robot interaction method based on the cloud platform as claimed in claim 1, wherein the step of performing image processing by the first cloud robot according to the acquired image of the surrounding environment to determine whether the surrounding environment has an obstacle comprises:
the first cloud robot carries out image processing of gray level conversion on the acquired image of the surrounding environment by adopting an open source computer vision library mode according to the acquired image of the surrounding environment, carries out image processing of image binarization on the image after the gray level conversion, and judges whether obstacles exist in the surrounding environment according to whether pixel positions exist in the image after the image binarization processing, wherein it is judged that obstacles exist in the surrounding environment when the pixel positions exist in the image after the image binarization processing, and it is judged that no obstacles exist in the surrounding environment when the pixel positions do not exist in the image after the image binarization processing.
3. The cloud robot interaction method based on the cloud platform as claimed in claim 2, wherein the first cloud robot acquires the position information of the obstacle when it is determined that the obstacle exists in the surrounding environment according to the determination result of determining whether the obstacle exists in the surrounding environment, and transmits the position information of the obstacle to the cloud platform through a transmission control protocol/internet protocol, including:
and the first cloud robot, when judging that the obstacle exists in the surrounding environment, adopts a monocular distance measurement mode according to the pixel position of the image subjected to image binarization processing, forms a similar triangle through the actual focal length of the camera and the pixel center point of the camera, calculates the distance information between the pixel position and the camera according to the proportion of the similar triangle, obtains the distance information between the obstacle corresponding to the pixel position and the camera, obtains the position information of the obstacle, and sends the position information of the obstacle to a cloud platform through a transmission control protocol/Internet interconnection protocol.
4. The cloud robot interaction method based on the cloud platform as claimed in claim 1, wherein the step of the first cloud robot performing image processing according to the acquired image of the surrounding environment and determining whether the surrounding environment has an identification tag for identifying the second cloud robot comprises:
and the first cloud robot performs image processing by adopting an identification tag recognition mode according to the acquired image of the surrounding environment, and judges whether the surrounding environment has an identification tag for identifying the second cloud robot.
5. A cloud robot, comprising:
the system comprises a sensing system, an information processing system, a control system and an execution system;
the sensing system comprises a camera and a distance sensor;
the camera is used for acquiring images of the surrounding environment in a shooting mode;
the information processing system is used for processing the image according to the acquired image of the surrounding environment, judging whether the surrounding environment has obstacles or not and judging whether the surrounding environment has an identification tag for identifying the second cloud robot or not;
the distance sensor is used for acquiring position information of the obstacle when, according to the judgment result of judging whether the obstacle exists in the surrounding environment, it is judged that the obstacle exists in the surrounding environment, and acquiring position information of the identification tag when, according to the judgment result of judging whether the identification tag for identifying the second cloud robot exists in the surrounding environment, it is judged that the identification tag for identifying the second cloud robot exists in the surrounding environment;
the control system is used for sending the position information of the obstacle to a cloud platform through a transmission control protocol/Internet interconnection protocol, sending the position information of the identification tag to the cloud platform through the transmission control protocol/Internet interconnection protocol, sending the action information of the action to be done next to the cloud platform through the transmission control protocol/Internet interconnection protocol, receiving obstacle avoidance path information planned by the cloud platform, completing obstacle avoidance according to the planned obstacle avoidance path information, receiving path information planned by the cloud platform and reaching the specified position, reaching the specified position according to the planned path information reaching the specified position, and awakening a second cloud robot at the specified position through voice information;
the execution system is used for receiving the instruction which is sent by the cloud platform and does the same action at the same time, and does the same action with the second cloud robot at the same time according to the instruction which does the same action at the same time.
6. The cloud robot of claim 5, wherein the information processing system is specifically configured to:
according to the acquired image of the surrounding environment, performing image processing of gray level conversion on the acquired image of the surrounding environment in an open-source computer vision library mode, performing image binarization image processing on the image subjected to gray level conversion, and judging whether the surrounding environment has obstacles or not according to whether the image subjected to image binarization processing has a pixel position or not, wherein when the image subjected to image binarization processing has a pixel position, the surrounding environment is judged to have obstacles, and when the image subjected to image binarization processing does not have a pixel position, the surrounding environment is judged to have no obstacles.
7. The cloud robot of claim 5, wherein the information processing system is specifically configured to:
and processing the image by adopting an identification tag identification mode according to the acquired image of the surrounding environment, and judging whether the surrounding environment has an identification tag for identifying the second cloud robot.
8. The cloud robot of claim 6, wherein said distance sensor is specifically configured to:
according to the judgment result of judging whether the surrounding environment has the obstacle or not, when the surrounding environment has the obstacle, a monocular distance measurement mode is adopted according to the pixel position of the image subjected to image binarization processing, a similar triangle is formed through the actual focal length of the camera and the pixel center point pair of the camera, the distance information between the pixel position and the camera is calculated according to the proportion of the similar triangle, the distance information between the obstacle corresponding to the pixel position and the camera is obtained, the position information of the obstacle is obtained, and the position information of the obstacle is sent to a cloud platform through a transmission control protocol/Internet interconnection protocol.
9. The cloud robot of claim 5, wherein said distance sensor is at least one of a sonar sensor, an infrared sensor, a lidar sensor, and an ultrasonic sensor.
10. A cloud platform, comprising:
a receiving system, a planning system and a sending system;
the receiving system is used for receiving position information of the obstacle sent by the first cloud robot, receiving position information of an identification tag which is sent by the first cloud robot and identifies the second cloud robot, and receiving action information of an action to be taken next by the first cloud robot, sent by the first cloud robot;
the planning system is used for planning an obstacle avoidance path for the first cloud robot according to the position information of the obstacle and planning a path reaching a specified position for the first cloud robot according to the position information of the identification tag;
the sending system is used for sending the planned obstacle avoidance path information to the first cloud robot through a transmission control protocol/Internet interconnection protocol, sending the planned path information reaching the designated position to the first cloud robot through the transmission control protocol/Internet interconnection protocol, and sending an instruction enabling the first cloud robot and the second cloud robot to simultaneously perform the same action to the first cloud robot and the second cloud robot through the transmission control protocol/Internet interconnection protocol.
CN201811032020.1A 2018-09-05 2018-09-05 Cloud robot interaction method based on cloud platform, cloud robot and cloud platform Expired - Fee Related CN108789421B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811032020.1A CN108789421B (en) 2018-09-05 2018-09-05 Cloud robot interaction method based on cloud platform, cloud robot and cloud platform

Publications (2)

Publication Number Publication Date
CN108789421A CN108789421A (en) 2018-11-13
CN108789421B (en) 2020-10-16

Family

ID=64081684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811032020.1A Expired - Fee Related CN108789421B (en) 2018-09-05 2018-09-05 Cloud robot interaction method based on cloud platform, cloud robot and cloud platform

Country Status (1)

Country Link
CN (1) CN108789421B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109940612A (en) * 2019-03-04 2019-06-28 东北师范大学 Intelligent barrier avoiding robot and its barrier-avoiding method based on a wordline laser
CN110275532B (en) * 2019-06-21 2020-12-15 珠海格力智能装备有限公司 Robot control method and device and visual equipment control method and device
CN111474947A (en) * 2020-05-07 2020-07-31 北京云迹科技有限公司 Robot obstacle avoidance method, device and system
CN113485330B (en) * 2021-07-01 2022-07-12 苏州罗伯特木牛流马物流技术有限公司 Robot logistics carrying system and method based on Bluetooth base station positioning and scheduling
CN114872029B (en) * 2022-06-09 2024-02-02 深圳市巨龙创视科技有限公司 Robot vision recognition system
CN117910188A (en) * 2022-10-10 2024-04-19 华为云计算技术有限公司 Simulation training method and device and computing device cluster
CN117250965B (en) * 2023-11-20 2024-02-23 广东电网有限责任公司佛山供电局 Robot obstacle avoidance rapid path reconstruction method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8478901B1 (en) * 2011-05-06 2013-07-02 Google Inc. Methods and systems for robot cloud computing using slug trails
CN105856243A (en) * 2016-06-28 2016-08-17 湖南科瑞特科技股份有限公司 Movable intelligent robot
CN106168805A (en) * 2016-09-26 2016-11-30 湖南晖龙股份有限公司 The method of robot autonomous walking based on cloud computing
CN108345306A (en) * 2018-02-06 2018-07-31 达闼科技(北京)有限公司 Paths planning method, the update method of road information, equipment and storage medium
CN108279679A (en) * 2018-03-05 2018-07-13 华南理工大学 A kind of Intelligent meal delivery robot system and its food delivery method based on wechat small routine and ROS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20201016
Termination date: 20210905