WO2023208084A1 - Robot, startup mode determination method therefor, and data processing device - Google Patents

Robot, startup mode determination method therefor, and data processing device Download PDF

Info

Publication number
WO2023208084A1
WO2023208084A1 (international application PCT/CN2023/091029; CN2023091029W)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
startup mode
mode
photovoltaic panel
panel array
Prior art date
Application number
PCT/CN2023/091029
Other languages
English (en)
French (fr)
Inventor
徐斐
邓龙
Original Assignee
苏州瑞得恩光能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州瑞得恩光能科技有限公司 filed Critical 苏州瑞得恩光能科技有限公司
Publication of WO2023208084A1 publication Critical patent/WO2023208084A1/zh

Links

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00Energy generation through renewable energy sources
    • Y02E10/50Photovoltaic [PV] energy

Definitions

  • This application relates to a robot, its startup mode determination method, and data processing equipment.
  • After a cleaning robot is placed on a photovoltaic panel array, control of the cleaning process falls roughly into two approaches: manual control and automatic control.
  • The manual control approach requires an operator for each robot, which is costly and impractical.
  • The automatic control approach requires presetting fixed instructions so that the robot follows a planned route.
  • Because photovoltaic panels are installed at an incline, the robot's initial position on the panels is generally the lowest point of the photovoltaic panel array, preferably the lower left corner or lower right corner of the array.
  • The robot's traveling path on the panel array depends on its initial position; different initial positions mean different startup and operating modes, and therefore different action instruction sets.
  • In the prior art, workers must manually set the startup mode according to where the robot is placed. This has two drawbacks: the operation is cumbersome and workers must learn it, and if the wrong mode is selected the robot will deviate from its preset path, or choose the wrong direction and fall off the photovoltaic panels.
  • The purpose of the present invention is to provide a robot and a startup mode determination method therefor, so as to solve the technical problem that selecting a robot's startup mode requires complicated operations.
  • The present invention provides a startup mode determination method, which includes the following steps: constructing a startup mode judgment model using a deep learning algorithm; when a robot is placed on a photovoltaic panel array, collecting at least one real-time picture with the cameras on the left and right sides of the robot; inputting the real-time picture into the startup mode judgment model; and determining the startup mode of the robot in an initial state, where the startup mode includes a left startup mode and a right startup mode.
  • This application also provides a data processing device, including: a memory for storing executable program code; and a processor connected to the memory, which runs the computer program corresponding to the executable program code by reading that code, so as to execute the robot startup mode determination method described above.
  • This application also provides a robot including the above data processing device.
  • the robot includes a vehicle body and a camera
  • the vehicle body can travel on the photovoltaic panel array
  • the cameras are arranged on the left and right sides of the vehicle body
  • the camera is used to collect real-time images of the photovoltaic panel array
  • the data processing equipment is installed in the vehicle body and connected to the camera.
  • The technical effect of the present invention is that the cameras collect real-time images of both sides of the vehicle body, the images are fed into a mode judgment model, and the left or right startup mode is then selected, so that once the robot is placed on the photovoltaic panels it automatically selects its startup mode and executes the corresponding control instructions for cleaning. No manual operation or startup-mode button is required, which makes the user's operation simpler and effectively improves work efficiency and operational safety.
  • Figure 1 is a route diagram of the robot of Embodiment 1 traveling on the photovoltaic panel array in the left start mode.
  • Figure 2 is a route diagram of the robot of Embodiment 1 traveling on the photovoltaic panel array in the right start mode.
  • Figure 3 is a side view of the robot according to Embodiment 1.
  • Figure 4 is an overall flow chart of the method for determining the startup mode of the robot in Embodiment 1.
  • FIG. 5 is a flow chart of the steps of constructing a startup mode judgment model using a deep learning algorithm in Embodiment 1.
  • Figure 6 is a flow chart of the steps of collecting two or more training samples in Embodiment 1.
  • Figure 7 is a flow chart of the robot control steps described in Embodiment 1.
  • Figure 8 is a side view of the robot according to Embodiment 2.
  • Figure 9 is a top view of the robot according to Embodiment 2.
  • Figure 10 is an overall flow chart of the method for determining the startup mode of the robot in Embodiment 2.
  • Embodiment 1 of the present application discloses a robot for cleaning the photovoltaic panel array 1 . Since the photovoltaic panels are all tilted, the initial position of the robot placed on the photovoltaic panel is generally at the lowest point of the photovoltaic panel array, preferably at the lower left corner or lower right corner of the array, see Figures 1 and 2.
  • the robot 2 in this embodiment can automatically select the left start mode or the right start mode, without the need for staff to operate and set it, which improves work efficiency.
  • the robot 2 includes a vehicle body 21, a camera 22 and data processing equipment (not shown in the figure).
  • the vehicle body 21 can move on the photovoltaic panel array 1.
  • The front or rear end of the vehicle body 21 is equipped with a cleaning part 23, preferably a roller brush; the data processing device is located inside the vehicle body 21.
  • There are two cameras 22 respectively located on the left and right sides of the vehicle body 21 .
  • the two cameras 22 are respectively used to obtain real-time images of the photovoltaic panel arrays 1 on the left and right sides of the vehicle body 21 and send the image information to the data processing equipment.
  • The data processing device is installed in the vehicle body 21 and connected to the cameras 22. It receives and processes the image information sent by the cameras 22 and selects the left start mode or the right start mode accordingly.
  • the photovoltaic panel array 1 is an inclined array planar structure composed of more than two photovoltaic panels.
  • the photovoltaic panel array 1 has several rows and columns, and the overall shape is generally rectangular or square. Therefore, the photovoltaic panel array 1 has four edge lines, and the four edge lines include: an upper edge line 11 and a lower edge line 12 that are parallel to each other, and a left edge line 13 and a right edge line 14 that are parallel to each other.
  • When the robot 2 is placed on the array, the operator orients the head of the vehicle body 21 toward the upper edge line 11, so the cameras 22 on the left and right sides face the left edge line 13 and the right edge line 14; the data processing device processes the captured images, determines whether the vehicle body 21 is at the lower left corner or the lower right corner of the photovoltaic panel array 1, and then decides whether to execute the left start mode or the right start mode.
  • The four edges of each photovoltaic panel are fitted with a metal frame, which protects the panel and also makes it easy to identify. The left start mode or right start mode is selected according to the judgment result, and the corresponding cleaning plan is executed.
  • the extending direction of the left side line 13 is defined as the first direction X
  • the extending direction of the upper side line 11 is defined as the second direction Y.
  • the photovoltaic panel array 1 is an inclined plane.
  • To improve grip and prevent the robot from sliding off the inclined panels, an adsorption device is usually provided on its bottom surface.
  • During cleaning, dust, debris, and dirty water slide down the panel surface, so to ensure the cleaning effect the robot should start from the lower left corner or lower right corner of the photovoltaic panel array 1, travel straight along the left or right edge line of the array from its lower end to its upper end, then travel across the array along the first direction, make a U-turn to the left or right at the edge of the array, and continue across the array in the opposite direction along the first direction.
  • Each pass the robot's body makes along the first direction is lower than its previous pass along the first direction.
  • Before the robot 2 is placed on the array, the size of the photovoltaic panel array 1 is an unknown parameter for the robot 2. Since arrays differ in size, the robot's optimal route on the panel array, the distance of each lateral pass, the number of U-turns on a given row of panels, and its working mode all differ.
  • The robot 2 therefore needs to automatically calculate, during its travel, the size of each photovoltaic panel 10 in the array, especially the length of each row of photovoltaic panels 10 along the tilt direction, and from these dimensions compute the number and distance of the reciprocating passes required on each row as well as the number and positions of its U-turns, to ensure that every corner of each row of photovoltaic panels 10 is cleaned.
  • the panel array area that the vehicle body passes each time it travels in the first direction can be defined as a cleaning channel.
  • The width of each cleaning channel is equal to or smaller than the width of the cleaning part 23.
  • When the vehicle body travels within a cleaning channel, the cleaning part 23 may extend beyond that channel.
  • The photovoltaic panels 10 in a same row can be divided into N cleaning channels extending in the first direction, running from the left side of the panel array to its right side, where N is the integer part of the quotient of the row's length on the inclined surface divided by the width of the cleaning part 23, plus one. For example, if a row of panels is 2 meters long and the roller brush is 0.7 meters wide, then N is 3 and the row of photovoltaic panels 10 must be divided into three lateral cleaning channels; the cleaning robot must make three back-and-forth passes to sweep the entire row without leaving any dead spots.
  • the three cleaning channels are the first channel, the second channel and the third channel from top to bottom.
  • the width of the three cleaning channels can be set to 0.7 meters, 0.6 meters and 0.7 meters respectively.
  • This embodiment also provides a method for determining the startup mode of the above-mentioned robot, as shown in Figure 4, which specifically includes the following steps S100-S400.
  • Step S100 Use a deep learning algorithm to build a startup mode judgment model.
  • Step S100 includes the following steps: S110, collect two or more training samples, each training sample including a picture and each picture having a label; S120, group the training samples, dividing pictures with the same label into the same group; S130, input the grouped training samples into a convolutional neural network model and train it with a convolutional neural network algorithm to obtain a classifier, that is, the startup mode judgment model.
  • This embodiment preferably uses the Caffe deep learning framework to train the model.
  • Step S110 specifically includes the following steps: S111, place a robot 2 at the lower left corner of a photovoltaic panel array 1 multiple times; S112, after each placement, collect at least one first picture with the cameras 22 of the robot 2; S113, set a first label for each first picture, the first label corresponding to the left start mode; S114, place a robot 2 at the lower right corner of a photovoltaic panel array 1 multiple times; S115, after each placement, collect at least one second picture with the cameras 22 of the robot 2; S116, set a second label for each second picture, the second label corresponding to the right start mode.
  • In steps S112 and S115, the two cameras on the left and right sides of the robot both collect a large number of pictures in real time, generally more than 5,000; each picture is given a label corresponding to the left start mode or the right start mode. In other embodiments, steps S114-S116 may be performed before steps S111-S113.
  • Step S200: when a robot 2 is placed on a photovoltaic panel array 1, at least one real-time picture is collected with the cameras 22 on the left and right sides of the robot 2; the two cameras 22, located on the left and right sides of the vehicle body 21, acquire real-time images of the photovoltaic panel array 1 on each side of the vehicle body 21.
  • When the robot is placed at the lower left corner of the panel array with its front facing the upper edge of the array, its left camera cannot capture the panel array, but its right camera can capture the portion of the array on the right side of the vehicle body.
  • Similarly, when the robot is placed at the lower right corner of the panel array with its front facing the upper edge, its left camera can capture the portion of the array on the left side of the vehicle body.
  • Step S300: the cameras 22 send the image information to the data processing device, which inputs the real-time picture into the startup mode judgment model; when a new picture is fed into the model, the model outputs a label type, so it can determine on its own whether the label corresponding to the new picture is the first label or the second label.
  • Step S400: determine the startup mode of the robot 2 in the initial state.
  • The startup mode includes a left startup mode and a right startup mode. Since the first label corresponds to the left startup mode and the second label to the right startup mode, the computer can determine the robot 2's startup mode in the initial state from the label type obtained in the previous step. Once the startup mode is determined, the robot can execute the corresponding control instructions on its own, travel on the photovoltaic panel array along the pre-planned preferred path, and clean as it travels; the robot must clean every corner of the panel array while minimizing repeated routes.
  • After steps S100-S400, the robot startup mode determination method described in this embodiment can also execute robot control steps, including the following steps S910-S950.
  • Step S910: when the robot 2 reaches the upper edge of the photovoltaic panel array 1, control the robot 2 to make a right-angle turn on the spot; if the startup mode is the left startup mode, control the robot 2 to turn right at a right angle; if the startup mode is the right startup mode, control the robot 2 to turn left at a right angle.
  • Step S920, the straight-travel control step: control the vehicle body 21 to travel straight along the extension direction of the upper edge of the photovoltaic panel array.
  • Step S930, the U-turn control step: when the front end of the robot 2 reaches the left edge line 13 or the right edge line 14 of the photovoltaic panel array 1, determine whether the cleaning task is complete; if so, execute step S940; if not, execute step S950.
  • Step S940 Control the robot 2 to stop traveling.
  • Step S950 Control the vehicle body 21 to turn left or right in a U-shaped turn, and return to the straight-running control step S920.
  • The purpose of steps S910-S950 is to achieve reciprocating cleaning of the photovoltaic panel array 1 by the vehicle body 21.
  • After each U-turn, the new cleaning trajectory is adjacent to, or partially overlaps, the previous straight cleaning trajectory.
  • the robot travels on the photovoltaic panel array according to the pre-planned preferred path, and cleans simultaneously while traveling. The robot needs to clean every corner of the panel array without leaving any dead corners.
  • This embodiment also includes a data processing device, which includes a memory and a processor.
  • The memory stores executable program code; the processor, connected to the memory, reads the executable program code and runs the corresponding computer program to implement at least one step of the robot startup mode determination method described above.
  • the memory can also be used to store control instruction sets corresponding to the left start mode and the right start mode. The actions performed by controlling the robot in steps S910-S950 are all implemented through these control instruction sets.
  • The technical effect of this embodiment is that the cameras collect real-time images of both sides of the vehicle body, the images are fed into a mode judgment model, and the left or right startup mode is then selected, so that once the robot is placed on the photovoltaic panels it automatically selects its startup mode, executes the corresponding control instruction set, and cleans automatically. No manual operation or startup-mode button is required, which makes the user's operation simpler and effectively improves work efficiency and operational safety.
  • Embodiment 2 includes all the technical solutions in Embodiment 1.
  • The robot provided in Embodiment 2 includes the vehicle body 21, cameras 22, cleaning part 23, and data processing device of Embodiment 1, and further includes metal sensors 24 arranged on the left and right sides of the bottom of the vehicle body, or fixed to the side walls of the vehicle body and extending to its bottom surface; when the distance between a metal sensor and a metal frame is less than or equal to a preset threshold, that metal sensor generates and sends a signal to the data processing device.
  • For example, when the robot is placed at the lower left corner of the panel array, the metal sensor on the left side of the vehicle body senses the left frame and generates an electrical signal, while the metal sensor on the right side of the vehicle body sits over the photovoltaic panel and generates no electrical signal. Therefore, when the metal sensor on the left side of the vehicle body produces an electrical signal and the sensor on the right side does not, the computer can determine that the robot's initial position is the lower left corner of the panel array and its startup mode is the left startup mode.
  • Similarly, when the metal sensor on the right side produces an electrical signal and the sensor on the left side does not, the computer can determine that the robot's initial position is the lower right corner of the panel array and its startup mode is the right startup mode.
  • The robot can determine its startup mode using either the cameras 22 or the metal sensors 24. However, each method has a certain probability of error in actual operation; if only one method of judgment is used, the judgment may be wrong, causing the robot to fall from the panels and creating a safety accident.
  • This embodiment therefore combines the two judgment results according to preset weights to obtain a corrected judgment result, making the startup mode determination more accurate.
  • Embodiment 2 includes all the technical solutions of Embodiment 1. Another difference is that, as shown in Figure 10, Embodiment 2 provides a robot startup mode determination method which, in addition to steps S100-S400 of Embodiment 1, also includes the following steps S500-S800.
  • Step S600: use the metal sensors 24 to determine again the startup mode of the robot 2 in the initial state and output a second group parameter.
  • Step S600 of determining the startup mode again with the metal sensors 24 includes the following steps S610-S630.
  • Step S610: install one metal sensor 24 on each of the left and right sides of the bottom of the robot 2, defined as the left sensor and the right sensor.
  • Step S620: when a robot 2 is placed on a photovoltaic panel array 1, the robot 2 synchronously collects the electrical signals generated by the left sensor and the right sensor. Step S630: when the left sensor has a signal and the right sensor has no signal, the startup mode of the robot 2 in the initial state is determined to be the left startup mode; when the left sensor has no signal and the right sensor has a signal, the startup mode of the robot 2 in the initial state is determined to be the right startup mode.
  • When the vehicle body 21 is close to the left edge line 13 or the right edge line 14, the metal frame at the edge is low and lies in the blind spot of the camera sensor, so no metal frame appears in the images acquired by the camera and the startup mode cannot be identified through the judgment model.
  • In that case the metal sensor 24 generates and sends an electrical signal to the data processing device.
  • The metal sensor 24 has a small detection range, generally a few tens of millimeters, and high sensitivity. Its electrical signal indicates that the vehicle body 21 is close to a metal-framed edge of the photovoltaic panel array 1; since these frame-position collectors are arranged on both sides of the vehicle body 21, when the data processing device receives an electrical signal from a metal sensor 24 it means that the vehicle body 21 is already close to the left edge line 13 or the right edge line 14 of the photovoltaic panel array 1.
  • step S800 Re-determine the startup mode of the robot 2 in the initial state according to the result of the correction parameter.
  • the judgment result in step S800 is more accurate than the judgment result in step S400, effectively reducing the probability of an accident.
  • After steps S100-S800, the robot startup mode determination method described in Embodiment 2 also includes steps S910-S950 described in Embodiment 1, which are not repeated here.
  • the camera 22 and the metal sensor 24 are both installed in the robot 2.
  • Although either one alone could determine the startup mode of the vehicle body, the robot has a certain error probability in actual operation; if only one method of judgment is used, the judgment may be wrong, causing the robot to fall from the panels and creating a safety accident.
  • The robot 2 of this embodiment therefore outputs one mode judgment result through the cameras 22 and the mode judgment model, outputs another mode judgment result through the metal sensors 24, and then combines the two results according to preset weights to obtain a correction parameter, from which the startup mode is selected automatically, making the startup mode determination more accurate. Once placed on the photovoltaic panels, the robot automatically selects its startup mode and cleans without manual operation or a startup-mode button, which improves work efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Electromagnetism (AREA)
  • Mathematical Physics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)
  • Camera Bodies And Camera Details Or Accessories (AREA)
  • Numerical Control (AREA)

Abstract

This application discloses a robot, a startup mode determination method therefor, and a data processing device. The robot uses its cameras and a judgment model to select a left startup mode or a right startup mode, so that once it is placed on the photovoltaic panels it can automatically select its startup mode and clean the panel array.

Description

Robot, startup mode determination method therefor, and data processing device
This application claims priority to Chinese patent application No. 202210469186.X, titled "Robot, startup mode determination method therefor, and data processing device" and filed with the Chinese Patent Office on April 30, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to a robot, a startup mode determination method therefor, and a data processing device.
Background Art
After a cleaning robot is placed on a photovoltaic panel array, control of the cleaning process falls roughly into two approaches: manual control and automatic control. Manual control requires an operator for each robot, which is costly and impractical. Automatic control requires presetting fixed instructions so that the robot follows a planned route.
Because photovoltaic panels are installed at an incline, the robot's initial position on the panels is generally the lowest point of the photovoltaic panel array, preferably the lower left corner or lower right corner of the array. However, the robot's traveling path on the panel array depends on this initial position; different initial positions mean different startup and operating modes and different action instruction sets. In the prior art, workers must manually set the startup mode according to where the robot is placed. This has two drawbacks: the operation is cumbersome and workers must learn it, and if the wrong mode is selected the robot will go astray and deviate from the preset path, or choose the wrong direction and fall off the photovoltaic panels.
Therefore, after the robot is placed on the photovoltaic panels, an operating mode determination method is needed to obtain the robot's position on the photovoltaic panel array, determine its operating mode, and select the appropriate instruction set.
Summary of the Invention
The purpose of the present invention is to provide a robot and a startup mode determination method therefor, so as to solve the technical problem that selecting a robot's startup mode requires complicated operations.
To achieve the above purpose, the present invention provides a startup mode determination method comprising the following steps: constructing a startup mode judgment model using a deep learning algorithm; when a robot is placed on a photovoltaic panel array, collecting at least one real-time picture with the cameras on the left and right sides of the robot; inputting the real-time picture into the startup mode judgment model; and determining the startup mode of the robot in an initial state, the startup mode including a left startup mode and a right startup mode.
This application also provides a data processing device, including: a memory for storing executable program code; and a processor connected to the memory, which runs the computer program corresponding to the executable program code by reading that code, so as to execute the robot startup mode determination method described above.
This application also provides a robot including the above data processing device.
Further, the robot includes a vehicle body able to travel on the photovoltaic panel array, and cameras arranged on the left and right sides of the vehicle body to collect real-time images of the photovoltaic panel array; the data processing device is installed in the vehicle body and connected to the cameras.
The technical effect of the present invention is that the cameras collect real-time images of both sides of the vehicle body, the images are fed into a mode judgment model, and the left or right startup mode is then selected, so that once the robot is placed on the photovoltaic panels it automatically selects its startup mode and executes the corresponding control instructions for cleaning. No manual operation or startup-mode button is needed, which makes the user's operation simpler and effectively improves work efficiency and operational safety.
Brief Description of the Drawings
The technical solutions and other beneficial effects of this application will become apparent from the following detailed description of its specific embodiments with reference to the accompanying drawings.
Figure 1 is a route diagram of the robot of Embodiment 1 traveling on the photovoltaic panel array in the left startup mode.
Figure 2 is a route diagram of the robot of Embodiment 1 traveling on the photovoltaic panel array in the right startup mode.
Figure 3 is a side view of the robot of Embodiment 1.
Figure 4 is an overall flowchart of the robot startup mode determination method of Embodiment 1.
Figure 5 is a flowchart of the step of constructing a startup mode judgment model using a deep learning algorithm in Embodiment 1.
Figure 6 is a flowchart of the step of collecting two or more training samples in Embodiment 1.
Figure 7 is a flowchart of the robot control steps of Embodiment 1.
Figure 8 is a side view of the robot of Embodiment 2.
Figure 9 is a top view of the robot of Embodiment 2.
Figure 10 is an overall flowchart of the robot startup mode determination method of Embodiment 2.
Detailed Description of the Embodiments
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.
The following disclosure provides many different implementations or examples for realizing different structures of this application. To simplify the disclosure, components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit this application. In addition, this application may repeat reference numerals and/or reference letters in different examples; such repetition is for simplicity and clarity and does not itself indicate a relationship between the various implementations and/or arrangements discussed. Furthermore, this application gives examples of various specific processes and materials, but those of ordinary skill in the art will recognize that other processes and/or other materials may be used.
Embodiment 1
Embodiment 1 of this application discloses a robot for cleaning a photovoltaic panel array 1. Because photovoltaic panels are installed at an incline, the robot's initial position on the panels is generally the lowest point of the photovoltaic panel array, preferably the lower left corner or lower right corner of the array, see Figures 1 and 2. The robot 2 of this embodiment can automatically select the left startup mode or the right startup mode without operator setup, which improves work efficiency.
As shown in Figure 3, the robot 2 includes a vehicle body 21, cameras 22, and a data processing device (not shown). The vehicle body 21 can move on the photovoltaic panel array 1, and its front or rear end carries a cleaning part 23, preferably a roller brush; the data processing device is located inside the vehicle body 21. There are two cameras 22, one on each of the left and right sides of the vehicle body 21, which acquire real-time images of the photovoltaic panel array 1 on the two sides of the vehicle body 21 and send the image information to the data processing device. The data processing device, installed in the vehicle body 21 and connected to the cameras 22, receives and processes the image information sent by the cameras 22 and selects the left startup mode or the right startup mode accordingly.
As shown in Figures 1 and 2, the photovoltaic panel array 1 is an inclined, array-like planar structure composed of two or more photovoltaic panels, with several rows and columns and an overall shape that is generally rectangular or square. The array 1 therefore has four edge lines: an upper edge line 11 and a lower edge line 12 parallel to each other, and a left edge line 13 and a right edge line 14 parallel to each other. When the robot 2 is placed on the photovoltaic panel array 1, the operator orients the head of the vehicle body 21 toward the upper edge line 11, so the cameras 22 on the left and right sides face the left edge line 13 and the right edge line 14 and acquire images containing them. The data processing device processes the images, determines whether the vehicle body 21 is at the lower left corner or the lower right corner of the photovoltaic panel array 1, and accordingly decides whether to execute the left startup mode or the right startup mode. The four edges of each photovoltaic panel are fitted with a metal frame, which protects the panel and also makes it easy to identify. The left or right startup mode is selected according to the judgment result, and the corresponding cleaning plan is executed. For ease of description, the extension direction of the left edge line 13 is defined as a first direction X, and the extension direction of the upper edge line 11 is defined as a second direction Y.
The photovoltaic panel array 1 is an inclined plane, so to improve grip and prevent the robot 2 from sliding off the panels, an adsorption device is usually provided on its bottom surface. However, there is usually a gap between two adjacent rows of panels; if that gap stays directly beneath the robot 2 as it travels, the adsorption device fails. Therefore, to prevent sliding, the robot 2 must not travel over the gap between two rows of photovoltaic panels 10 while cleaning a row.
During cleaning, dust, debris, and dirty water slide down the panel surface. To ensure the cleaning effect, the robot should start from the lower left corner or lower right corner of the photovoltaic panel array 1, travel straight along the left or right edge line of the array from its lower end to its upper end, then travel across the array along the first direction, make a U-turn to the left or right at the edge of the array, and continue across the array in the opposite direction along the first direction. Each pass the vehicle body makes along the first direction is lower than its previous pass along the first direction.
Before the robot 2 is placed on the photovoltaic panel array 1, the size of the array is an unknown parameter to the robot 2. Because arrays differ in size, the robot's optimal route on the panel array, the distance of each lateral pass, the number of U-turns on a given row of panels, and its working mode all differ. The robot 2 therefore needs to automatically calculate, while traveling, the size of each photovoltaic panel 10 in the array, especially the length of each row of panels 10 along the tilt direction, and from these dimensions compute the number and distance of the reciprocating passes required on each row as well as the number and positions of its U-turns, so that every corner of every row of photovoltaic panels 10 is cleaned.
The panel array region that the vehicle body passes over during each pass in the first direction can be defined as a cleaning channel. The width of each cleaning channel is equal to or smaller than the width of the cleaning part 23; when the vehicle body travels within a cleaning channel, the cleaning part 23 may extend beyond that channel.
The at least one photovoltaic panel 10 in a same row can be divided into N cleaning channels extending in the first direction, running from the left side of the panel array to its right side, where N is the integer part of the quotient of the row's length on the inclined surface divided by the width of the cleaning part 23, plus one. For example, if a row of panels is 2 meters long and the roller brush is 0.7 meters wide, N is 3, so the row of photovoltaic panels 10 must be divided into three lateral cleaning channels and the cleaning robot must make three back-and-forth passes to sweep the entire row without leaving any dead spots. From top to bottom the three channels are the first, second, and third channels, and their widths may be set to 0.7 m, 0.6 m, and 0.7 m respectively; when the vehicle body travels in the middle second channel, the edges of its roller brush extend into the first and third channels.
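The channel count and width split described in the preceding paragraph can be reproduced with a few lines of code. The sketch below is a non-limiting illustration: it assumes the splitting rule stated above (integer part of the quotient plus one) and, as an additional assumption, absorbs the slack in the middle channel; the function name is hypothetical.

```python
def plan_cleaning_channels(row_length_m: float, brush_width_m: float):
    """Split one row of panels (measured along the tilt) into cleaning channels.

    N = integer part of (row length / brush width) + 1, as stated in the text.
    How the leftover overlap is distributed among the channels is an
    illustrative assumption (here it is taken out of the middle channel).
    """
    n = int(row_length_m // brush_width_m) + 1
    slack = n * brush_width_m - row_length_m      # total overlap to absorb
    widths = [brush_width_m] * n
    widths[n // 2] -= slack                       # shrink the middle channel
    return n, [round(w, 3) for w in widths]

# Worked example from the text: a 2 m row and a 0.7 m roller brush.
print(plan_cleaning_channels(2.0, 0.7))           # -> (3, [0.7, 0.6, 0.7])
```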
In this scheme, when the vehicle body cleans the photovoltaic panel array 1, the set of control instructions it subsequently executes differs depending on whether its initial position on the array is the lower left corner or the lower right corner. Having the robot judge its initial position on the array and select the startup mode accordingly, that is, choose the left startup mode or the right startup mode, is therefore a key technical problem.
This embodiment also provides a startup mode determination method for the above robot which, as shown in Figure 4, specifically includes the following steps S100-S400.
Step S100: construct a startup mode judgment model using a deep learning algorithm. As shown in Figure 5, step S100 includes the following steps: S110, collect two or more training samples, each training sample including a picture and each picture having a label; S120, group the training samples, dividing pictures with the same label into the same group; S130, input the grouped training samples into a convolutional neural network model and train it with a convolutional neural network algorithm to obtain a classifier, i.e. the startup mode judgment model. This embodiment preferably uses the Caffe deep learning framework to train the model.
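As one possible illustration of steps S110-S130, the sketch below trains a small two-class convolutional classifier on pictures grouped by label. The embodiment prefers the Caffe framework; PyTorch is used here purely for illustration, and the dataset layout, image size, and hyper-parameters are assumptions rather than details taken from this application.

```python
# Illustrative sketch only: a two-class startup-mode classifier (steps S110-S130).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# S110/S120: pictures labelled "left_start" or "right_start", grouped one folder per label.
tfm = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
train_set = datasets.ImageFolder("training_samples/", transform=tfm)  # hypothetical path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# S130: a small convolutional network acting as the startup-mode judgment model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 2),   # two outputs: left / right startup mode
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "startup_mode_model.pt")
```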
As shown in Figure 6, step S110 specifically includes the following steps: S111, place a robot 2 at the lower left corner of a photovoltaic panel array 1 multiple times; S112, after each placement, collect at least one first picture with the cameras 22 of the robot 2; S113, set a first label for each first picture, the first label corresponding to the left startup mode; S114, place a robot 2 at the lower right corner of a photovoltaic panel array 1 multiple times; S115, after each placement, collect at least one second picture with the cameras 22 of the robot 2; S116, set a second label for each second picture, the second label corresponding to the right startup mode. In steps S112 and S115 the two cameras on the left and right sides of the robot both collect a large number of pictures in real time, generally more than 5,000, and each picture is given a label corresponding to the left or right startup mode. In other embodiments, steps S114-S116 may be performed before steps S111-S113.
Step S200: when a robot 2 is placed on a photovoltaic panel array 1, collect at least one real-time picture with the cameras 22 on the left and right sides of the robot 2. The two cameras 22, located on the left and right sides of the vehicle body 21, acquire real-time images of the photovoltaic panel array 1 on each side of the vehicle body 21. When the robot is placed at the lower left corner of the panel array with its front facing the upper edge of the array, its left camera cannot capture the panel array, but its right camera can capture the portion of the array on the right side of the vehicle body. Similarly, when the robot is placed at the lower right corner with its front facing the upper edge, its left camera can capture the portion of the array on the left side of the vehicle body.
Step S300: the cameras 22 send the image information to the data processing device, which inputs the real-time picture into the startup mode judgment model. When a new picture is fed into the startup mode judgment model, the model outputs a label type, so it can determine on its own whether the label corresponding to the new picture is the first label or the second label.
Step S400: determine the startup mode of the robot 2 in the initial state, the startup mode including the left startup mode and the right startup mode. Since the first label corresponds to the left startup mode and the second label to the right startup mode, the computer can determine the robot's startup mode in the initial state from the label type obtained in the previous step. Once the startup mode is determined, the robot can execute the corresponding control instructions on its own, travel on the photovoltaic panel array along the pre-planned preferred path, and clean as it travels; the robot must clean every corner of the panel array while keeping repeated routes to a minimum.
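A minimal sketch of the inference side of steps S200-S400 follows, reusing the hypothetical classifier and transform from the training sketch above; how the camera frames are obtained, and the majority vote over several frames, are illustrative assumptions.

```python
# Minimal inference sketch for steps S200-S400 (illustrative only).
from PIL import Image
import torch

LABELS = {0: "left_start_mode", 1: "right_start_mode"}   # first label / second label

def determine_startup_mode(model, frame_paths, tfm):
    """Feed real-time pictures from both cameras to the judgment model and
    return the startup mode corresponding to the predicted label."""
    model.eval()
    votes = []
    with torch.no_grad():
        for path in frame_paths:
            x = tfm(Image.open(path).convert("RGB")).unsqueeze(0)
            votes.append(model(x).argmax(dim=1).item())
    label = max(set(votes), key=votes.count)   # majority label over the frames
    return LABELS[label]

# Example (hypothetical file): mode = determine_startup_mode(model, ["right_camera.jpg"], tfm)
```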
As shown in Figure 7, the robot startup mode determination method of this embodiment may further execute robot control steps after steps S100-S400, including the following steps S910-S950.
Step S910: when the robot 2 reaches the upper edge line of the photovoltaic panel array 1, control the robot 2 to make a right-angle turn on the spot; if the startup mode is the left startup mode, control the robot 2 to turn right at a right angle; if the startup mode is the right startup mode, control the robot 2 to turn left at a right angle.
Step S920, the straight-travel control step: control the vehicle body 21 to travel straight along the extension direction of the upper edge line of the photovoltaic panel array.
Step S930, the U-turn control step: when the front end of the robot 2 reaches the left edge line 13 or the right edge line 14 of the photovoltaic panel array 1, determine whether the cleaning task is complete; if so, execute step S940; if not, execute step S950.
Step S940: control the robot 2 to stop traveling.
Step S950: control the vehicle body 21 to make a U-turn to the left or right, then return to the straight-travel control step S920.
The purpose of steps S910-S950 is to achieve reciprocating cleaning of the photovoltaic panel array 1 by the vehicle body 21; after each U-turn, the new cleaning trajectory is adjacent to, or partially overlaps, the previous straight cleaning trajectory. The robot travels on the photovoltaic panel array along the pre-planned preferred path and cleans as it travels; it must clean every corner of the panel array without leaving any dead corners.
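Steps S910-S950 can be summarized as a simple state machine. The sketch below is illustrative only: the robot's actuator and edge-detection interfaces (drive_forward, at_upper_edge, u_turn, and so on) are hypothetical placeholders rather than interfaces defined in this application, and the assumed first U-turn direction is marked as such in the comments.

```python
# Hedged sketch of the control flow in steps S910-S950 (hypothetical robot interface).
def reciprocating_clean(robot, startup_mode: str):
    # Travel up along the side edge line until the upper edge line is reached.
    while not robot.at_upper_edge():
        robot.drive_forward()

    # S910: right-angle turn on the spot, direction chosen by the startup mode.
    if startup_mode == "left_start_mode":
        robot.turn_right_90()
        next_u_turn = "right"      # first U-turn direction: an assumption
    else:
        robot.turn_left_90()
        next_u_turn = "left"

    while True:
        # S920: straight travel along the direction of the upper edge line.
        while not (robot.at_left_edge() or robot.at_right_edge()):
            robot.drive_forward()

        # S930: at the left/right edge line, check whether cleaning is finished.
        if robot.cleaning_task_complete():
            robot.stop()           # S940
            return

        # S950: U-turn so the new pass lies beside the previous one, then repeat.
        robot.u_turn(next_u_turn)
        next_u_turn = "left" if next_u_turn == "right" else "right"
```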
This embodiment also includes a data processing device comprising a memory and a processor. The memory stores executable program code; the processor, connected to the memory, reads the executable program code and runs the corresponding computer program to implement at least one step of the above robot startup mode determination method. The memory may also store the control instruction sets corresponding to the left and right startup modes; the robot actions in steps S910-S950 are all carried out through these control instruction sets.
The technical effect of this embodiment is that the cameras collect real-time images of both sides of the vehicle body, the images are fed into a mode judgment model, and the left or right startup mode is then selected, so that once the robot is placed on the photovoltaic panels it automatically selects its startup mode, executes the corresponding control instruction set, and cleans automatically. No manual operation or startup-mode button is needed, which makes the user's operation simpler and effectively improves work efficiency and operational safety.
Embodiment 2
Embodiment 2 includes all of the technical solutions of Embodiment 1. One difference is that, as shown in Figures 8-9, the robot of Embodiment 2 includes the vehicle body 21, cameras 22, cleaning part 23, and data processing device of Embodiment 1, and further includes metal sensors 24 arranged on the left and right sides of the bottom of the vehicle body, or fixed to the side walls of the vehicle body and extending to its bottom surface; when the distance between a metal sensor and a metal frame is less than or equal to a preset threshold, that metal sensor generates and sends a signal to the data processing device.
For example, when the robot is placed at the lower left corner of the panel array, the metal sensor on the left side of the vehicle body senses the left frame and generates an electrical signal, while the metal sensor on the right side of the vehicle body sits over the photovoltaic panel and generates no electrical signal. Therefore, when the metal sensor on the left side of the vehicle body produces an electrical signal and the sensor on the right side does not, the computer can determine that the robot's initial position is the lower left corner of the panel array and its startup mode is the left startup mode. Similarly, when the metal sensor on the right side produces an electrical signal and the sensor on the left side does not, the computer can determine that the robot's initial position is the lower right corner of the panel array and its startup mode is the right startup mode.
The robot can determine its startup mode using either the cameras 22 or the metal sensors 24. However, each method has a certain probability of error in actual operation; if only one method of judgment is used, the judgment may be wrong, causing the robot to fall from the panels and creating a safety accident. This embodiment therefore combines the two judgment results according to preset weights to obtain a corrected judgment result, making the startup mode determination more accurate.
Embodiment 2 includes all of the technical solutions of Embodiment 1. Another difference is that, as shown in Figure 10, Embodiment 2 provides a robot startup mode determination method which, in addition to steps S100-S400 of Embodiment 1, includes the following steps S500-S800.
S500: in step S400, when determining the startup mode of the robot 2 in the initial state, the data processing device may output the judgment result of the startup mode judgment model as a first group parameter S1; when the startup mode is the left startup mode, the first group parameter is defined as S1 = 0, and when it is the right startup mode, S1 = 1.
S600: use the metal sensors 24 to determine again the startup mode of the robot 2 in the initial state and output a second group parameter. As shown in Figure 8, step S600 of determining the startup mode again with the metal sensors 24 includes the following steps S610-S630. Step S610: install one metal sensor 24 on each of the left and right sides of the bottom of the robot 2, defined as the left sensor and the right sensor. Step S620: when a robot 2 is placed on a photovoltaic panel array 1, synchronously collect the electrical signals generated by the left sensor and the right sensor. Step S630: when the left sensor has a signal and the right sensor has none, determine that the startup mode of the robot 2 in the initial state is the left startup mode; when the left sensor has no signal and the right sensor has one, determine that the startup mode is the right startup mode.
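Steps S610-S630 amount to a simple mapping from the two sensor signals to the second group parameter; a minimal sketch follows, in which the handling of an ambiguous reading (both or neither sensor firing) is an added assumption not covered by this application.

```python
# Minimal sketch of steps S610-S630: map the metal-sensor signals to the
# second group parameter S2 (0 = left startup mode, 1 = right startup mode).
def metal_sensor_mode(left_signal: bool, right_signal: bool):
    if left_signal and not right_signal:
        return 0        # left startup mode
    if right_signal and not left_signal:
        return 1        # right startup mode
    return None         # ambiguous reading; not covered by the application (assumption)
```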
When the startup mode is the left startup mode, the second group parameter is defined as S2 = 0; when it is the right startup mode, S2 = 1. When the vehicle body 21 is close to the left edge line 13 or the right edge line 14, the metal frame at the edge is low and lies in the blind spot of the camera sensor, so no metal frame appears in the images acquired by the camera and the startup mode cannot be identified through the judgment model. The metal sensor 24, which generates and sends electrical signals to the data processing device, has a small detection range, generally a few tens of millimeters, and high sensitivity. Its electrical signal indicates that the vehicle body 21 is close to a metal-framed edge of the photovoltaic panel array 1; since these frame-position collectors are arranged on both sides of the vehicle body 21, when the data processing device receives an electrical signal from a metal sensor 24 it means that the vehicle body 21 is already close to the left edge line 13 or the right edge line 14 of the photovoltaic panel array 1.
S700: calculate a correction parameter S from the first group parameter and the second group parameter, S = K1*S1 + K2*S2, where K1 and K2 are preset weight coefficients and K1 + K2 = 1. When the correction parameter S approaches 0, the startup mode is judged to be the left startup mode; when S approaches 1, it is judged to be the right startup mode. Subtracting 0 from the correction parameter S gives a first difference S - 0; if this difference is smaller than a preset threshold, for example 0.1 or 0.2, S can be judged to approach 0. Similarly, subtracting S from 1 gives a second difference 1 - S; if this difference is smaller than a preset threshold, for example 0.1 or 0.2, S can be judged to approach 1. In this way it can be decided whether S approaches 0 or approaches 1. The correction parameter avoids misjudgments caused by the error of a single judgment mode, so the result is more accurate.
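Step S700 can be expressed directly in code. The sketch below assumes equal weights K1 = K2 = 0.5 (the application only requires preset coefficients summing to 1) and uses 0.2, one of the example thresholds given above; returning an undecided result when neither difference is small enough is an added assumption.

```python
# Sketch of step S700: fuse the camera-based result S1 and the sensor-based
# result S2 into the correction parameter S = K1*S1 + K2*S2, then threshold it.
def fuse_startup_mode(s1: int, s2: int, k1: float = 0.5, k2: float = 0.5,
                      threshold: float = 0.2):
    assert abs(k1 + k2 - 1.0) < 1e-9          # K1 + K2 = 1 as required by the text
    s = k1 * s1 + k2 * s2                     # correction parameter S
    if s - 0 < threshold:                     # first difference S - 0 close to 0
        return "left_start_mode"
    if 1 - s < threshold:                     # second difference 1 - S close to 0
        return "right_start_mode"
    return None                               # undecided (assumption)

# Example: camera says right (S1 = 1), sensors say right (S2 = 1) -> S = 1.0 -> right startup mode.
```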
S800: re-determine the startup mode of the robot 2 in the initial state according to the correction parameter. The judgment result of step S800 is more accurate than that of step S400, effectively reducing the probability of accidents.
After steps S100-S800, the robot startup mode determination method of Embodiment 2 also includes steps S910-S950 described in Embodiment 1, which are not repeated here.
In Embodiment 2, the robot 2 is provided with both the cameras 22 and the metal sensors 24. Although either one alone could determine the startup mode of the vehicle body, the robot has a certain error probability in actual operation; if only one method is used, the judgment may be wrong, causing the robot to fall off the panels and create a safety accident.
The robot 2 of this embodiment outputs one mode judgment result through the cameras 22 and the mode judgment model, outputs another through the metal sensors 24, and then combines the two results according to preset weights to obtain a correction parameter, from which the startup mode is selected automatically, making the determination more accurate. Once placed on the photovoltaic panels, the robot automatically selects its startup mode and begins cleaning without manual operation or a startup-mode button, improving work efficiency.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
The robot, its startup mode determination method, and the data processing device provided by the embodiments of this application have been described in detail above. Specific examples are used herein to explain the principles and implementations of this application, and the above description of the embodiments is only intended to help understand its technical solutions and core ideas. Those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of their technical features, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (12)

  1. A robot startup mode determination method, comprising the following steps:
    constructing a startup mode judgment model using a deep learning algorithm;
    when a robot is placed on a photovoltaic panel array, collecting at least one real-time picture with the cameras on the left and right sides of the robot;
    inputting the real-time picture into the startup mode judgment model; and
    determining the startup mode of the robot in an initial state, the startup mode including a left startup mode and a right startup mode.
  2. The robot startup mode determination method according to claim 1, wherein
    the step of constructing a startup mode judgment model using a deep learning algorithm comprises the following steps:
    collecting two or more training samples, each training sample including a picture and each picture having a label;
    grouping the training samples, dividing pictures with the same label into the same group; and
    inputting the grouped training samples into a convolutional neural network model for training to obtain a startup mode judgment model.
  3. The robot startup mode determination method according to claim 2, wherein
    the step of collecting two or more training samples comprises the following steps:
    placing a robot at the lower left corner of a photovoltaic panel array multiple times;
    after each placement, collecting at least one first picture with the robot's cameras; and
    setting a first label for each first picture, the first label corresponding to the left startup mode.
  4. The robot startup mode determination method according to claim 2, wherein
    the step of collecting two or more training samples comprises the following steps:
    placing a robot at the lower right corner of a photovoltaic panel array multiple times;
    after each placement, collecting at least one second picture with the robot's cameras; and
    setting a second label for each second picture, the second label corresponding to the right startup mode.
  5. The robot startup mode determination method according to claim 1, further comprising the following steps:
    outputting a first group parameter when determining the startup mode of the robot in the initial state;
    determining the startup mode of the robot in the initial state with metal sensors and outputting a second group parameter;
    calculating a correction parameter S from the first group parameter and the second group parameter, S = K1*S1 + K2*S2, where K1 and K2 are preset weight coefficients and K1 + K2 = 1; and
    re-determining the startup mode of the robot in the initial state according to the correction parameter.
  6. The robot startup mode determination method according to claim 5, wherein
    determining the startup mode of the robot in the initial state again with metal sensors comprises the following steps:
    providing one metal sensor on each of the left and right sides of the bottom of the robot, defined as a left sensor and a right sensor;
    when a robot is placed on a photovoltaic panel array, synchronously collecting the electrical signals generated by the left sensor and the right sensor; and
    when the left sensor has a signal and the right sensor has no signal, determining that the startup mode of the robot in the initial state is the left startup mode; when the left sensor has no signal and the right sensor has a signal, determining that the startup mode of the robot in the initial state is the right startup mode.
  7. The robot startup mode determination method according to claim 1, further comprising, after the step of determining the startup mode of the robot in the initial state, the following steps:
    if the startup mode is the left startup mode, controlling the robot to turn right at a right angle when the robot reaches the upper edge of the photovoltaic panel array; if the startup mode is the right startup mode, controlling the robot to turn left at a right angle when the robot reaches the upper edge of the photovoltaic panel array;
    a straight-travel control step of controlling the vehicle body to travel straight along the extension direction of the upper edge of the photovoltaic panel array; and
    a U-turn control step of, when the front end of the robot reaches the left edge line or the right edge line of the photovoltaic panel array, determining whether the cleaning task is complete; if so, controlling the robot to stop traveling; if not, controlling the vehicle body to make a U-turn to the left or right and returning to the straight-travel control step.
  8. A data processing device, comprising:
    a memory for storing executable program code; and
    a processor connected to the memory, which runs a computer program corresponding to the executable program code by reading the executable program code, so as to execute the robot startup mode determination method according to any one of claims 1-7.
  9. A robot, comprising the data processing device according to claim 8.
  10. The robot according to claim 9, further comprising:
    a vehicle body able to travel on a photovoltaic panel array; and
    cameras arranged on the left and right sides of the vehicle body and used to collect real-time images of the photovoltaic panel array;
    wherein the data processing device is installed in the vehicle body and connected to the cameras.
  11. The robot according to claim 10, wherein
    the photovoltaic panel array is an array-like planar structure composed of two or more photovoltaic panels; and
    each edge of the photovoltaic panels is provided with a metal frame.
  12. The robot according to claim 10, further comprising:
    metal sensors arranged on the left and right sides of the bottom of the vehicle body, or fixed to the side walls of the vehicle body and extending to the bottom surface of the vehicle body;
    wherein, when the distance between a metal sensor and a metal frame is less than or equal to a preset threshold, the metal sensor generates and sends a signal to the data processing device.
PCT/CN2023/091029 2022-04-30 2023-04-27 Robot, startup mode determination method therefor, and data processing device WO2023208084A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210469186.X 2022-04-30
CN202210469186.XA CN114995385A (zh) 2022-04-30 2022-04-30 Robot, startup mode determination method therefor, and data processing device

Publications (1)

Publication Number Publication Date
WO2023208084A1 true WO2023208084A1 (zh) 2023-11-02

Family

ID=83025117

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/091029 WO2023208084A1 (zh) 2022-04-30 2023-04-27 一种机器人及其启动模式判定方法、数据处理设备

Country Status (2)

Country Link
CN (1) CN114995385A (zh)
WO (1) WO2023208084A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114995385A (zh) * 2022-04-30 2022-09-02 苏州瑞得恩光能科技有限公司 一种机器人及其启动模式判定方法、数据处理设备
CN116540707A (zh) * 2023-05-11 2023-08-04 凌度(广东)智能科技发展有限公司 路径控制方法、电子设备及光伏清洁机器人

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015060009A1 (ja) * 2013-10-24 2015-04-30 シンフォニアテクノロジー株式会社 Solar panel cleaning device
CN106182015A (zh) * 2016-09-21 2016-12-07 苏州瑞得恩自动化设备科技有限公司 Solar panel cleaning robot control system
CN111687860A (zh) * 2020-06-20 2020-09-22 深圳怪虫机器人有限公司 Method for a photovoltaic cleaning robot to autonomously select a cleaning operation path
CN112381852A (zh) * 2020-11-11 2021-02-19 苏州瑞得恩光能科技有限公司 Positioning method for a cleaning robot and storage medium
CN114995385A (zh) * 2022-04-30 2022-09-02 苏州瑞得恩光能科技有限公司 Robot, startup mode determination method therefor, and data processing device

Also Published As

Publication number Publication date
CN114995385A (zh) 2022-09-02

Similar Documents

Publication Publication Date Title
WO2023208084A1 (zh) Robot, startup mode determination method therefor, and data processing device
US20230309776A1 (en) Method for Controlling Cleaning Based on Dense Obstacles
JP2002325708A (ja) Robot cleaner, and system and control method thereof
US20060237037A1 (en) Robot cleaner driving method
JPH07120194B2 (ja) Travel control method and device for automated guided vehicle
KR20090104393A (ko) Control method of robot cleaner
US20210064055A1 (en) Robot cleaner and method for controlling the same
JP2003180586A (ja) Self-propelled vacuum cleaner
US20200033878A1 (en) Vacuum cleaner
WO2023207856A1 (zh) Robot, straight-travel control method therefor, and data processing device
CN111990930B (zh) Distance measurement method and apparatus, robot, and storage medium
EP3951541A1 (en) Method for automatically generating robot return to base code
JP2014021624A (ja) Autonomous traveling device and autonomous traveling system
JP2005211367A (ja) Autonomous traveling robot cleaner
JP2007117146A (ja) Self-propelled vacuum cleaner and program therefor
JP2018196622A (ja) Electronic device, method, and program
KR20090018336A (ko) Robot cleaner and control method thereof
CN108931980B (zh) Marking method and chip for a robot's built-in map, and indoor cleaning robot
CN102470268A (zh) Method for measuring physical quantities of an object using a single light source and a planar sensor unit, and virtual golf system using the same
JP2785305B2 (ja) Self-propelled vacuum cleaner
EP4332501A1 (en) Distance measurement method and apparatus, and robot and storage medium
CN112190186B (zh) Route planning method and system for a sweeping robot, and sweeping robot
JP7023719B2 (ja) Autonomous traveling body
JP2021027884A (ja) Autonomous traveling vacuum cleaner, control method therefor, and program
JPH01106113A (ja) Cleaning robot apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23795516

Country of ref document: EP

Kind code of ref document: A1