WO2019232804A1 - Software update method and system, mobile robot, and server

Info

Publication number
WO2019232804A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
obstacle
mobile robot
sample
software program
Prior art date
Application number
PCT/CN2018/090503
Other languages
English (en)
Chinese (zh)
Inventor
崔彧玮
李重兴
温任华
王子敬
Original Assignee
珊口(深圳)智能科技有限公司
珊口(上海)智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 珊口(深圳)智能科技有限公司, 珊口(上海)智能科技有限公司 filed Critical 珊口(深圳)智能科技有限公司
Priority to CN201880000819.4A priority Critical patent/CN108780319A/zh
Priority to PCT/CN2018/090503 priority patent/WO2019232804A1/fr
Publication of WO2019232804A1 publication Critical patent/WO2019232804A1/fr

Classifications

    • G05D1/02 Control of position or course in two dimensions specially adapted to land vehicles, including:
    • G05D1/0221 defining a desired trajectory involving a learning process
    • G05D1/0223 defining a desired trajectory involving speed control of the vehicle
    • G05D1/0236 using optical markers or beacons in combination with a laser
    • G05D1/024 using obstacle or wall sensors in combination with a laser
    • G05D1/0242 using non-visible light signals, e.g. IR or UV signals
    • G05D1/0253 using a video camera in combination with image processing means, extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0276 using signals provided by a source external to the vehicle
    • G05D1/028 using a RF signal
    • G06F8/65 Arrangements for software engineering; software deployment; updates
    • G06V20/10 Scenes; scene-specific elements; terrestrial scenes

Definitions

  • the present application relates to the field of intelligent robots, and in particular, to a software update method, system, mobile robot, and server.
  • Mobile robots are machines that perform tasks automatically. A mobile robot can accept instructions entered by a user, run automatically according to a pre-programmed routine, or act according to principles formulated with artificial intelligence technology. This type of mobile robot can be used indoors or outdoors, in industry or in the home, and can replace people for security inspections or floor cleaning, or be used for family accompaniment, office assistance, and so on.
  • VSLAM: Visual Simultaneous Localization and Mapping, i.e., real-time localization and map construction based on visual information.
  • a mobile robot can recognize an obstacle based on an image captured by a camera device provided thereon, and then plan a navigation route based on the position information of the identified obstacle to avoid the obstacle.
  • however, no recognition technology can be guaranteed to be completely correct. In complex real environments, the mobile robot may navigate incorrectly because of misrecognition, missed recognition, and the like.
  • the purpose of this application is to provide a software update method, system, mobile robot, and server, which are used to solve the prior-art problem that a mobile robot collides with obstacles during movement because the obstacles are not recognized.
  • a first aspect of the present application provides a software update method for a mobile robot, where the mobile robot is communicatively connected to a server and includes a camera device and a detection device, and where a software program to be updated is at least used to identify first obstacle information in an image captured by the camera device, for the mobile robot to plan a navigation route based on the identified first obstacle information. The software update method includes the following steps: during the movement of the mobile robot along the navigation route, acquiring second obstacle information detected by the detection device on the navigation route; generating sample information based on the second obstacle information and an image containing the second obstacle that generated the second obstacle information; sending the sample information to a server; and updating the software program when an update data packet fed back by the server is received.
  • the step of generating sample information based on the second obstacle information and an image containing the second obstacle that generated the second obstacle information includes: determining the relative spatial position between the corresponding second obstacle and the mobile robot based on the second obstacle information; obtaining an image containing the second obstacle based on the relative spatial position; and generating sample information based on the obtained image and the relative spatial position.
  • the step of determining the relative spatial position between the corresponding second obstacle and the mobile robot based on the second obstacle information includes: controlling the mobile robot, based on collision information in the second obstacle information, to move back a certain distance along the navigation route, and determining the relative spatial position between the second obstacle and the mobile robot by detecting the return movement route; or determining the relative spatial position between the corresponding second obstacle and the mobile robot based on the relative spatial position in the second obstacle information.
  • the step of obtaining an image including the second obstacle based on the relative spatial position includes: selecting, from buffered images, an image including the second obstacle captured by the camera device at the corresponding relative spatial position; or re-capturing an image including the second obstacle based on the relative spatial position.
  • the step of generating sample information based on the acquired image and the relative spatial position includes: generating sample input information based on the acquired image; and generating sample output information based on the acquired image and the relative spatial position; wherein the sample information includes the sample input information and the sample output information.
  • the step of generating sample input information based on the acquired image includes: directly using the acquired image as the sample input information; or pre-processing the acquired image and using the preprocessed image as the sample input information.
  • the step of generating sample output information based on the acquired image and the relative spatial position includes: mapping the second obstacle into the sample input information according to the relative spatial position to obtain the sample output information; or encapsulating the relative spatial position, the sample input information, and pre-stored physical reference information for mapping the second obstacle into the sample input information as the sample output information.
  • the software program includes a first software program for identifying a boundary line between an obstacle and the ground in the image, and/or a second software program for identifying an object in the image; wherein the first obstacle information includes the identified boundary line and/or the identified object.
  • the first software program and / or the second software program include a network structure and a connection manner of a neural network model.
  • the software program further includes at least one of: a third software program for calculating the relative spatial position, in physical space, between the boundary line and the mobile robot; a fourth software program for planning a navigation route based on the relative spatial position and a preset map; and a fifth software program for generating the sample information.
  • a second aspect of the present application also provides a software update method for a server, the server being communicatively connected to at least one mobile robot, the method including the following steps: training a preset software program according to sample information provided by at least one of the mobile robots, wherein the software program is at least used to identify first obstacle information in an image captured by a camera device of the mobile robot, for the mobile robot to plan a navigation route based on the identified first obstacle information; generating an update data package for updating the software program according to the training result; and sending the update data package to a mobile robot to update the software program built into the mobile robot.
  • the sample information includes sample input information and sample output information; wherein the sample input information includes an image selected by the mobile robot; and the sample output information includes sample output information generated from the selected image and the relative spatial position between the mobile robot and the second obstacle, or includes the relative spatial position, the sample input information, and pre-stored physical reference information for mapping the second obstacle into the sample input information.
  • the software update method further includes a step of obtaining sample output information based on the received relative spatial position, the sample input information, and the pre-stored physical reference information for mapping the second obstacle into the sample input information.
  • the step of training a preset software program according to sample information provided by at least one of the mobile robots includes: training a preset software program according to sample information provided by all mobile robots; or training the corresponding software program of each mobile robot according to the sample information provided by that mobile robot.
  • the software program includes a first software program for identifying a boundary line between an obstacle and the ground in the image, and/or a second software program for identifying an object in the image; wherein the first obstacle information includes the identified boundary line and/or the identified object.
  • the first software program and / or the second software program include a network structure and a connection manner of a neural network model.
  • the update data packet includes corresponding parameters of the neural network.
  • a third aspect of the present application also provides a software update system for a mobile robot, where the mobile robot is communicatively connected to a server and includes a camera device and a detection device, and where the software program to be updated is at least used for identifying first obstacle information in an image captured by the camera device, for the mobile robot to plan a navigation route based on the identified first obstacle information.
  • the software update system includes: an acquiring unit for acquiring, during the movement of the mobile robot along the navigation route, second obstacle information detected by the detection device on the navigation route; a sample generating unit for generating sample information based on the second obstacle information and an image containing the second obstacle; a first sending unit for sending the sample information to a server; and a first updating unit for updating the software program when an update data package fed back by the server is received via a first receiving unit.
  • the sample generating unit includes: a determining module, configured to determine the relative spatial position between the corresponding second obstacle and the mobile robot based on the second obstacle information; an acquisition module for acquiring an image including the second obstacle based on the relative spatial position; and a sample generating module for generating sample information based on the acquired image and the relative spatial position.
  • the determining module includes: a first determining module, configured to control the mobile robot, based on collision information in the second obstacle information, to move back a certain distance along the navigation route and to determine the relative spatial position between the second obstacle and the mobile robot by detecting the return movement route; and a second determining module for determining the relative spatial position between the corresponding second obstacle and the mobile robot based on the relative spatial position in the second obstacle information.
  • the acquisition module includes: a first acquisition module, configured to select, from buffered images, an image containing the second obstacle captured by the camera device at the corresponding relative spatial position; and a second acquisition module for re-capturing an image containing the second obstacle based on the relative spatial position.
  • the sample generating module includes: an input sample generating module configured to generate sample input information based on the obtained image; and an output sample generating module configured to generate sample output information based on the obtained image and the relative spatial position; wherein the sample information includes the sample input information and the sample output information.
  • the sample input module is configured to directly use the acquired image as the sample input information; or the sample input module is configured to pre-process the acquired image and use the preprocessed image as the sample input information.
  • the sample output module is configured to map the second obstacle to the sample input information according to the relative spatial position, and obtain sample output information; or
  • the sample output module is configured to encapsulate the relative spatial position, sample input information, and pre-stored physical reference information for mapping the second obstacle to the sample input information as sample output information.
  • the software program includes a first software program for identifying a boundary line between an obstacle and the ground in the image, and/or a second software program for identifying an object in the image; wherein the first obstacle information includes the identified boundary line and/or the identified object.
  • the first software program and / or the second software program include a network structure and a connection manner of a neural network model.
  • the software program further includes at least one of: a third software program for calculating the relative spatial position, in physical space, between the boundary line and the mobile robot; a fourth software program for planning a navigation route based on the relative spatial position and a preset map; and a fifth software program for generating the sample information.
  • the fourth aspect of the present application also provides a software update system for a server, the server being communicatively connected to at least one mobile robot, the system including: a second receiving unit for receiving sample information from at least one mobile robot; a training unit configured to train a preset software program based on the sample information provided by at least one mobile robot, wherein the software program is at least used to identify first obstacle information in images captured by the camera device of the mobile robot, for the mobile robot to plan a navigation route based on the identified first obstacle information; an update generating unit for generating an update data package for updating the software program according to the training result; and a second sending unit configured to send the update data packet to a mobile robot to update the software program built into the mobile robot.
  • the sample information includes sample input information and sample output information; wherein the sample input information includes an image selected by the mobile robot; and the sample output information includes sample output information generated from the selected image and the relative spatial position between the mobile robot and the second obstacle, or includes the relative spatial position, the sample input information, and pre-stored physical reference information for mapping the second obstacle into the sample input information.
  • the software update system further includes an output sample generating unit, configured to generate sample output information based on the received relative spatial position, the sample input information, and the pre-stored physical reference information for mapping the second obstacle into the sample input information.
  • the training unit is configured to train a preset software program according to sample information provided by all mobile robots; or the training unit is configured to train the software program of each mobile robot according to the sample information provided by that mobile robot.
  • the software program includes a first software program for identifying a boundary line between an obstacle and the ground in the image, and/or a second software program for identifying an object in the image; wherein the first obstacle information includes the identified boundary line and/or the identified object.
  • the first software program and / or the second software program include a network structure and a connection mode of a neural network model.
  • the update data packet includes corresponding parameters of the neural network.
  • a fifth aspect of the present application also provides a software update system for updating a software program in a mobile robot, wherein the mobile robot includes a camera device and a detection device, and the software program to be updated is at least used for identifying first obstacle information in an image captured by the camera device, for the mobile robot to plan a navigation route based on the identified first obstacle information.
  • the software update system includes: a client system located on the mobile robot side, the client system executing any one of the above software update methods for a mobile robot; and a server system communicatively connected with at least one client system, the server system executing any one of the above software update methods for a server.
  • a sixth aspect of the present application also provides a mobile robot, including: a camera device for capturing images during the movement of the mobile robot; a mobile device for controlling the mobile robot to move in a controlled manner; a storage device for storing captured images, preset physical reference information, pre-labeled object type tags, and at least one program; and a processing device for invoking the at least one program and executing the software update method according to any one of claims 1-10.
  • the mobile robot is a mobile robot having a monocular camera device.
  • the mobile robot is a cleaning robot.
  • a seventh aspect of the present application further provides a server, including: a storage unit for storing at least one program; and a processing unit for invoking the at least one program and executing the software update method according to any one of claims 11-16, so as to generate an update data packet for updating a software program in a mobile robot, the software program being at least used for identifying first obstacle information in an image captured by a camera device, for the mobile robot to plan a navigation route based on the identified first obstacle information.
  • An eighth aspect of the present application further provides a computer storage medium that stores at least one program that, when called, executes any one of the software updating methods for a mobile robot described above.
  • a ninth aspect of the present application further provides a computer storage medium that stores at least one program that, when called, executes any one of the software update methods for a server described above.
  • the software update method, system, mobile robot, and server of the present application have the following beneficial effects: sample information is generated by acquiring the second obstacle information, and the software program is updated whenever an update data packet generated by the server based on the sample information is received, which improves the accuracy with which the mobile robot identifies obstacles and thereby reduces the collision rate when moving along a navigation route planned based on the updated obstacle information.
  • FIG. 1 is a flowchart of an embodiment of a software updating method for a mobile robot according to the present application.
  • FIG. 2 shows a flowchart of a software updating method for a mobile robot according to another embodiment of the present application.
  • FIG. 3 is a flowchart of an embodiment of a software update method for a server in this application.
  • FIG. 4 shows a flowchart of a software update method based on data communication between a mobile robot and a server.
  • FIG. 5 is a schematic structural diagram of a software update system for a mobile robot according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an embodiment of a software update system for a server in this application.
  • FIG. 7 is a schematic structural diagram of a software update system according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a mobile robot according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an embodiment of a server of the present application.
  • "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C".
  • An exception to this definition occurs only when a combination of elements, functions, steps, or actions is inherently mutually exclusive in some way.
  • Mobile robots perform mobile operations based on navigation control technology.
  • VSLAM Visual Simultaneous Localization and Mapping
  • a mobile robot recognizes obstacles by using the visual information provided by a visual sensor, and then plans a navigation route based on the identified obstacle information, so that the mobile robot can avoid the obstacles to move autonomously while moving along the navigation route.
  • the visual sensor includes an imaging device, and the corresponding visual information is image data (hereinafter referred to as an image).
  • mobile robots have built-in software programs that are executed to identify obstacle information using image recognition technology, and then plan navigation routes based on the obstacle information for patrolling, cleaning, and so on.
  • when the mobile robot moves along a navigation route based on such a software program, it may collide with unrecognized obstacles or fail to find the object to be controlled.
  • the present application provides a software update method for a mobile robot, which is performed by software and hardware installed in the mobile robot.
  • the mobile robot collects sample information generated based on a real application environment through the software update method, and obtains an update data package for software update by means of data communication between the mobile robot and a server.
  • the mobile robot includes, but is not limited to, robots such as home-accompanied mobile robots, cleaning robots, and patrol mobile robots that can plan a navigation route based on images captured by a camera device to perform autonomous movement.
  • the mobile robot can wirelessly communicate with the server through a network interface.
  • a mobile robot is operatively coupled to a network interface to communicatively couple the robot to a network.
  • the network interface may connect the robot to a personal area network (PAN) (such as a Bluetooth network), a local area network (LAN) (such as a Wi-Fi network), and/or a wide area network (WAN) (such as a 4G, 5G, or LTE cellular network).
  • the mobile robot includes a camera device and a detection device.
  • the mobile robot includes a camera device, and the mobile robot performs related operations according to images captured by the camera device.
  • the mobile robot may also be configured with multiple camera devices. In this application, if the mobile robot performs related operations based only on images captured by one of the multiple camera devices, it is also considered a mobile robot with a monocular camera device.
  • the camera device may capture still images at preset time intervals.
  • the camera device can also shoot video. Since a video is composed of image frames, the mobile robot can continuously or discontinuously sample the image frames in the acquired video and select one frame as one image, as sketched below.
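  • A minimal sketch of this frame sampling, assuming OpenCV (cv2); the sampling interval and helper name are illustrative assumptions, not specified by the application.

```python
import cv2

def sample_frames(video_path: str, interval: int = 30):
    """Yield one frame out of every `interval` frames of a video.

    Hypothetical helper illustrating the frame-sampling idea above;
    the application does not prescribe a specific library or interval.
    """
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % interval == 0:
            yield frame  # treat this frame as one captured "image"
        index += 1
    cap.release()
```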
  • the detection device of the mobile robot is a device for sensing the relationship between the mobile robot and an object in its application scene.
  • the detection device includes, but is not limited to, a laser ranging sensor for detecting the distance between the mobile robot and an object, a collision sensor for sensing a collision between the mobile robot and an obstacle, and the like.
  • the server includes, but is not limited to, a single server, a server cluster, a distributed server, a server based on a cloud architecture, and the like.
  • the software program update training is implemented on the server side, and the update data package generated on the server side is sent to the mobile robot side to update the original software program, thereby reducing the amount of data calculation on the mobile robot side. It is convenient to use the data uploaded by multiple mobile robots to perform unified training on the server, saving software update costs.
  • the software program stored in and executed by the mobile robot is at least used to identify the first obstacle information in the image captured by the camera device, so that the mobile robot plans a navigation route based on the identified first obstacle information. The first obstacle includes objects placed on the ground, such as tables, chairs, tiled objects, water bottles, flower pots, and the like.
  • the first obstacle information includes, but is not limited to, position information of the first obstacle, a boundary line between the first obstacle and the ground, an object type label corresponding to the first obstacle, and the like.
  • the position information of the first obstacle may be characterized by the coordinate position of the first obstacle in the map, or by the coordinate position in the map of the boundary line between the first obstacle and the ground.
  • the boundary line between the first obstacle and the ground includes, but is not limited to, the intersection line formed between the support portion of the object and the ground, the intersection line formed by the object close to the ground, and the like.
  • Another example is the shadow line formed by the bottom edge of the low sofa and the ground.
  • the boundary line is photographed and mapped in the image.
  • the object type tag corresponding to the first obstacle is pre-screened and stored in the storage device of the mobile robot based on the environment in which the mobile robot moves.
  • the object type tag is used to describe an object classification, or an image feature of an object that is placed in the environment and may be captured in an image.
  • the software program includes a first software program for identifying an obstacle in the image and a ground boundary line, or a second software program for identifying an object in the image, or a combination thereof.
  • the first software program includes a program that performs the following steps: identifying the image area of an object from an image captured by the camera device, determining the ground image area from the image, and taking the intersection of the ground image area and the object image area as the boundary line between the obstacle and the ground.
  • the first software program includes a network structure and a connection manner of a neural network model.
  • the neural network model may be a convolutional neural network, and the structure of the neural network model includes an input layer, at least one hidden layer, and at least one output layer.
  • the input layer is used to receive a captured image or a preprocessed image;
  • the hidden layer includes a convolution layer and an activation function layer, and may further include at least one of a normalization layer, a pooling layer, and a fusion layer; the output layer is used to output an image labeled with an object type label.
  • the connection mode is determined according to the connection relationship of the layers in the neural network model.
  • for example, the connection relationship between adjacent layers is set based on data transmission, the connection to the data of the previous layer is set based on the size of the convolution kernel in each hidden layer, and full connections are set.
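  • The following is a minimal sketch of such a network structure, assuming PyTorch; the layer sizes, the choice of batch normalization, and the per-pixel sigmoid output are illustrative assumptions, not values prescribed by the present application.

```python
import torch
import torch.nn as nn

class BoundaryNet(nn.Module):
    """Illustrative CNN: input is a camera image, output is a per-pixel
    obstacle/ground probability map (cf. the output layer described above)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolution layer
            nn.BatchNorm2d(16),                           # normalization layer
            nn.ReLU(inplace=True),                        # activation function layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(32, 1, kernel_size=1)       # output layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.features(x)))  # per-pixel probability
```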
  • the software program uses a neural network model to identify the boundary between the first obstacle and the ground in the image.
  • the neural network includes a trained convolutional neural network (CNN), and the CNN is used to identify the boundary between the first obstacle and the ground in the input image.
  • the input layer of this type of neural network is a picture obtained from the perspective of the robot.
  • the output layer corresponds to the area of the preset image that may be the ground; each pixel of the output layer gives the probability of being an obstacle; between the input layer and the output layer are several convolutional layers.
  • during training, pre-labeled pictures can be used, and the back-propagation algorithm is used to adjust the weights of the neural network so that the positions of the obstacles output by the network approach the positions of the manually labeled obstacles.
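  • As a hedged illustration of that training procedure, the sketch below assumes PyTorch and the hypothetical BoundaryNet defined earlier; the binary cross-entropy loss and Adam optimizer are example choices, since the application only states that back-propagation adjusts the weights.

```python
import torch

def train_step(model, optimizer, image, label_mask):
    """One back-propagation step: push the predicted obstacle map toward
    the manually labeled mask (prediction and mask have matching shapes,
    values in [0, 1])."""
    optimizer.zero_grad()
    pred = model(image)                                  # forward pass
    loss = torch.nn.functional.binary_cross_entropy(pred, label_mask)
    loss.backward()                                      # back-propagation
    optimizer.step()                                     # adjust the weights
    return loss.item()

# usage sketch:
# model = BoundaryNet()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = train_step(model, optimizer, image_batch, mask_batch)
```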
  • the processing device recognizes the boundary between the first obstacle and the ground based on the neural network.
  • the second software program includes a program that performs the following steps: identifying, from the image captured by the camera device, an object that matches a pre-labeled object type tag, and then using the corresponding object type tag to characterize the object, so that the mobile robot can perform route planning and other operations.
  • the object type tag may be characterized by an image feature of the object, and the image feature can identify a target object in the image.
  • the object type tags include, but are not limited to, image features such as tables, chairs, sofas, flower pots, shoes, socks, tiled objects, and cups.
  • the tiled objects include, but are not limited to, floor mats, floor tile maps, and tapestries, wall paintings, etc. hanging on the walls.
  • the second software program includes a software program of an image recognition method such as an image recognition algorithm based on a neural network, an image recognition algorithm based on a wavelet moment, etc., to process, analyze, and identify the captured image and Obtain the object area corresponding to the object type label in the image.
  • the object region may be characterized by features such as the gray level of the object and the contour of the object.
  • the manner in which the object region is represented by the contour of the object includes obtaining the identified object region by a contour line extraction method.
  • the contour line extraction method includes, but is not limited to, methods such as binarization, grayscale conversion, and the Canny operator.
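  • A minimal sketch of such contour-line extraction, assuming OpenCV (cv2); the threshold values are arbitrary examples, not taken from the application.

```python
import cv2

def extract_object_contours(image_bgr):
    """Illustrative contour-line extraction via the Canny operator;
    thresholds are arbitrary examples, not values from the application."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)    # grayscale step
    edges = cv2.Canny(gray, 50, 150)                      # Canny operator
    contours, _ = cv2.findContours(
        edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours  # candidate object-region outlines
```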
  • the similarity and consistency of content, features, structure, relationships, texture, and gray level between the object and the image are analyzed to find similar image targets, so that the object area in the image corresponds to a pre-labeled object type tag.
  • the object type tag may be characterized by an object classification.
  • the second software program includes a neural network model (such as CNN) obtained through pre-training, and recognizes an object region corresponding to each object type label from an image by executing the neural network model.
  • the object type labels include, but are not limited to, tables, chairs, sofas, flower pots, shoes, socks, tiled objects, cups, and unknown objects.
  • the tiled objects include, but are not limited to, floor mats, floor tile maps, and tapestries, wall paintings, etc. hanging on the walls.
  • the unknown object refers to a classification that cannot be recognized by the trained neural network model and is recognized as an object, and generally includes objects randomly appearing indoors, such as debris, toys, and the like.
  • first software program and the second software program may be designed with a high degree of coupling, and a more complicated neural network model is used to simultaneously identify the boundary line and the object region corresponding to the object type label.
  • the identified boundary lines and objects (that is, object areas) in the image are provided, through a program interface, to other software programs in the mobile robot; these other software programs read the first obstacle information and use it for map drawing, navigation route planning, control object identification, control decisions, software updates, and the like.
  • the other software programs include, but are not limited to, at least one of: a third software program for calculating the relative spatial position, in physical space, between the boundary line and the mobile robot; a fourth software program for planning a navigation route based on the relative spatial position and a preset map; and a fifth software program for generating the sample information.
  • the first software program, the second software program, and other software programs are not necessarily stored independently, and they may also be packaged in an APP.
  • based on preset physical reference information, the relative spatial position between the physical position corresponding to the boundary line and the mobile robot is calculated.
  • the preset physical reference information includes, but is not limited to, at least two of the following: a physical height of the camera device from the ground, a physical parameter of the camera device, and an included angle of a main optical axis of the camera device with respect to a horizontal or vertical plane.
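  • As a hedged example of how such physical reference information can be used, the sketch below computes the ground distance to an imaged point under a standard pinhole, flat-ground assumption; the function name and variables are illustrative, not taken from the application.

```python
import math

def ground_distance(pixel_row, principal_row, focal_px,
                    cam_height_m, tilt_rad):
    """Distance along the ground to the point imaged at `pixel_row`,
    for a camera at height `cam_height_m` whose main optical axis is
    tilted down by `tilt_rad` from the horizontal (pinhole model)."""
    # angle of the ray below the optical axis, from the pixel offset
    ray = math.atan2(pixel_row - principal_row, focal_px)
    depression = tilt_rad + ray          # total angle below horizontal
    if depression <= 0:
        return float("inf")              # ray does not hit the ground
    return cam_height_m / math.tan(depression)
```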
  • FIG. 1 shows a flowchart of a software updating method for a mobile robot in an embodiment of the present application.
  • the software updating method for a mobile robot includes steps S110, S120, S130, and S140.
  • step S110 while the mobile robot is moving in accordance with the navigation route, the second obstacle information detected by the detection device on the navigation route is obtained.
  • the second obstacle refers to an obstacle that is not recognized when the software program is executed.
  • the software program performs the following steps: planning a navigation route based on the identified first obstacle information and controlling the mobile robot to move along the navigation route; during the movement, the detection device of the mobile robot continues detecting, and if obstacle information is detected on the current route, the corresponding obstacle is determined to be a second obstacle.
  • the obstacle information provided by the detection device is hereinafter referred to as the second obstacle information.
  • the second obstacle information includes, but is not limited to, position information of the second obstacle, relative spatial position between the second obstacle and the mobile robot, and collision information of the second obstacle.
  • the detection device may include a laser ranging sensor, an infrared sensor, a collision sensor, and the like.
  • the second obstacle information includes a distance between the mobile robot and the obstacle measured by a laser ranging sensor located in the mobile robot.
  • the second obstacle information includes collision information between the mobile robot and the obstacle, which is sensed by a collision sensor located in the mobile robot.
  • step S120 sample information is generated based on the second obstacle information and an image containing the second obstacle that generated the second obstacle information.
  • since the software program to be updated includes image recognition, in order to improve the accuracy of image recognition (i.e., increase the recognition rate), it is necessary to collect real-environment images that contain at least the incorrectly recognized object.
  • the incorrectly identified objects include, but are not limited to, unrecognized boundary lines, unrecognized objects, and the like.
  • the generated sample information includes the image containing the incorrectly recognized object as the sample input information.
  • the sample information further includes sample output information.
  • the mobile robot marks the corresponding image area in the sample input information to indicate the position of the unrecognized second obstacle in the image, and uses the information describing the position of the unrecognized second obstacle in the image as the sample output information.
  • the sample output information includes, but is not limited to: an image marked with the unrecognized second obstacle, or parameter information, including the second obstacle information, for mapping the second obstacle into the sample input information.
  • step S130 the sample information is sent to a server.
  • After obtaining the sample input information and the matching sample output information, the mobile robot sends the sample information to the server through the network interface. For example, the mobile robot sends the obtained sample information to the server in real time. As another example, the mobile robot temporarily stores the obtained sample information and uploads all of the temporarily stored sample information to the server when the mobile robot is in an idle state (such as being fully charged and not performing a movement operation).
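  • A minimal sketch of this buffer-and-upload behavior, assuming an HTTP transport via the Python requests library; the endpoint URL and payload format are hypothetical, since the application does not fix a protocol.

```python
import requests  # assumed HTTP transport

class SampleUploader:
    """Illustrative client-side buffering: upload immediately or hold
    samples until the robot reports an idle state (e.g. fully charged
    and not moving). Endpoint URL is hypothetical."""
    def __init__(self, server_url="https://example.com/samples"):
        self.server_url = server_url
        self.buffer = []

    def add_sample(self, sample: dict, robot_idle: bool):
        self.buffer.append(sample)
        if robot_idle:
            self.flush()

    def flush(self):
        for sample in self.buffer:
            requests.post(self.server_url, json=sample, timeout=10)
        self.buffer.clear()
```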
  • the server collects sample information from at least one mobile robot to train the backed-up software program, so as to obtain a software program version that recognizes more accurately, together with a corresponding update data package.
  • the server sends the update data packet to the mobile robot according to a preset update condition, so that the mobile robot executes step S140.
  • step S140 the software program is updated when the update data package fed back by the server is received.
  • the mobile robot may receive the update data package and update the software program periodically or based on a server notification. For example, a mobile robot can automatically check at startup whether an update data package needs to be downloaded for updating. As another example, after receiving the notification information sent by the server, the mobile robot can ask the user to confirm the update and then update immediately after confirmation, or be set to update within a specified time period. After the mobile robot updates the software program, it identifies obstacle information in images captured by the camera device based on the updated software and performs subsequent operations.
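  • The sketch below illustrates one possible client-side check-and-apply flow, assuming the update data packet carries neural-network parameters loadable with PyTorch; the URLs, payload fields, and file names are hypothetical.

```python
import requests
import torch

def check_and_apply_update(model, version_url, package_url, current_version):
    """Illustrative startup update check (URLs and payload fields are
    hypothetical): download the parameter package fed back by the server
    and load it into the local recognition model."""
    latest = requests.get(version_url, timeout=10).json()["version"]
    if latest == current_version:
        return current_version            # nothing to update
    blob = requests.get(package_url, timeout=60)
    with open("update.pt", "wb") as f:
        f.write(blob.content)
    model.load_state_dict(torch.load("update.pt"))  # apply new parameters
    return latest
```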
  • the software updating method for a mobile robot provided in this application generates sample information by acquiring second obstacle information and updates the software program when an update data packet generated by the server based on the sample information is received, thereby improving the accuracy with which the mobile robot identifies obstacles and reducing the collision rate when moving along navigation routes planned based on the updated obstacle information.
  • FIG. 2 shows a flowchart of a software updating method for a mobile robot in another embodiment of the present application. As shown in the figure, the software updating method includes steps S210 to S260.
  • step S210 the second obstacle information detected by the detection device on the navigation route is acquired while the mobile robot moves along the navigation route.
  • Step S210 is the same as or similar to step S110 described above, and details are not described herein again.
  • step S220 a relative spatial position between the corresponding second obstacle and the mobile robot is determined based on the second obstacle information.
  • a manner of acquiring a relative spatial position between the second obstacle and the mobile robot is determined according to a type of a sensor that provides corresponding second obstacle information.
  • for example, when the collision information is detected by the collision sensor, the mobile robot receives the collision information from the data interface of the collision sensor, controls itself, based on the collision information, to move back a certain distance along the navigation route, and determines the relative spatial position between the second obstacle and the mobile robot by detecting the return movement route.
  • the navigation route may be a navigation route planned by the mobile robot based on the first obstacle information, or a navigation route planned by the mobile robot after detecting collision information.
  • the relative spatial position between the second obstacle and the mobile robot is determined by using a ranging sensor or by detecting the movement of the mobile robot's mobile device.
  • for example, the collision sensor of the mobile robot obtains the position information of the second obstacle through the collision; the processing device of the mobile robot then controls the mobile robot to return to the position where the camera device captured the last image, and obtains the relative spatial position between the mobile robot and the second obstacle it collided with based on the moving distance provided by the processing device, or based on the distance acquired by the laser ranging sensor of the mobile robot.
  • alternatively, the mobile robot can return an arbitrary distance along the navigation route and obtain the relative spatial position between the mobile robot and the collided second obstacle based on the moving distance provided by the processing device, or based on the distance acquired by the laser ranging sensor of the mobile robot.
  • the relative spatial position between the corresponding second obstacle and the mobile robot is determined based on the relative spatial position in the second obstacle information.
  • the relative spatial position between the second obstacle and the mobile robot is acquired by a laser ranging sensor.
  • for example, a laser ranging sensor installed along the movement direction of the mobile robot measures the distance between an obstacle on the navigation route and the mobile robot. From the navigation route planned according to the first obstacle information, the distance D1 between the mobile robot and the first obstacle is known. If the laser ranging sensor measures a distance D2 between the mobile robot and an obstacle, and D2 is less than D1, this indicates that a second obstacle is present. The distance between the second obstacle and the mobile robot is thereby obtained, the deflection angle between the second obstacle and the mobile robot is determined from the direction control of the mobile robot, and the obtained distance and deflection angle are taken as the relative spatial position between the second obstacle and the mobile robot.
  • the relative spatial position may also be coordinate transformed, so as to map the relative spatial position after the coordinate transformation to the sample input information.
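  • A hedged sketch of the D2-versus-D1 comparison and the coordinate transformation mentioned above; the tolerance value, variable names, and robot-frame axes are illustrative assumptions.

```python
import math

def second_obstacle_position(d_measured, d_expected, heading_rad,
                             tolerance=0.05):
    """If the measured range D2 falls short of the expected range D1 to
    the known first obstacle, report a second obstacle as (distance,
    deflection angle) and also as robot-frame x/y coordinates."""
    if d_measured >= d_expected - tolerance:
        return None                              # nothing unexpected ahead
    # polar relative spatial position -> Cartesian (coordinate transform)
    x = d_measured * math.cos(heading_rad)
    y = d_measured * math.sin(heading_rad)
    return {"distance": d_measured, "angle": heading_rad, "xy": (x, y)}
```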
  • step S230 an image including a second obstacle is acquired based on the relative spatial position.
  • the mobile robot acquires an image that does contain a second obstacle according to the detected relative spatial position, so as to collect sampling information in a real environment.
  • in one implementation, an image including the second obstacle, captured by the camera device at the corresponding relative spatial position, is selected from the buffered images.
  • the mobile robot may cache at least one image while moving along the navigation route constructed based on the first obstacle information, and extract from the cache the image that matches the determined relative spatial position of the second obstacle. For example, when the mobile robot obtains the relative spatial position of the second obstacle from the laser ranging sensor and the direction sensor, the current image captured by the camera device is extracted from the cache to ensure that the captured image includes the second obstacle.
  • in another implementation, the image containing the second obstacle is re-captured based on the relative spatial position. For example, when the mobile robot moves back a preset distance along the navigation route and obtains the relative spatial position between the collided second obstacle and the mobile robot, it re-captures an image of the second obstacle at that position.
  • step S240 sample information is generated based on the acquired image and the relative spatial position.
  • the sample input information in the sample information may be generated based on the obtained image including the second obstacle.
  • the image includes a first obstacle and a second obstacle.
  • the obtained image is directly used as the sample input information.
  • the obtained image is pre-processed, and the pre-processed image is used as sample input information.
  • mosaic processing can be performed on the original image.
  • the contour of the original image can be extracted, as long as the image used for training can contain the features of the boundary between the second obstacle and the ground.
  • the hidden layer of the neural network model included in the execution software can be used to process the original image.
  • the captured image is processed in at least two ways before being sent to the server.
  • the sample information may include only the sample input information, and a human-computer interaction interface is provided by the server to mark the sample output information in the sample input information.
  • the sample output information in the sample information is generated based on the acquired image and the relative spatial position.
  • the second obstacle is mapped onto the sample input information according to the relative spatial position, and the sample output information is obtained.
  • the boundary between the first obstacle and the ground in the image is identified based on the software program to be updated; then, using the correspondence between the pixel coordinate space of the image and the actual physical coordinate space together with the relative spatial position between the second obstacle and the mobile robot, the boundary between the second obstacle and the ground is mapped into the image. The sample output information is then an image that contains the first obstacle and the second obstacle and can display both the boundary line between the first obstacle and the ground and the boundary line between the second obstacle and the ground.
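  • The sketch below illustrates that pixel/physical correspondence in the inverse direction of the earlier ground-distance example, under the same illustrative pinhole, flat-ground assumptions; it marks where a ground point at a given relative spatial position lands in the image.

```python
import math

def ground_point_to_pixel(distance_m, azimuth_rad, principal_row,
                          principal_col, focal_px, cam_height_m, tilt_rad):
    """Map a ground point at (distance, azimuth) relative to the robot
    into image pixel coordinates, so an unrecognized second obstacle can
    be marked on the sample input image (illustrative pinhole model)."""
    depression = math.atan2(cam_height_m, distance_m)   # ray below horizontal
    row = principal_row + focal_px * math.tan(depression - tilt_rad)
    col = principal_col + focal_px * math.tan(azimuth_rad)
    return int(round(row)), int(round(col))
```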
  • the relative spatial position, the sample input information, and the pre-stored physical reference information for mapping the second obstacle to the sample input information are packaged as the sample output information.
  • the server obtains the final sample output information through calculation.
  • the physical reference information includes, but is not limited to, a physical height of the camera device from the ground, a physical parameter of the camera device, and an included angle of a main optical axis of the camera device with respect to a horizontal or vertical plane.
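  • As a hedged illustration, the sketch below encapsulates the three items named above into one serializable package; the field names and JSON encoding are hypothetical choices, since the application does not fix a format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SamplePackage:
    """Illustrative encapsulation of the relative spatial position, the
    sample input information, and the physical reference information;
    field names are hypothetical."""
    relative_position: tuple       # (distance_m, deflection_angle_rad)
    sample_input_png: str          # path or reference to the input image
    camera_height_m: float         # physical reference information
    focal_px: float
    optical_axis_tilt_rad: float

def serialize(pkg: SamplePackage) -> str:
    return json.dumps(asdict(pkg))  # payload sent to the server
```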
  • step S250 the sample information is sent to a server.
  • Step S250 is the same as or similar to step S130, and details are not described herein again.
  • step S260 the software program is updated when the update data package fed back by the server is received.
  • the step S260 is the same as or similar to the above step S140, and details are not described herein again.
  • the application also provides a software update method for a server, the software update method is executed on the server.
  • the software updating method trains a software program backed up on a server by acquiring sample information collected by a mobile robot to obtain an update data package.
  • the sample information may be obtained according to the software update method provided in the foregoing FIG. 1 and FIG. 2, or may be obtained by other methods.
  • FIG. 3 is a flowchart of an embodiment of a software update method for a server in this application.
  • the software update method for a server includes steps S310, S320, and S330.
  • step S310 a preset software program is trained according to the sample information provided by the at least one mobile robot.
  • the software program is at least used to identify first obstacle information in an image captured by a camera device of the mobile robot, so that the mobile robot can plan a navigation route based on the identified first obstacle information.
  • the software program is a backup program corresponding to the mobile robot and stored on the server.
  • the software program includes updatable data such as internal parameters of the program and program configuration information that can be adjusted by training.
  • the software program includes the network structure and connection mode of a neural network model; after training, the parameters in the neural network model are adjusted, and the server encapsulates the trained parameters of the neural network model into the update data package.
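  • A minimal sketch of how the server side might encapsulate only the trained parameters into an update data package, assuming the backed-up program is a PyTorch module; the package fields and checksum are illustrative choices, not details specified by this application.

```python
import hashlib
import io

import torch  # assumes the backed-up software program is a PyTorch module

def build_update_package(model, version):
    """Serialize only the trained parameters (the robot already holds the
    network structure) and attach a checksum so the robot can verify the
    package before applying it."""
    buffer = io.BytesIO()
    torch.save(model.state_dict(), buffer)  # weights and biases only
    blob = buffer.getvalue()
    return {
        "version": version,
        "parameters": blob,
        "sha256": hashlib.sha256(blob).hexdigest(),
    }
```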
  • the sample information includes at least sample input information of the second obstacle that cannot be identified by the backed up software program.
  • the sample input information includes at least the sample input information collected by the mobile robot according to the manner shown in FIG. 1 and FIG. 2, which is not described in detail here.
  • the sample information further includes sample output information, which provides a correct, identifiable training result for use in training.
  • the sample output information includes at least the sample output information collected by the mobile robot in the manner shown in FIG. 1 and FIG. 2, which is not described in detail here.
  • the sample output information may be generated based on the selected image and the relative spatial position between the mobile robot and the second obstacle. For example, the boundary between the first obstacle and the ground in the image is identified based on the software program to be updated, and the second obstacle is mapped into the image using the correspondence between the pixel coordinate space of the image and the actual physical coordinate space together with the relative spatial position between the second obstacle and the mobile robot.
  • the sample output information then covers both the first obstacle and the second obstacle, and can display an image showing the boundary line between the first obstacle and the ground and the boundary line between the second obstacle and the ground.
  • the sample output information includes relative spatial position, sample input information, and pre-stored physical reference information for mapping the second obstacle to the sample input information.
  • the physical reference information includes, but is not limited to, a physical height of the camera device from the bottom surface of the mobile robot, physical parameters of the camera device, and an included angle of a main optical axis of the camera device with respect to a horizontal or vertical plane.
  • the server maps the second obstacle into the image corresponding to the sample input information based on the obtained sample output information, and uses the mapped image as the correct, identifiable training result, i.e. the final sample output information used for training.
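  • The mapping step can be pictured with a simplified pinhole ground-plane model. The sketch below projects a ground point, given by its forward and lateral distance from the camera, into pixel coordinates using the camera height, the pitch of the main optical axis, and pixel-unit intrinsics; the function and all parameter names are illustrative assumptions, not the application's method.

```python
import math

def ground_point_to_pixel(forward_m, lateral_m, cam_height_m, pitch_deg,
                          fx, fy, cx, cy):
    """Project a ground point (forward/lateral distance from the camera,
    in metres) into pixel coordinates. pitch_deg is the downward tilt of
    the main optical axis from the horizontal plane; fx, fy, cx, cy are
    pinhole intrinsics in pixels."""
    pitch = math.radians(pitch_deg)
    # Depression of the ray toward the ground point, below the horizontal.
    depression = math.atan2(cam_height_m, forward_m)
    # Vertical pixel offset is measured from the principal point; image
    # rows grow downward, so points below the axis get larger v.
    v = cy + fy * math.tan(depression - pitch)
    # Small-angle lateral displacement gives the horizontal offset.
    u = cx + fx * (lateral_m / forward_m)
    return u, v
```

  • Marking the pixels returned by such a projection around the second obstacle onto the sample input image would yield a labelled image of the kind the training step expects.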
  • the sample information may include only the sample input information, and the server provides another human-computer interaction interface to mark the sample output information in the sample input information.
  • the sample input information may be the image described above, or an image on which the first obstacle information has already been marked; the server provides a technician with a human-computer interaction interface for marking the second obstacle, and uses the image marked with the second obstacle as the sample output information.
  • the server can update each mobile robot separately based on the sample information sent by that mobile robot.
  • the server can perform a unified upgrade and update on all mobile robots in the current version after comparing and filtering the data.
  • the software program of each corresponding mobile robot is trained according to the sample information provided by each mobile robot.
  • a preset software program is trained according to sample information provided by all mobile robots.
  • the training method includes, but is not limited to, adjusting internal parameters of the software program, configuration information of the software program, and the like.
  • the software program includes the network structure and connection mode of the neural network model; the parameters in the neural network model are trained by a back propagation algorithm to improve the accuracy of the neural network model, and the server encapsulates the trained parameters of the neural network model into update data packages.
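  • A back-propagation pass of this kind might look like the following PyTorch sketch, where sample input images are paired with labelled boundary masks; the loss function, optimizer, and hyper-parameters are assumptions rather than values from this application.

```python
import torch
import torch.nn as nn

def fine_tune(model, loader, epochs=3, lr=1e-4):
    """One illustrative fine-tuning pass: sample input images paired with
    labelled boundary masks, trained by back propagation."""
    criterion = nn.CrossEntropyLoss()           # assumed per-pixel loss
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, masks in loader:            # masks: per-pixel class labels
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()                     # back propagation
            optimizer.step()                    # adjust internal parameters
    return model
```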
  • step S320 an update data package for updating the software program is generated according to the training result.
  • the update data package may include a patch package applied to the software, a data package required for updating the software, and the like.
  • the first software program or the second software program includes a network structure and a connection manner of a neural network model.
  • the update data packet includes parameters in a corresponding neural network.
  • an update data package obtained after training the first software program or the second software program includes related parameters in the corresponding CNN, such as weight parameters and bias parameters.
  • step S330 the update data packet is sent to the mobile robot to update the software program built in the mobile robot.
  • the server may feedback the update data packet based on the update request of the mobile robot, or the server may actively push the update data packet to each mobile robot.
  • This application thus provides a software update method for a server.
  • the server trains a software program based on sample information provided by the mobile robot side and generates an update data package to send to the mobile robot for updating the software, so that the software update training of the mobile robot is performed on the server side. This reduces the amount of data calculation on the mobile robot side and, at the same time, improves update efficiency by uniformly training on the data provided by all mobile robot sides.
  • FIG. 4 shows a flowchart of a software update method based on data communication between a mobile robot and a server. As shown in the figure, the method includes steps S410 to S490.
  • step S410 during the movement of the mobile robot according to the navigation route, the mobile robot obtains the second obstacle information detected by the detection device on the navigation route.
  • Step S410 is the same as or similar to step S210, and details are not described herein again.
  • step S420 the mobile robot determines a relative spatial position between the corresponding second obstacle and the mobile robot based on the second obstacle information.
  • Step S420 is the same as or similar to step S220, and details are not described herein again.
  • step S430 the mobile robot obtains an image including the second obstacle based on the relative spatial position.
  • Step S430 is the same as or similar to step S230, and details are not described herein again.
  • step S440 the mobile robot generates sample information based on the acquired image and the relative spatial position.
  • Step S440 is the same as or similar to step S240 described above, and details are not described herein again.
  • step S450 the mobile robot sends the sample information to the server.
  • Step S450 is the same as or similar to step S250, and details are not described herein again.
  • step S460 the server trains the preset software program according to the sample information provided by the mobile robot.
  • Step S460 is the same as or similar to step S310, and details are not described herein again.
  • step S470 the server generates an update data packet for updating the software program according to the training result.
  • Step S470 is the same as or similar to step S320 described above, and details are not described herein again.
  • step S480 the server sends an update data packet to the mobile robot.
  • Step S480 is the same as or similar to step S330, and details are not described herein again.
  • step S490 the mobile robot updates the software program after receiving the update data package fed back by the server.
  • Step S490 is the same as or similar to step S260, and details are not described herein again.
  • a mobile robot is installed with a first software program for identifying a boundary line between an obstacle and the ground in the image, and the first software program includes the network structure and connection mode of a neural network model.
  • the mobile robot recognizes the boundary between the first obstacle and the ground based on the first software program to determine the location of the obstacle, and plans a navigation route for obstacle avoidance movement based on the determined obstacle.
  • since there may be an obstacle that is not recognized, or that does not appear in the image because it was newly added (hereinafter called a second obstacle to distinguish it from the first obstacle), the mobile robot may collide with the second obstacle while moving according to the navigation route.
  • when the collision sensor provided on the mobile robot reports collision information, the mobile robot is controlled to retreat a preset distance along the navigation route; based on the movement sensor of the mobile robot, the relative spatial position between the collided second obstacle and the mobile robot is obtained, and an image containing the second obstacle is re-captured at this position.
  • the mobile robot generates sample information including sample input information and sample output information based on the image containing the second obstacle and the relative spatial position between the second obstacle and the mobile robot.
  • the mobile robot preprocesses the image containing the second obstacle to obtain sample input information, and maps the second obstacle onto the sample input information based on the relative spatial position between the second obstacle and the mobile robot to obtain sample output information.
  • the mobile robot sends the sample information to the server.
  • the server trains the first software program to be updated based on the received sample information and generates an update data packet to send to the mobile robot.
  • the mobile robot updates the first software program after receiving the update data packet, so that the mobile robot recognizes obstacles and plans a navigation route based on the updated software program.
  • the application also provides a software update system for a mobile robot. The mobile robot is in communication with a server, and the software update system is set in the mobile robot, so that the mobile robot collects sample information generated in a real application environment based on the software update system and, through data communication between the mobile robot and the server, obtains an update data package for software update.
  • the mobile robot includes a camera device and a detection device.
  • the mobile robot includes a camera device, and the mobile robot performs related operations according to images captured by the camera device.
  • the mobile robot may also be configured with multiple camera devices. In this application, the mobile robot performs related operations based only on images captured by one of the multiple camera devices; in this case, it is also considered a mobile robot with a monocular camera.
  • the imaging device may capture still images at different times taken at preset time intervals.
  • the camera device can shoot video. Since video is composed of image frames, image frames can be collected continuously or discontinuously from the acquired video, and one frame can be selected as one image.
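  • A minimal frame-sampling sketch of that step, assuming OpenCV is available on the robot; the stride is an arbitrary illustrative value.

```python
import cv2  # assumes OpenCV is available on the robot

def sample_frames(video_source, every_n=10):
    """Grab every n-th frame of the camera's video stream so that a single
    frame can serve as one captured image."""
    capture = cv2.VideoCapture(video_source)
    frames, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            frames.append(frame)
        index += 1
    capture.release()
    return frames
```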
  • the detection device of the mobile robot is a device for sensing the relationship between the mobile robot and an object in its application scene.
  • the detection device includes, but is not limited to, a laser ranging sensor for detecting the distance between the mobile robot and an object, a collision sensor for sensing a collision relationship between the mobile robot and an obstacle, and the like.
  • a software program stored in the mobile robot and executed by the mobile robot is used to identify at least the first obstacle information in the image captured by the camera device, for the mobile robot to plan a navigation route based on the identified first obstacle information.
  • the software program includes a first software program for identifying a boundary line between an obstacle and the ground in the image, or a second software program for identifying an object in the image, or a combination thereof.
  • the first software program includes a program that performs the following steps: identifying an image area of an object from an image captured by the camera device, determining the ground image area from the image, and using the intersection of the ground image area and the object image area as the boundary line between the obstacle and the ground.
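  • A rough stand-in for that intersection rule, assuming the object and ground areas are available as binary masks (NumPy arrays); the neighbour test is a simplification of how a boundary line could be derived, not the application's exact rule.

```python
import numpy as np

def boundary_pixels(object_mask, ground_mask):
    """Mark ground pixels whose upper neighbour is an object pixel; with
    image rows growing downward, these lie where the object image area
    meets the ground image area."""
    obj = object_mask.astype(bool)
    gnd = ground_mask.astype(bool)
    shifted = np.zeros_like(obj)
    shifted[1:, :] = obj[:-1, :]   # each pixel's upper neighbour
    return gnd & shifted
```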
  • the first software program includes a network structure and a connection manner of a neural network model.
  • the software program uses a neural network to identify the boundary between the first obstacle and the ground in the image.
  • the neural network includes a trained CNN (Convolutional Neural Network), and the CNN is used to identify the boundary between the first obstacle and the ground in the input image.
  • the second software program includes a program that performs the following steps: identifying, from the image captured by the camera device, objects that match pre-labeled object type tags, and then using the corresponding object type tags to characterize the objects so that the mobile robot can perform route planning and other operations.
  • the object type tag may be characterized by an image feature of the object, and the image feature can identify a target object in the image.
  • the object type tags include, but are not limited to, image features such as tables, chairs, sofas, flower pots, shoes, socks, tiled objects, and cups.
  • the tiled objects include, but are not limited to, floor mats, floor tile maps, and tapestries, wall paintings, etc. hanging on the walls.
  • the second software program includes a software program implementing an image recognition method, such as a neural-network-based image recognition algorithm or a wavelet-moment-based image recognition algorithm, to process, analyze, and identify the captured image and obtain the object area corresponding to an object type tag in the image.
  • the object region may be characterized by features such as the gray level of the object and the contour of the object.
  • the manner in which the object region is represented by the contour of the object includes obtaining the identified object region by a contour line extraction method.
  • the contour line extraction method includes, but is not limited to, methods such as binarization, grayscale thresholding, and Canny operators.
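  • A short OpenCV sketch of such a contour-extraction pipeline, combining binarization and the Canny operator; the thresholds are illustrative values, not ones taken from this application.

```python
import cv2

def extract_object_contours(image_bgr):
    """Grayscale, binarize, detect Canny edges, then trace the contours.
    The thresholds are illustrative."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(binary, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```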
  • based on the correspondence between object content and image characteristics such as features, structure, relationships, texture, and gray level, similarity and consistency are evaluated to find similar image targets, so that the object area in the image corresponds to a pre-labeled object type tag.
  • the object type tag may be characterized by an object classification.
  • the second software program includes a neural network model (such as CNN) obtained through pre-training, and recognizes an object region corresponding to each object type label from an image by executing the neural network.
  • the object type labels include, but are not limited to, tables, chairs, sofas, flower pots, shoes, socks, tiled objects, cups, and unknown objects.
  • the tiled objects include, but are not limited to, floor mats, floor tile maps, and tapestries, wall paintings, etc. hanging on the walls.
  • the unknown object refers to a classification that cannot be recognized by the trained neural network model and is recognized as an object, and generally includes objects randomly appearing indoors, such as debris, toys, and the like.
  • the first software program and the second software program may be designed with a high degree of coupling, in which case a more complicated neural network model is used to simultaneously identify the boundary line and the object region corresponding to the object type label.
  • the identified boundary lines and objects (that is, object areas) in the image are provided, through a program interface, to other software programs in the mobile robot; those software programs read the first obstacle information and use it to perform map drawing, navigation route planning, object identification control, control decisions, software updates, and the like.
  • the other software programs include, but are not limited to, at least one of: a third software program for calculating the relative spatial position in physical space between the boundary line and the mobile robot; a fourth software program for planning a navigation route based on the relative spatial position and a preset map; and a fifth software program for generating the sample information.
  • the first software program, the second software program, and the other software programs are not necessarily stored independently; they may also be packaged in an APP.
  • a relative spatial position between the physical position corresponding to the boundary line and the mobile robot is calculated.
  • the preset physical reference information includes, but is not limited to, at least two of the following: a physical height of the camera device from the ground, a physical parameter of the camera device, and an included angle of a main optical axis of the camera device with respect to a horizontal or vertical plane.
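  • Using at least the camera height and the included angle, the pixel row of a boundary point can be converted back into a physical forward distance by inverting the pinhole ground-plane model sketched earlier; the function below is a simplified illustration under those assumptions, with hypothetical intrinsics.

```python
import math

def pixel_to_ground_distance(v, cam_height_m, pitch_deg, fy, cy):
    """From the pixel row v of a boundary point, recover the forward
    distance between the physical boundary position and the camera,
    inverting the pinhole ground-plane model."""
    pitch = math.radians(pitch_deg)
    depression = math.atan2(v - cy, fy) + pitch
    if depression <= 0:
        return float("inf")  # at or above the horizon: not a ground point
    return cam_height_m / math.tan(depression)
```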
  • FIG. 5 is a schematic structural diagram of a software update system for a mobile robot in an embodiment of the present application.
  • the software update system includes an obtaining unit 51, a sample generating unit 52, a first sending unit 53, and a first updating unit 54.
  • the obtaining unit 51 is configured to obtain second obstacle information detected by the detection device on the navigation route during the movement of the mobile robot according to the navigation route.
  • the sample generating unit 52 is configured to generate sample information based on the second obstacle information and an image of the second obstacle including the second obstacle information.
  • the sample generation unit 52 includes a determination module, an acquisition module, and a sample generation module.
  • the determining module is configured to determine a relative spatial position between the corresponding second obstacle and the mobile robot based on the second obstacle information.
  • the determination module includes a first determination module and a second determination module.
  • the first determining module is configured to control the mobile robot to retreat a distance along the navigation route based on the collision information in the second obstacle information, and to determine the relative spatial position between the second obstacle and the mobile robot by detecting the route of the return movement.
  • the second determining module is configured to determine the relative spatial position between the corresponding second obstacle and the mobile robot based on the relative spatial position in the second obstacle information.
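  • A simplified illustration of the first determining module's computation: after the preset retreat along the route, the collided obstacle is taken to lie ahead at roughly the backtracked distance. Bumper offset and heading drift are ignored, and the odometry frame is an assumption for illustration.

```python
import math

def obstacle_position_after_backtrack(heading_deg, backtrack_m):
    """After a collision, the robot retreats a preset distance along the
    navigation route; the collided second obstacle then lies ahead at
    roughly that distance. Returns (dx, dy) in the odometry frame."""
    heading = math.radians(heading_deg)
    return (backtrack_m * math.cos(heading),
            backtrack_m * math.sin(heading))
```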
  • the obtaining module is configured to obtain an image including the second obstacle based on the relative spatial position.
  • the acquisition module includes a first acquisition module and a second acquisition module.
  • the first acquisition module is configured to select, from the buffered images, an image including the second obstacle that was captured by the imaging device at the position corresponding to the relative spatial position.
  • the second acquisition module is configured to re-capture an image including a second obstacle based on the relative spatial position.
  • the sample generating module is used for generating sample information based on the acquired image and the relative spatial position.
  • the sample information includes sample input information and sample output information.
  • the sample generating module includes an input sample generating module and an output sample generating module.
  • the input sample generation module is configured to generate sample input information based on the acquired image.
  • the input sample generation module is configured to directly use the acquired image as the sample input information.
  • the input sample generation module may also preprocess the acquired image and use the preprocessed image as the sample input information.
  • the output sample generation module is configured to generate sample output information based on the acquired image and the relative spatial position.
  • the output sample generation module is configured to map the second obstacle onto the sample input information according to the relative spatial position to obtain the sample output information.
  • alternatively, the output sample generation module is configured to encapsulate the relative spatial position, the sample input information, and the pre-stored physical reference information for mapping the second obstacle to the sample input information as the sample output information.
  • the first sending unit 53 is configured to send the sample information to a server.
  • the first updating unit 54 is configured to update the software program when an update data packet fed back by the server is received via the first receiving unit.
  • the working mode of each module in the software updating system for a mobile robot in this application is the same as or similar to the corresponding steps in the software updating method for a mobile robot described above, and is not repeated here.
  • the application also provides a software update system for a server, and the server is in communication connection with at least one mobile robot.
  • the server includes, but is not limited to, a single server, a server cluster, a distributed server, a server based on a cloud architecture, and the like.
  • software program update training is implemented on the server side, and the update data package generated on the server side is sent to the mobile robot side to update the original software program, thereby reducing the amount of data calculation on the mobile robot side; at the same time, the data uploaded by multiple mobile robot sides can conveniently be trained uniformly on the server side, saving software update costs.
  • FIG. 6 is a schematic structural diagram of an embodiment of a software update system for a server in this application.
  • the software update system includes a second receiving unit 61, a training unit 62, an update generating unit 63, and a second sending unit 64.
  • the second receiving unit 61 receives sample information provided by at least one mobile robot.
  • the sample information includes at least sample input information of a second obstacle that cannot be identified by the backed up software program.
  • the sample input information includes at least the sample input information collected by the mobile robot according to the manner shown in FIG. 1 and FIG. 2, which is not described in detail here.
  • the sample information further includes sample output information, which provides a correct, identifiable training result for use in training.
  • the sample output information includes at least the sample output information collected by the mobile robot in the manner shown in FIG. 1 and FIG. 2, which is not described in detail here.
  • the sample output information may be generated based on the selected image and the relative spatial position between the mobile robot and the second obstacle. For example, the boundary between the first obstacle and the ground in the image is identified based on the software program to be updated, and the second obstacle is mapped into the image using the correspondence between the pixel coordinate space of the image and the actual physical coordinate space together with the relative spatial position between the second obstacle and the mobile robot.
  • the sample output information then covers both the first obstacle and the second obstacle, and can display an image showing the boundary line between the first obstacle and the ground and the boundary line between the second obstacle and the ground.
  • the sample output information includes relative spatial position, sample input information, and pre-stored physical reference information for mapping the second obstacle to the sample input information.
  • the physical reference information includes, but is not limited to, a physical height of the camera device from the bottom surface of the mobile robot, physical parameters of the camera device, and an included angle of a main optical axis of the camera device with respect to a horizontal or vertical plane.
  • the software update system further includes an output sample generating unit, which is configured to generate the sample output information based on the received relative spatial position, sample input information, and pre-stored physical reference information for mapping the second obstacle to the sample input information.
  • the sample information may include only the sample input information, and the server provides another human-computer interaction interface to mark the sample output information in the sample input information.
  • the sample input information may be the image described above, or an image on which the first obstacle information has already been marked; the server provides a technician with a human-computer interaction interface for marking the second obstacle, and uses the image marked with the second obstacle as the sample output information.
  • the training unit 62 is configured to train a preset software program according to the sample information provided by the at least one mobile robot.
  • the training method includes, but is not limited to, adjusting internal parameters of the software program, configuration information of the software program, and the like.
  • the software program includes the network structure and connection mode of the neural network model; the parameters in the neural network model are trained by a back propagation algorithm to improve the accuracy of the neural network model, and the server encapsulates the trained parameters of the neural network model into update data packages.
  • the software program is at least used to identify first obstacle information in an image captured by a camera device of the mobile robot, for the mobile robot to plan a navigation route based on the identified first obstacle information.
  • the software program is a backup program corresponding to the mobile robot and stored on the server.
  • the software program includes updatable data such as internal parameters of the program and program configuration information that can be adjusted by training.
  • the software program includes the network structure and connection mode of a neural network model; after training, the parameters in the neural network model are adjusted, and the server encapsulates the trained parameters of the neural network model into the update data package.
  • the training unit is configured to train a preset software program according to sample information provided by all mobile robots. In other embodiments, the training unit is configured to train a corresponding software program of each mobile robot according to the sample information provided by each mobile robot.
  • the update generating unit 63 is configured to generate an update data packet for updating the software program according to the training result.
  • the second sending unit 64 is configured to send the update data packet to a mobile robot to update a software program built in the mobile robot.
  • the working modes of the modules in the software update system for the server of the present application are the same as or similar to the corresponding steps in the software update method for the server, and are not repeated here.
  • the application also provides a software update system for updating a software program in a mobile robot.
  • the mobile robot includes a camera device and a detection device.
  • a software program stored in the mobile robot and executed by the mobile robot is used to identify at least the first obstacle information in the image captured by the camera device, for the mobile robot to plan a navigation route based on the identified first obstacle information.
  • FIG. 7 is a schematic structural diagram of a software update system of the present application in an implementation manner.
  • the software update system includes a client system 71 on the mobile robot side and a server system 72 communicatively connected with at least one client system 71.
  • a specific implementation manner in which the client system 71 executes the software update method for a mobile robot is shown in FIG. 1 to FIG. 2 and their corresponding descriptions, and details are not described herein again.
  • a specific implementation manner in which the server-side system 72 executes the software update method for the server is shown in FIG. 3 and its corresponding description, and details are not described herein again.
  • the specific implementation of the interaction between the client system 71 and the server system 72 in the software update system is shown in FIG. 4 and its corresponding description, and is not repeated here.
  • the present application also provides a mobile robot.
  • the mobile robot performs behaviors such as obstacle recognition, navigation route planning, and movement control through a software program provided on the mobile robot.
  • the mobile robot includes a camera device and a detection device.
  • the mobile robot includes a camera device, and the mobile robot performs related operations according to images captured by the camera device.
  • the mobile robot may also be configured with multiple camera devices. In this application, the mobile robot performs related operations based only on images captured by one of the multiple camera devices; in this case, it is also considered a mobile robot with a monocular camera.
  • the imaging device may capture still images at different times taken at preset time intervals.
  • the camera device can shoot video. Since video is composed of image frames, image frames can be collected continuously or discontinuously from the acquired video, and one frame can be selected as one image.
  • the detection device of the mobile robot is a device for sensing the relationship between the mobile robot and an object in its application scene.
  • the detection device includes, but is not limited to, a laser ranging sensor for detecting the distance between the mobile robot and an object, a collision sensor for sensing a collision relationship between the mobile robot and an obstacle, and the like.
  • the mobile robot includes, but is not limited to, a sweeping robot, a patrol robot, a home companion robot, and the like.
  • FIG. 8 is a schematic structural diagram of a mobile robot according to an embodiment of the present application.
  • the mobile robot includes a camera device 81, a mobile device 82, a storage device 83, and a processing device 84.
  • the imaging device 81, the mobile device 82, and the storage device 83 are all connected to the processing device 84.
  • the imaging device 81 is used to capture images during the movement of the mobile robot.
  • the mobile robot includes a camera device, and the mobile robot performs operations such as obstacle recognition, robot positioning, robot movement control, and the like according to images captured by the camera device.
  • the mobile robot includes one or more camera devices, and the mobile robot performs related operations only based on images captured by one of the camera devices.
  • the imaging device includes, but is not limited to, a camera, a video camera, a camera module integrated with an optical system or a CCD chip, a camera module integrated with an optical system and a CMOS chip, and the like.
  • the power supply system of the camera device may be controlled by the power supply system of the mobile robot, and the camera device captures images of the route traveled by the mobile robot during its movement.
  • the camera device may be disposed on the casing of the mobile robot and assembled at a side position or a top edge position of the mobile robot.
  • the imaging device is mounted on the top surface of the cleaning robot and at a position on the body side.
  • the mounting position of the camera device is also related to the range of the field of view (also known as the angle of view) of the camera device, the height of the camera device relative to the ground, and the included angle of the main optical axis of the camera device relative to the horizontal or vertical plane. Therefore, the position of the camera in the mobile robot is not limited herein.
  • an imaging device mounted on the mobile robot has an adjustment member that can adjust the included angle; during the movement of the mobile robot, the adjustment member is adjusted so that the imaging device captures an image including the ground.
  • the adjusting member may be, for example, a deflecting mechanism and a telescopic mechanism described in Patent Application No.
  • the camera device is mounted on the top edge of the cleaning robot and has a viewing angle of 60°, and the angle between the main optical axis of the camera device and the horizontal plane is 15°.
  • the included angle of the main optical axis of the imaging device with respect to the horizontal plane may be other values, as long as it can ensure that the imaging device can capture the ground image area when capturing the image.
  • the included angle between the main optical axis and the vertical or horizontal line is only an example, and the included-angle precision is not limited to 1°; the precision can be higher, such as 0.1°, 0.01°, etc., and exhaustive examples are not given here.
  • the mobile device 82 is used to move the mobile robot as a whole in a controlled manner.
  • the moving device 82 adjusts a moving distance, a moving direction, a moving speed, a moving acceleration, and the like under the control of the processing device 84.
  • the mobile device 82 includes a driving unit and at least two roller groups, wherein at least one of the at least two roller groups is a controlled roller group.
  • the driving unit is connected to the processing device, and the driving unit is configured to drive the controlled wheel group to roll based on a movement control instruction output by the processing device.
  • the driving unit includes a driving motor, and the driving motor is connected to the roller group for directly driving the roller group to roll.
  • the driving unit may include one or more processors (CPUs) or micro processing units (MCUs) dedicated to controlling a driving motor.
  • the micro processing unit is configured to convert information or data provided by the processing device into an electric signal for controlling the driving motor, and to control the rotation speed, steering, etc. of the driving motor according to the electric signal so as to adjust the moving speed and direction of the mobile robot.
  • the information or data includes, for example, a deflection angle determined by the processing device.
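  • For a differential-drive robot, the conversion the micro processing unit performs might resemble the following sketch, turning the processing device's linear speed and deflection-angle rate into left/right wheel speeds; the function, its signature, and the wheel-base value are illustrative assumptions.

```python
def wheel_speeds(linear_mps, turn_rate_rad_s, wheel_base_m=0.25):
    """Differential-drive conversion: turn the processing device's linear
    speed and deflection-angle rate into left/right wheel speeds."""
    left = linear_mps - turn_rate_rad_s * wheel_base_m / 2.0
    right = linear_mps + turn_rate_rad_s * wheel_base_m / 2.0
    return left, right
```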
  • the processor in the driving unit may be shared with the processor in the processing device or may be independently set.
  • the driving unit functions as a slave processing device, the processing device functions as a master device, and the driving unit performs movement control based on the control of the processing device.
  • the driving unit is shared with a processor in the processing device.
  • the drive unit receives data provided by the processing device through a program interface.
  • the driving unit is configured to control the controlled wheel group to roll based on a movement control instruction provided by the processing device.
  • the storage device 83 is configured to store images captured by the imaging device 81, preset physical reference information, pre-labeled object tags, and at least one program.
  • images captured by the imaging device are stored in the storage device 83.
  • the physical reference information includes, but is not limited to, a physical height of the camera device from the bottom surface of the mobile robot, physical parameters of the camera device, and an included angle of a main optical axis of the camera device with respect to a horizontal or vertical plane.
  • the technician measures the distance between the imaging center of the imaging device and the ground in advance, and saves the distance as the physical height or the initial value of the physical height in the storage device 83.
  • the physical height may also be obtained by calculating design parameters of the mobile robot in advance. According to the design parameters of the mobile robot, the included angle of the main optical axis of the camera device with respect to the horizontal or vertical plane, or the initial value of the included angle can also be obtained.
  • the saved included angle can be determined by increasing/decreasing the adjusted deflection angle on the basis of the initial value of the included angle.
  • the saved physical height is determined on the basis of the initial value of the physical height by increasing/decreasing the adjusted height.
  • the physical parameters of the imaging device include the angle of view and the focal length of the lens group.
  • the object tags are pre-screened and stored in the storage device 83 based on the environmental conditions moved by the mobile robot.
  • the object tag is used to describe an object classification or an image feature of an object in an image that may be captured and placed in the environment.
  • the object tag may be characterized by an image feature of the object that is capable of identifying a target object in the image.
  • the object tags include, but are not limited to, image features such as tables, chairs, sofas, flower pots, shoes, socks, and tiled objects.
  • the tiled objects include, but are not limited to, floor mats, floor tile maps, and tapestries, wall paintings, etc. hanging on the walls.
  • the object tags may be characterized by object classification.
  • the program stored in the storage device includes a neural network model (such as CNN) obtained through pre-training, and an object region corresponding to each object type label is identified from an image by executing the neural network model.
  • the object type labels include, but are not limited to, tables, chairs, sofas, flower pots, shoes, socks, tiled objects, cups, and unknown objects.
  • the tiled objects include, but are not limited to, floor mats, floor tile maps, and tapestries, wall paintings, etc. hanging on the walls.
  • the unknown object refers to a classification that cannot be recognized by the trained neural network model and is recognized as an object, and generally includes objects randomly appearing indoors, such as debris, toys, and the like.
  • the program stored in the storage device 83 further includes a related program, called and executed by the processing device described later, for performing software update processing based on images captured by a single imaging device (a monocular imaging device).
  • the storage device 83 includes, but is not limited to, high-speed random access memory and non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the storage device 83 may further include memory remote from the one or more processors, such as network-attached storage accessed via an RF circuit or an external port and a communication network (not shown), where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), or a suitable combination thereof.
  • the memory controller can control access to the storage device by other components of the robot, such as the CPU and peripheral interfaces.
  • the processing device 84 performs data communication with the storage device 83, the imaging device 81, and the mobile device 82.
  • the processing device 84 may include one or more processors.
  • the processing device 84 is operatively coupled with a volatile memory and / or a non-volatile memory in the storage device 83.
  • the processing device 84 may execute instructions stored in a memory and / or a non-volatile storage device to perform operations in the robot, such as obtaining obstacle information, generating sample information, updating software programs, and the like.
  • the processor may include one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), and one or more field programmable gate arrays (FPGAs).
  • the processing device is also operatively coupled with an I/O port and an input structure; the I/O port enables the robot to interact with various other electronic devices, and the input structure enables a user to interact with a computing device. Therefore, the input structure may include a button, a keyboard, a mouse, a touchpad, and the like.
  • the other electronic equipment may be a mobile motor in the mobile device of the robot, or a slave processor, such as an MCU (Microcontroller Unit), dedicated to controlling the mobile device of the robot.
  • the processing device is respectively connected to a storage device, an imaging device, and a mobile device through a data cable.
  • the processing device interacts with the storage device through data reading and writing technology, and the processing device interacts with the camera device and the mobile device through an interface protocol.
  • the data read-write technology includes, but is not limited to, high-speed / low-speed data interface protocols, database read-write operations, and the like.
  • the interface protocol includes, but is not limited to, an HDMI interface protocol, a serial interface protocol, and the like.
  • a specific implementation manner of the software update method performed by the processing device 84 is shown in FIG. 1 to FIG. 2 and their corresponding descriptions, and details are not described herein again.
  • the mobile robot of the present application obtains the second obstacle information through the processing device to generate sample information, and updates the software program when receiving an update data package generated by the server based on the sample information, thereby improving the accuracy of the mobile robot in identifying obstacles and reducing collision rates when moving along a navigation route planned based on updated obstacle information.
  • the application also provides a server.
  • the server trains the software program backed up on the server and sends the update data package generated on the server side to the mobile robot side to update the original software program, thereby reducing the amount of data calculation on the mobile robot side while making it convenient to use the data uploaded by multiple mobile robots for unified training on the server side, saving software update costs.
  • FIG. 9 is a schematic structural diagram of an embodiment of a server of the present application.
  • the server includes a storage unit 91 and a processing unit 92.
  • the storage unit 91 is configured to store at least one program.
  • the program is called by a processing unit 92 described later to execute the above-mentioned software update method for a server.
  • the storage unit also stores a software program to be updated, captured images, preset physical reference information, and pre-labeled object tags for use in generating an update data packet.
  • the software program to be updated is at least used to identify first obstacle information in an image captured by the camera device, so that the mobile robot can plan a navigation route based on the identified first obstacle information.
  • the physical reference information includes, but is not limited to, a physical height of the camera device from the ground, physical parameters of the camera device, and an included angle of a main optical axis of the camera device with respect to a horizontal or vertical plane.
  • the object tags are pre-screened and stored in the storage unit 91 based on the environmental conditions moved by the mobile robot.
  • the object tag is used to describe an object classification or an image feature of an object in an image that may be captured and placed in the environment.
  • the object type tag may be characterized by an image feature of the object, and the image feature can identify a target object in the image.
  • the object type tags include, but are not limited to, image features such as tables, chairs, sofas, flower pots, shoes, socks, tiled objects, and cups.
  • the tiled objects include, but are not limited to, floor mats, floor tile maps, and tapestries, wall paintings, etc. hanging on the walls.
  • the object type tag may be characterized by an object classification.
  • the second software program includes a neural network model (such as CNN) obtained through pre-training, and recognizes an object region corresponding to each object type label from an image by executing the neural network model.
  • the object type labels include, but are not limited to: tables, chairs, sofas, flower pots, shoes, socks, tiled objects, cups, unknown objects, etc.
  • the tiled objects include, but are not limited to, floor mats, floor tile maps, and tapestries, wall paintings, etc. hanging on the walls.
  • the unknown object refers to a classification that cannot be recognized by the trained neural network model and is recognized as an object, and generally includes objects randomly appearing indoors, such as debris, toys, and the like.
  • the storage unit may include a high-speed random access memory, and may further include a non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the storage unit also includes a memory controller that can control access to the memory by other components of the device, such as a CPU and peripheral interfaces.
  • the processing unit 92 is configured to call at least one program in the storage unit and execute a software update method for the server.
  • the processing unit 92 performs data communication with the storage unit 91.
  • the processing unit 92 may execute instructions stored in a storage unit to perform operations in a server.
  • a specific implementation manner of the processing unit executing the software update method for the server is shown in FIG. 3 and a corresponding description thereof, and details are not described herein again.
  • the server of the present application trains the software program based on the sample information provided by the mobile robot side and generates an update data package to send to the mobile robot for updating the software, so that the software update training of the mobile robot can be performed on the server side, reducing the amount of data calculation on the mobile robot side; at the same time, through unified training on the data provided by all mobile robot sides, the update efficiency is improved.
  • the present application also provides a computer storage medium that stores at least one program that, when called, executes any one of the foregoing software update methods for a mobile robot.
  • the present application also provides a computer storage medium, where the storage medium stores at least one program that, when called, executes any one of the foregoing software update methods for a server.
  • the technical solution of the present application, in essence or in its contribution over the existing technology, may be embodied in the form of a software product. The computer software product may include one or more machine-readable media with instructions stored thereon; when executed by one or more machines, such as a computer, computer network, or other electronic device, the instructions may cause the one or more machines to perform operations according to embodiments of the present application, for example, each step of the software update methods described above.
  • the machine-readable medium may include, but is not limited to, a floppy disk, an optical disk, a CD-ROM (compact disc read-only memory), a magneto-optical disk, a ROM (read-only memory), a RAM (random access memory), an EPROM (erasable programmable read-only memory), an EEPROM (electrically erasable programmable read-only memory), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.
  • the storage medium may be located in a robot or in a third-party server, such as a server that provides an application store. There are no restrictions on the specific application store, for example the Huawei application store or the Apple application store.
  • This application can be used in many general-purpose or special-purpose computing system environments or configurations.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • the present application can also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network.
  • program modules may be located in local and remote computer storage media, including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Security & Cryptography (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to a software update method and system, a mobile robot, and a server. The software update method includes the following steps: while the mobile robot moves according to a navigation route, obtaining second obstacle information detected by a detection device on the navigation route; generating sample information based on the second obstacle information and an image containing the second obstacle that generated the second obstacle information; sending the sample information to a server; and updating a software program when an update data package fed back by the server is received. According to the present application, obtaining the second obstacle information to generate the sample information, and updating the software program when the update data package generated by the server based on the sample information is received, improve the accuracy with which the mobile robot identifies obstacles, so that the collision rate is reduced when the mobile robot moves along the navigation route planned based on the updated obstacle information.
PCT/CN2018/090503 2018-06-08 2018-06-08 Procédé et système de mise à jour de logiciel, et robot mobile et serveur WO2019232804A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880000819.4A CN108780319A (zh) 2018-06-08 2018-06-08 软件更新方法、系统、移动机器人及服务器
PCT/CN2018/090503 WO2019232804A1 (fr) 2018-06-08 2018-06-08 Procédé et système de mise à jour de logiciel, et robot mobile et serveur

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/090503 WO2019232804A1 (fr) 2018-06-08 2018-06-08 Procédé et système de mise à jour de logiciel, et robot mobile et serveur

Publications (1)

Publication Number Publication Date
WO2019232804A1 true WO2019232804A1 (fr) 2019-12-12

Family

ID=64029072

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090503 WO2019232804A1 (fr) 2018-06-08 2018-06-08 Procédé et système de mise à jour de logiciel, et robot mobile et serveur

Country Status (2)

Country Link
CN (1) CN108780319A (fr)
WO (1) WO2019232804A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827120A (zh) * 2022-05-05 2022-07-29 深圳市大道智创科技有限公司 机器人的远程交互方法、装置及计算机设备

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108780319A (zh) * 2018-06-08 2018-11-09 珊口(深圳)智能科技有限公司 软件更新方法、系统、移动机器人及服务器
CN109739239B (zh) * 2019-01-21 2021-09-21 天津迦自机器人科技有限公司 一种用于巡检机器人的不间断仪表识别的规划方法
CN111583336B (zh) * 2020-04-22 2023-12-01 深圳市优必选科技股份有限公司 一种机器人及其巡检方法和装置
EP4145339A4 (fr) * 2020-05-11 2023-05-24 Huawei Technologies Co., Ltd. Procédé et système de détection de zone de conduite de véhicule, et véhicule à conduite automatique mettant en oeuvre le système
CN112269379B (zh) * 2020-10-14 2024-02-27 北京石头创新科技有限公司 障碍物识别信息反馈方法
CN113017492A (zh) * 2021-02-23 2021-06-25 江苏柯林博特智能科技有限公司 一种基于清洁机器人的物体识别智能控制系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103885443A (zh) * 2012-12-20 2014-06-25 联想(北京)有限公司 用于即时定位与地图构建单元的设备、系统和方法
CN107223200A (zh) * 2016-12-30 2017-09-29 深圳前海达闼云端智能科技有限公司 一种导航方法、装置及终端设备
CN107643755A (zh) * 2017-10-12 2018-01-30 南京中高知识产权股份有限公司 一种扫地机器人的高效控制方法
WO2018093055A1 (fr) * 2016-11-17 2018-05-24 Samsung Electronics Co., Ltd. Système du type robot mobile et robot mobile
CN108780319A (zh) * 2018-06-08 2018-11-09 珊口(深圳)智能科技有限公司 软件更新方法、系统、移动机器人及服务器

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101920498A (zh) * 2009-06-16 2010-12-22 泰怡凯电器(苏州)有限公司 实现室内服务机器人同时定位和地图创建的装置及机器人
US9704043B2 (en) * 2014-12-16 2017-07-11 Irobot Corporation Systems and methods for capturing images and annotating the captured images with information
US10540777B2 (en) * 2015-06-10 2020-01-21 Hitachi, Ltd. Object recognition device and object recognition system
TWI578739B (zh) * 2015-06-25 2017-04-11 Chunghwa Telecom Co Ltd Obstacle diagnosis system and method thereof
CN105922990B (zh) * 2016-05-26 2018-03-20 广州市甬利格宝信息科技有限责任公司 一种基于云端机器学习的车辆环境感知和控制方法
CN106228110B (zh) * 2016-07-07 2019-09-20 浙江零跑科技有限公司 一种基于车载双目相机的障碍物及可行驶区域检测方法
CN107818293A (zh) * 2016-09-14 2018-03-20 北京百度网讯科技有限公司 用于处理点云数据的方法和装置
CN107871129B (zh) * 2016-09-27 2019-05-10 北京百度网讯科技有限公司 用于处理点云数据的方法和装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103885443A (zh) * 2012-12-20 2014-06-25 联想(北京)有限公司 用于即时定位与地图构建单元的设备、系统和方法
WO2018093055A1 (fr) * 2016-11-17 2018-05-24 Samsung Electronics Co., Ltd. Système du type robot mobile et robot mobile
CN107223200A (zh) * 2016-12-30 2017-09-29 深圳前海达闼云端智能科技有限公司 一种导航方法、装置及终端设备
CN107643755A (zh) * 2017-10-12 2018-01-30 南京中高知识产权股份有限公司 一种扫地机器人的高效控制方法
CN108780319A (zh) * 2018-06-08 2018-11-09 珊口(深圳)智能科技有限公司 软件更新方法、系统、移动机器人及服务器

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827120A (zh) * 2022-05-05 2022-07-29 深圳市大道智创科技有限公司 机器人的远程交互方法、装置及计算机设备

Also Published As

Publication number Publication date
CN108780319A (zh) 2018-11-09

Similar Documents

Publication Publication Date Title
WO2019232806A1 (fr) Procédé et système de navigation, système de commande mobile et robot mobile
CN109074083B (zh) 移动控制方法、移动机器人及计算机存储介质
WO2019232804A1 (fr) Procédé et système de mise à jour de logiciel, et robot mobile et serveur
WO2020113452A1 (fr) Procédé et dispositif de surveillance pour cible mobile, système de surveillance et robot mobile
WO2021026831A1 (fr) Robot mobile, et procédé de commande et système de commande associés
US10705535B2 (en) Systems and methods for performing simultaneous localization and mapping using machine vision systems
US11400600B2 (en) Mobile robot and method of controlling the same
WO2019114219A1 (fr) Robot mobile et son procédé de contrôle et son système de contrôle
CN109643127B (zh) 构建地图、定位、导航、控制方法及系统、移动机器人
WO2019095681A1 (fr) Procédé et système de positionnement, et robot approprié
WO2020140271A1 (fr) Procédé et appareil de commande de robot mobile, robot mobile, et support de stockage
WO2019090833A1 (fr) Système et procédé de positionnement et robot mettant en œuvre lesdits système et procédé
US10513037B2 (en) Control method and system, and mobile robot using the same
CA3117899A1 (fr) Procede et appareil permettant de combiner des donnees pour construire un plan en relief
WO2022078467A1 (fr) Procédé et appareil de recharge automatique de robot, robot et support de stockage
WO2018228256A1 (fr) Système et procédé de détermination d'emplacement cible de tâche d'intérieur par mode de reconnaissance d'image
CN110928301A (zh) 一种检测微小障碍的方法、装置及介质
US11348276B2 (en) Mobile robot control method
WO2021143543A1 (fr) Robot et son procédé de commande
Momeni-k et al. Height estimation from a single camera view
Maier et al. Vision-based humanoid navigation using self-supervised obstacle detection
WO2018228254A1 (fr) Dispositif électronique mobile et procédé pour utilisation dans un dispositif électronique mobile
WO2019113859A1 (fr) Procédé et dispositif de construction d'une paroi virtuelle basés sur la vision artificielle, procédé de construction d'une carte et dispositif électronique portable
JP7327596B2 (ja) 自律移動装置、自律移動装置のレンズの汚れ検出方法及びプログラム
WO2019104739A1 (fr) Dispositif de restriction, robot visuel automoteur et son procédé de commande

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921674

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/02/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18921674

Country of ref document: EP

Kind code of ref document: A1