WO2023036083A1 - Sensor data processing method and system, and readable storage medium - Google Patents

Sensor data processing method and system, and readable storage medium

Info

Publication number
WO2023036083A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
robot
processor
sensor
obstacle
Prior art date
Application number
PCT/CN2022/117055
Other languages
English (en)
Chinese (zh)
Inventor
胡传振
Original Assignee
汤恩智能科技(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 汤恩智能科技(上海)有限公司
Publication of WO2023036083A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0255 Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals

Definitions

  • the present application relates to the technical field of robots, in particular to a sensor data processing method, a system including a robot and a sensor, a mobile robot and a readable storage medium.
  • Robots usually use various sensors such as laser radar, optical camera, depth camera, and ultrasonic sensor to collect external environmental data.
  • In the prior art, the data collected by the various sensors is transmitted in its entirety to the main processor of the robot, and the main processor performs the data processing and the decision-making control of the robot.
  • the depth camera can obtain a variety of data such as high-definition color pictures, infrared pictures, point cloud data, and point cloud confidence information. Transmission of these data to the main processor will consume a large amount of bandwidth and affect the processing frame rate.
  • The purpose of this application is to provide a sensor data processing method, a system including a robot and a sensor, a mobile robot and a readable storage medium, which are used to solve the problem in the prior art that having the main processor process all of the data collected by the sensors puts high pressure on transmission bandwidth and computing resources.
  • The first aspect disclosed in the present application provides a sensor data processing method for a system including a robot and a sensor, comprising the following steps: collecting data of the robot's surrounding environment through a plurality of sensors; the plurality of sensors performing obstacle detection on the collected environmental data to generate local data; the plurality of sensors sending the respectively generated local data to a processor, wherein the processor is the main processor of the robot; and the processor processing the received plurality of local data into a global obstacle distribution map.
  • The second aspect disclosed in the present application provides a system including a robot and a sensor, including: a plurality of sensors for collecting data of the surrounding environment of the robot, each sensor performing obstacle detection on the collected environmental data to generate local data and sending it to the processor; a memory for storing instructions executed by one or more processors of the robot; and the processor, which is the main processor of the robot and is used to process the received local data into a global obstacle distribution map.
  • The third aspect disclosed in the present application provides a mobile robot, including: a plurality of sensors for collecting data of the surrounding environment of the robot, each sensor performing obstacle detection on the collected environmental data to generate local data and sending it to the processing device; a mobile device for performing movement operations; a storage device for storing at least one program; and a processing device, connected with the plurality of sensors, the mobile device and the storage device, for executing the at least one program to perform the sensor data processing method as described in the first aspect above.
  • The fourth aspect disclosed in the present application provides a readable storage medium, the readable storage medium storing at least one computer program which, when run by a processor, controls the device where the storage medium is located to execute the sensor data processing method described in the first aspect above.
  • The sensor data processing method for a system including a robot and a sensor collects data of the surrounding environment of the robot through multiple sensors, performs obstacle detection on the collected environmental data at each sensor, generates local data according to the obstacle detection results, and then sends the local data generated by the multiple sensors to the processor of the robot, which fuses the multiple received local data into a global obstacle distribution map. The collected data is thus processed on the multiple sensors before the processed data is sent to the processor, which reduces the amount of data transmitted between the multiple sensors and the processor, reduces the pressure on the transmission bandwidth, improves the processing efficiency of the processor, and reduces the computing resource consumption of the processor.
  • FIG. 1 shows a schematic diagram of a scene where a robot constructs an obstacle distribution map in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the hardware structure of the robot in an embodiment of the present application.
  • FIG. 3 is a flow chart of an embodiment of the sensor data processing method of the present application.
  • FIG. 4 is a schematic diagram of a local obstacle distribution map generated by two sensors according to obstacle detection results in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a processor processing an acquired local obstacle distribution map in an embodiment of the present application.
  • Fig. 6 shows a flow chart of another embodiment of the sensor data processing method of the present application.
  • FIG. 7 is a flowchart of a method for adjusting the data collection mode of sensors by the processor of the robot in an embodiment.
  • Fig. 8 is a functional block diagram of the internal processor of the sensor in an embodiment of the present application.
  • Fig. 9 is a system block diagram of an embodiment of a system including a robot and a sensor of the present application.
  • Fig. 10 is a system block diagram of another embodiment of the system including a robot and a sensor of the present application.
  • Fig. 11 shows a block diagram of a mobile robot of the present application in an embodiment.
  • A module may refer to or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) that executes one or more software or firmware programs, and/or memory, combinational logic, and/or other suitable hardware components that provide the described functionality, or may be part of such hardware components.
  • the processor may be a microprocessor, a digital signal processor, a microcontroller, etc., and/or any combination thereof.
  • the processor may be a single-core processor, a multi-core processor, etc., and/or any combination thereof.
  • the sensor data processing method of the present application is suitable for robots that process data from various sensors.
  • the present application provides a sensor data processing method for a system including a robot and a sensor.
  • the robot described in the system including a robot and a sensor is a mobile robot
  • The mobile robot refers to an autonomous mobile device capable of building a map in physical space, including but not limited to one or more of drones, industrial robots, home companion mobile devices, medical mobile devices, household cleaning robots, commercial cleaning robots, smart vehicles, patrol robots, and the like.
  • For example, it is a service robot that performs certain tasks in commercial scenarios (such as a cleaning robot, a patrol robot, or a robot that delivers meals/items) or a household robot that performs cleaning or entertainment tasks in household scenarios (such as a sweeping robot or a companion robot).
  • the physical space refers to the actual three-dimensional space where the mobile robot is located, which can be described by abstract data constructed in the space coordinate system.
  • the physical space includes, but is not limited to, family residences, public places (such as offices, shopping malls, hospitals, underground parking lots, and banks) and the like.
  • the physical space usually refers to the indoor space, that is, the space has boundaries in the directions of length, width, and height. In particular, it includes physical spaces such as shopping malls and waiting halls, which have a large space range and a high degree of scene repetition.
  • the sensors described in the system including robots and sensors include, for example, laser sensors, ultrasonic sensors, infrared sensors, optical cameras (such as monocular cameras or binocular cameras), depth cameras (such as ToF sensors), millimeter wave radar sensors, etc.;
  • a laser sensor can determine its distance to an obstacle according to the time difference between the time it emits a laser beam and the time it receives a laser beam;
  • As another example, a binocular camera device can use the triangulation principle to determine the distance of the mobile robot relative to an obstacle according to the images captured by its two cameras. As yet another example, the infrared light projector of a ToF (Time of Flight) sensor projects infrared light outwards; the infrared light is reflected when it encounters the measured obstacle and is received by the receiving module, and by recording the time from emission to reception of the infrared light, the depth information of the illuminated obstacle is obtained.
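  • As a minimal illustrative sketch (not taken from the application), the time-of-flight distance relationship described above can be expressed as follows; the function and variable names are assumptions made for this example:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0  # propagation speed of the emitted light

def tof_distance(emit_time_s: float, receive_time_s: float) -> float:
    """Estimate the distance to an obstacle from a single time-of-flight measurement.

    The light travels to the obstacle and back, so the one-way distance is
    half of the round-trip distance.
    """
    round_trip_s = receive_time_s - emit_time_s
    return SPEED_OF_LIGHT_M_PER_S * round_trip_s / 2.0

# Example: a round trip of 20 nanoseconds corresponds to roughly 3 meters.
print(tof_distance(0.0, 20e-9))  # ~2.998 m
```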
  • The obstacles include but are not limited to one or more of physical space partitions (which include but are not limited to doors, floor-to-ceiling windows, screens, walls, columns, rows of access gates, and the like), tables, chairs, cabinets, stairs, escalators, scattered individual barriers (such as flower pots), human bodies, and the like.
  • a system including a robot and sensors refers to a robot system with multiple sensors on the robot body.
  • multiple sensors are set on the robot body.
  • FIG. 1 it shows a schematic diagram of a scene where a robot constructs an obstacle distribution map in an embodiment of the present application.
  • The four sensors continuously detect the surrounding environment while the robot 100 is in the working state; in an actual implementation, the sensors 401, 402, 403 and 404 are set at positions on the body of the robot 100 that correspond to their functions.
  • a fisheye camera or laser sensor is installed on the top of the robot body
  • a binocular vision camera is installed on the front of the robot
  • ultrasonic sensors are installed on the left and right sides or one side of the robot
  • laser sensors or ToF sensors are installed on the front or rear of the robot.
  • In other embodiments, a system including a robot and a sensor refers to a system in which the sensor is set outside the robot body and communicates with the robot wirelessly, for example in the working environment of the robot, such as in physical spaces like aisles, walls or pillars; furthermore, the sensors may be set on other electronic equipment independent of the mobile robot, such as sensors on user terminal devices (for example smart phones, tablet computers, surveillance cameras, and smart screens). For example, by binding a smartphone to the robot, the sensor of the smartphone can exchange data with the robot.
  • the system consists of a robot and external sensors
  • The data transmission between the robot and the external sensor is carried out through wireless communication, which may include but is not limited to Wireless Fidelity (Wi-Fi), Bluetooth, and other wireless communication methods.
  • The sensor data processing method of the present application uses multiple sensors to collect data of the surrounding environment of the robot, has the multiple sensors perform obstacle detection on the collected environmental data to generate local data, and then has the multiple sensors send the respectively generated local data to the processor, which processes the received multiple local data into a global obstacle distribution map.
  • The method provided by this application collects data of the surrounding environment of the robot through multiple sensors, performs obstacle detection on the collected environmental data at each sensor, generates local data according to the obstacle detection results, and then sends the local data generated by each sensor to the processor of the robot, which fuses the multiple received local data into a global obstacle distribution map. The collected data is thus processed on the multiple sensors before the processed data is sent to the processor, which reduces the amount of data transmitted between the multiple sensors and the processor, reduces the pressure on the transmission bandwidth, improves the processing efficiency of the processor, and reduces the computing resource consumption of the processor.
  • FIG. 2 shows a schematic diagram of the hardware structure of the robot in an embodiment of the present application.
  • The robot 100 includes a processor 300, depth cameras 401 to 404, a memory 160, a motion component (mobile device) 170, and the like.
  • the processor 300 may be used to read and execute computer readable instructions.
  • the processor 300 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for instruction decoding, and sends out control signals for the operations corresponding to the instructions.
  • the arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, and logic operations, and can also perform address operations and conversions.
  • the register is mainly responsible for saving the register operands and intermediate operation results temporarily stored during the execution of the instruction.
  • the hardware architecture of the processor 300 may be an application specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, or an NP architecture, and so on.
  • The processor 300 may include one or more processing units; for example, the processor 300 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • The processor 300 can be used to receive the point cloud data, image data and local obstacle distribution map data sent by the depth cameras 401 to 404, fuse these data to obtain a complete two-dimensional obstacle distribution map, and then control the robot 100 to complete specific tasks such as item delivery and room cleaning/sweeping according to the two-dimensional obstacle distribution map.
  • In some embodiments, the processor 300 can receive the point cloud data, image data and local obstacle distribution map data sent by the depth cameras 401 to 404 through wireless communication.
  • the depth cameras 401 to 404 are used to regularly collect image information of the environment around the robot 100 when the robot 100 is moving.
  • the depth cameras 401 to 404 collect image information in a manner similar to that of ordinary cameras.
  • the image is projected onto a photosensitive element, which can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the depth cameras 401 to 404 are also used to obtain distances between obstacles in the environment and the depth cameras, and the distances between the two can be used to construct an obstacle distribution map for the robot 100 .
  • The depth cameras 401 to 404 also include internal processors, which are used to pre-process the data collected by the depth cameras, such as denoising, coordinate transformation, cluster analysis, obstacle detection, target recognition, etc., and then transmit the processed data to the processor 300.
  • the internal processors in the depth cameras 401 to 404 may also include one or more processing units, such as an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural -network processing unit, NPU), etc.
  • the NPU is a neural-network (NN) computing processor.
  • the memory 160 is coupled with the processor 300 for storing various software programs and/or multiple sets of instructions.
  • the memory 160 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices or other non-volatile solid-state storage devices.
  • the memory 160 can store operating systems, such as embedded operating systems such as uCOS, VxWorks, and RTLinux.
  • the memory 160 can also store a communication program, which can be used to communicate with a smart terminal, one or more servers, or additional devices.
  • the memory 160 can be used to store global obstacle distribution map data, in which the positions and sizes of obstacles are marked, and the robot 100 can use the stored global obstacle distribution map data Control your own movement to avoid collisions with obstacles.
  • the moving part 170 is used to control the movement of the robot 100 according to instructions issued by the processor 300 .
  • the motion component 170 may include multiple motion-related components, such as motors, transmission shafts, wheels, and the like.
  • the moving part 170 is used to realize various forms of movement of the robot 100, such as forward movement, backward movement, leftward movement, rightward movement, arcuate movement and the like.
  • the moving part 170 is used for performing a movement operation under control.
  • the moving part 170 may include a walking mechanism and a driving mechanism, wherein the walking mechanism may be arranged at the bottom of the mobile robot, and the driving mechanism is built in the housing of the mobile robot.
  • the traveling mechanism may adopt the way of traveling wheels.
  • the traveling mechanism may, for example, include at least two universal traveling wheels, and the at least two universal traveling wheels realize forward, backward, Movements such as turning and turning.
  • the traveling mechanism may, for example, include a combination of a plurality of straight traveling wheels and at least one auxiliary steering wheel, wherein, when the at least one auxiliary steering wheel is not involved, the plurality of straight traveling wheels It is mainly used for forward and backward, and when the at least one auxiliary steering wheel participates and cooperates with the plurality of straight-traveling wheels, it can realize steering, rotation and other movements.
  • the drive mechanism may be, for example, a drive motor, and the drive motor may be used to drive the traveling wheels in the traveling mechanism to move.
  • the driving motor may be, for example, a reversible driving motor, and a speed change mechanism may also be provided between the driving motor and the axle of the road wheel.
  • the structure shown in FIG. 2 does not constitute a specific limitation on the robot 100 .
  • the robot 100 may include more or fewer components than shown in the illustration, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components may be realized by hardware or software, or a combination of software and hardware.
  • the present application will be described in more detail by taking the robot 100 illustrated in FIG. 1 as an example in which the sensor is a depth camera (for example, a ToF sensor) and the obstacle 200 is a table.
  • The depth camera 401 in the forward direction of the robot 100 continuously collects data of the corresponding area while the robot 100 is moving.
  • The data collected by the depth camera 401 may include but is not limited to high-definition color image data, infrared image data, point cloud data, point cloud confidence information, and other data.
  • the depth camera 401 has a computing processing device (ie, the internal processor of the depth camera itself), such as a central processing unit (Central Processing Unit, CPU), a system on chip (System on Chip, SOC) and other internal processors of the depth camera 401.
  • the depth camera 401 performs data processing on the collected data through its own computing and processing device.
  • the depth camera 401 can denoise the collected point cloud data, and then convert the coordinate system of the point cloud data from the sensor coordinate system to the robot coordinate system. Further clustering and correlation analysis are performed on the point cloud data, and obstacle detection or target recognition can also be performed on the point cloud data, and the obstacle is identified as the table 200 and the like.
  • The depth camera 402 checks whether its own orientation is opposite to the forward direction of the robot 100. The check can be made by processing the data it collects, or determined from preset extrinsic parameters. If its orientation is completely opposite to the forward direction, it controls itself to stop data collection and at the same time suspends data transmission to the processor 300 of the robot 100.
  • The depth cameras 403 and 404 check whether they are oriented away from the forward direction of the robot 100. The check can be made by processing the data they collect, or determined from preset extrinsic parameters. If the check result shows that a camera is not facing the forward direction of the robot 100, it can reduce the amount of data it collects, for example by reducing the data collection frequency or the resolution of the collected data, and it can also reduce the amount of data sent to the robot processor 300, for example by sending only data containing obstacle information to the robot processor 300.
  • the depth camera 401 transmits the processed data to the processor 300 of the robot 100 , and the processor 300 of the robot 100 performs subsequent processing such as data fusion.
  • For example, a depth camera can obtain local two-dimensional obstacle map data by processing its point cloud data and then send the local two-dimensional obstacle map data to the robot processor 300; the robot processor 300 receives the local two-dimensional obstacle map data sent by multiple depth cameras and fuses the multiple local two-dimensional obstacle maps into a complete global two-dimensional obstacle distribution map.
  • FIG. 3 shows a flow chart of the sensor data processing method of the present application in an embodiment, as shown in the figure,
  • the sensor data processing method includes the following steps:
  • Step S201 the data of the surrounding environment of the robot is collected by a plurality of sensors; in this embodiment, the depth cameras 401 to 404 collect image data of the surrounding environment of the robot 100 .
  • the depth cameras 401 to 404 can be set on the robot 100 or in the working environment of the robot, and are used to regularly collect environmental image data around the robot 100 .
  • The depth cameras can be evenly distributed along the circumference of the body of the robot 100, and the image acquisition area corresponding to each depth camera can have a certain overlap with its neighbours, so as to avoid blind areas in image data acquisition.
  • The number of depth cameras installed on the robot 100 in the embodiment of the present application does not constitute a restriction on the number of sensors that can be installed on the robot 100; the number of sensors set on the robot 100 can be more than four, and the multiple sensors set on the robot 100 are not limited to the same type of sensor but can be a variety of sensors that collect environmental data around the robot through different working principles.
  • For ease of description, the multiple sensors are described below by taking depth cameras as an example.
  • the environmental image data collected by the depth cameras 401 to 404 are of various types, including but not limited to: image data, infrared image data, point cloud data, point cloud confidence, and the like.
  • the image data can support multiple resolutions, from low resolution to high resolution.
  • the image data may be a color image or a black and white image or the like.
  • Infrared image data is image data obtained by measuring the heat radiated from objects. Compared with visible light image data, infrared image data has poorer resolution, contrast, signal-to-noise ratio, and visual effects.
  • Point cloud data is a collection of sampling points with spatial coordinates acquired by the depth cameras 401 to 404. Due to the large number and density, it is called "point cloud".
  • the sampling points include rich information, which can include but not limited to: three-dimensional coordinates ( X, Y, Z), colors, categorical values, intensity values, timestamps, etc.
  • Step S202 the multiple sensors perform obstacle detection on the collected environmental data to generate local data; in this embodiment, the depth cameras 401 to 404 respectively preprocess the collected point cloud data.
  • the preprocessing of point cloud data may include but not limited to: point cloud denoising, point cloud simplification, point cloud registration, point cloud hole filling, and the like.
  • the depth cameras 401 to 404 use internal processors to complete the preprocessing of point cloud data, and the internal processors complete the point cloud preprocessing by executing instructions related to preprocessing.
  • In some embodiments, the preprocessing performed by each sensor using its internal processor further includes each sensor converting the coordinate system of the local data it generates from the sensor coordinate system to the robot coordinate system.
  • Taking the case where the plurality of sensors are depth cameras installed on the robot body as an example, when the environmental data collected by the depth cameras is used for obstacle detection, after a depth camera preprocesses the point cloud data, it can also perform coordinate transformation on the processed point cloud data; that is, the depth camera converts the point cloud data from the depth camera coordinate system to the robot coordinate system according to a conversion relationship determined by preset extrinsic parameters such as its position and angle on the robot body. Through this coordinate transformation, the point cloud data provided by the depth camera enables the robot 100 to subsequently construct a global obstacle distribution map centered on the robot 100 itself based on the point cloud data.
  • Taking the case where the plurality of sensors are depth cameras arranged outside the robot body as an example, when the environmental data collected by the depth cameras is used for obstacle detection, after the depth cameras preprocess the point cloud data, coordinate transformation can also be performed on the processed point cloud data.
  • In this case, the conversion relationship between the sensor coordinate system and the robot coordinate system can be obtained from information held by the robot (such as robot positioning information, moving distance information, angle and attitude information, etc.); for example, the robot obtains the conversion relationship between the sensor coordinate system and the robot coordinate system from the information provided by its own SLAM or VSLAM system, inertial navigation module, or odometer module.
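  • A minimal sketch of such a sensor-to-robot coordinate transformation, assuming an on-body camera whose extrinsic parameters reduce to a mounting offset and a yaw angle (the names, the simplified planar-rotation model, and the numbers are illustrative assumptions, not taken from the application):

```python
import numpy as np

def sensor_to_robot(points_sensor: np.ndarray,
                    mount_xyz: np.ndarray,
                    mount_yaw_rad: float) -> np.ndarray:
    """Transform an (N, 3) point cloud from the sensor frame to the robot frame.

    Assumes the sensor is rigidly mounted on the robot body and its pose
    relative to the robot is fully described by a translation and a yaw
    rotation (a simplification of a full 6-DoF extrinsic calibration).
    """
    c, s = np.cos(mount_yaw_rad), np.sin(mount_yaw_rad)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])
    return points_sensor @ rotation.T + mount_xyz

# Example: a point 1 m ahead of a camera mounted 0.2 m forward of the robot center.
pts = np.array([[1.0, 0.0, 0.3]])
print(sensor_to_robot(pts, np.array([0.2, 0.0, 0.5]), mount_yaw_rad=0.0))
```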
  • In some embodiments, the preprocessing performed by each sensor using its internal processor can also include point cloud segmentation: since each frame of point cloud data contains a large number of ground points, the ground point cloud is segmented and removed, which facilitates subsequent obstacle detection.
  • In some embodiments, the preprocessing performed by each sensor using its internal processor can also include cluster analysis of the point cloud data, classifying the point cloud data into different point cloud data collections, where the points in the same collection have similar or identical properties.
  • an unsupervised clustering algorithm is usually used to form multiple point cloud data sets, and each point cloud data set represents a possible obstacle.
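  • A minimal sketch of such clustering, using a simple Euclidean-distance grouping as one possible unsupervised approach (the thresholds and names are illustrative assumptions; the application does not prescribe a specific algorithm):

```python
import numpy as np

def euclidean_cluster(points: np.ndarray, radius: float = 0.3, min_points: int = 5):
    """Group an (N, 3) point cloud into clusters of mutually nearby points.

    Each returned cluster (a list of point indices) is treated as one possible
    obstacle; clusters smaller than `min_points` are discarded as noise.
    """
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            remaining = list(unvisited)
            if not remaining:
                continue
            dists = np.linalg.norm(points[remaining] - points[idx], axis=1)
            for p, d in zip(remaining, dists):
                if d <= radius:
                    unvisited.remove(p)
                    cluster.append(p)
                    frontier.append(p)
        if len(cluster) >= min_points:
            clusters.append(cluster)
    return clusters

# Example: two well-separated blobs of points form two clusters.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal([0.0, 0.0, 0.0], 0.05, (20, 3)),
                   rng.normal([2.0, 0.0, 0.0], 0.05, (20, 3))])
print(len(euclidean_cluster(cloud)))  # expected: 2
```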
  • the plurality of sensors detect obstacles on the collected environment data.
  • the depth cameras 401 to 404 respectively detect obstacles in the point cloud data.
  • The depth camera's perception of the environment can be divided into two levels: low-level perception, also called obstacle detection, only needs to detect the obstacles in front of the camera, while high-level perception can be regarded as target recognition, which further classifies the detected obstacles.
  • Obstacle detection refers to extracting potential obstacle objects from point cloud data to obtain information such as their position, size, shape and orientation, which is generally marked with bounding boxes or described by polygons.
  • Target recognition finds, in the point cloud data, the point cloud block with the highest similarity to a standard target point cloud or a standard point cloud feature description vector, and if the similarity satisfies the corresponding threshold condition, the point cloud block is identified as the target point cloud.
  • the plurality of sensors detect obstacles on the collected environmental data, and generate a local obstacle distribution map according to the obstacle detection results.
  • the plurality of sensors detect obstacles on the collected environmental data to obtain obstacle data, and construct a top view by grid method to obtain local two-dimensional obstacle map data;
  • The obstacle detection result obtained after a depth camera detects obstacles in the point cloud data is an obstacle point cloud with bounding boxes, where a bounding box is a polygonal frame; projecting the obtained obstacle point cloud onto the image plane yields a two-dimensional obstacle distribution map.
  • the obstacle point cloud can be used to construct a top view using the grid method to obtain a local obstacle distribution map.
  • each depth camera can generate a corresponding local obstacle distribution map.
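  • A minimal sketch of building such a top-view local obstacle map with a grid method, assuming the obstacle points are already expressed in the robot coordinate system (the grid size, resolution, and names are illustrative assumptions):

```python
import numpy as np

def local_obstacle_grid(obstacle_points: np.ndarray,
                        grid_size_m: float = 10.0,
                        resolution_m: float = 0.05) -> np.ndarray:
    """Project (N, 3) obstacle points onto a 2D top-view occupancy grid.

    The grid is centered on the robot; a cell is marked 1 if at least one
    obstacle point falls into it, otherwise it stays 0.
    """
    cells = int(grid_size_m / resolution_m)
    grid = np.zeros((cells, cells), dtype=np.uint8)
    # Shift so the robot sits at the grid center, then discretize x/y.
    ij = np.floor((obstacle_points[:, :2] + grid_size_m / 2) / resolution_m).astype(int)
    inside = (ij >= 0).all(axis=1) & (ij < cells).all(axis=1)
    grid[ij[inside, 1], ij[inside, 0]] = 1  # row = y index, column = x index
    return grid

# Example: a single point 1 m in front of the robot occupies one cell of the 200x200 grid.
print(local_obstacle_grid(np.array([[1.0, 0.0, 0.2]])).sum())  # 1
```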
  • FIG. 4 shows a schematic diagram of a local obstacle distribution map generated by two sensors according to the obstacle detection results in an embodiment of the present application.
  • Taking the depth camera 403 and the depth camera 404 as an example, when the robot 100 is moving in one direction (the direction indicated by the arrow in the box representing the robot 100 in the figure), it is assumed that at the moment with time stamp 0.5s the depth camera 403 and the depth camera 404 perform obstacle detection on the collected environmental data and obtain the obstacle data of obstacle O1 and obstacle O2, respectively.
  • The depth camera 403 and the depth camera 404 each construct a top view by the grid method (by processing their point cloud data) to obtain two pieces of local two-dimensional obstacle map data, which carry the acquired time stamp information, namely 0.5s. Based on the aforementioned conversion of the point cloud data from the depth camera coordinate system to the robot coordinate system by the depth camera 403 and the depth camera 404, figures (a) and (b) in Figure 4 are obtained respectively, where figure (a) in Figure 4 represents the local obstacle distribution map of the depth camera 403 at the current position and figure (b) in Figure 4 represents the local obstacle distribution map of the depth camera 404 at the current position.
  • In other embodiments, the plurality of sensors perform obstacle detection on the collected environmental data to obtain obstacle data, and use structured data to represent information such as the type and coordinates of the obstacles, thereby obtaining a local obstacle distribution map in structured form.
  • the step of performing obstacle detection on the collected environmental data by the plurality of sensors to generate local data includes: generating local obstacle data according to an obstacle detection result.
  • In this case, each of the plurality of sensors does not use its internal processor to construct its local obstacle distribution map; instead, each sensor generates, according to its obstacle detection result, the source data for constructing its local obstacle distribution map, and the source data is sent to the main processor of the robot so that, after receiving the obstacle data sent by the multiple sensors, the main processor generates the local obstacle distribution map of each sensor at the current position.
  • the collected environmental data is subjected to obstacle detection to generate point cloud data.
  • The point cloud data is a collection of sampling points carrying spatial coordinates and time stamps.
  • Preprocessing the point cloud data may include but is not limited to: point cloud denoising, point cloud simplification, point cloud registration, point cloud hole filling, and the like; converting the point cloud data from the depth camera coordinate system to the robot coordinate system; segmenting and removing the ground point cloud; and/or performing cluster analysis on the point cloud data and other processing operations.
  • Step S203 the plurality of sensors send the respectively generated local data to the processor, wherein the processor is the main processor of the robot; in this embodiment, the main processor is, for example, the processor 300 described above with reference to FIG. 2.
  • In some embodiments, the step of the plurality of sensors sending the respectively generated local data to the processor further includes dividing the local data generated by each sensor into partial data with different priorities according to preset rules.
  • the conditions of the preset rule include whether the local data generated by the sensor contains an obstacle and the distance between the obstacle and the robot.
  • the depth cameras 401 to 404 respectively set different priorities for part of the point cloud data obtained respectively.
  • For example, the point cloud data of the depth camera 401 is divided into multiple pieces of partial point cloud data by its internal processor; different pieces of partial point cloud data may or may not contain obstacles, and among the pieces that do contain obstacles, the distances between the obstacles and the robot may also differ.
  • different priorities may be set for part of the point cloud data according to whether the part of the point cloud data contains obstacles and the distance between the obstacle and the robot.
  • For example, partial point cloud data A does not contain obstacles, partial point cloud data B contains an obstacle whose distance to the robot is 3 meters, and partial point cloud data C contains an obstacle whose distance to the robot is 7 meters. The priority of partial point cloud data A can then be set to 3, the priority of partial point cloud data B to 1, and the priority of partial point cloud data C to 2, where a smaller priority value means a higher priority. That is, the order of priority from high to low is partial point cloud data B, partial point cloud data C, partial point cloud data A.
  • the step of sending the partial data generated by the plurality of sensors to the processor further includes sending the partial data generated by each sensor to the processor according to its preset priority.
  • In this embodiment, according to the priority order determined above, the depth camera 401 first sends partial point cloud data B to the main processor of the robot, then sends partial point cloud data C to the main processor of the robot, and finally sends partial point cloud data A to the main processor of the robot.
  • the priority of some point cloud data reflects the importance of this part of point cloud data to the robot. The higher the priority, the more important it is.
  • The distance between the obstacle in partial point cloud data B and the robot is smaller than the distance between the obstacle in partial point cloud data C and the robot, so partial point cloud data B is preferentially sent to the processor 300 of the robot and processed by the processor 300 first, so as to avoid a collision between the obstacle and the robot.
  • the depth cameras 401 to 404 respectively send the partial point cloud data to the processor 300 according to the priority of the partial point cloud data.
  • The partial point cloud data with high priority is sent to the processor 300 first; the partial point cloud data with low priority is sent to the processor 300 after the high-priority partial point cloud data has finished sending, or is not sent at all.
  • For example, if the priority of partial point cloud data A is 3, the priority of partial point cloud data B is 1, and the priority of partial point cloud data C is 2, then partial point cloud data B is sent first, partial point cloud data C is sent next, and partial point cloud data A is sent last or not sent.
  • the depth cameras 401 to 404 may further include buffers for buffering part of the point cloud data for a period of time, and then send the part of the point cloud data cached according to the corresponding priority.
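  • A minimal sketch of the preset rule and priority-based sending described above (the 5-meter distance cut-off, the data structure, and all names are illustrative assumptions; the application only fixes the example priorities 1, 2, and 3):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PartialData:
    name: str
    contains_obstacle: bool
    obstacle_distance_m: Optional[float] = None  # None when no obstacle is present
    payload: bytes = b""
    priority: int = field(init=False, default=0)

def assign_priority(part: PartialData) -> int:
    """Smaller value = higher priority, as in the example above."""
    if not part.contains_obstacle:
        return 3                                          # no obstacle: lowest priority
    return 1 if part.obstacle_distance_m <= 5.0 else 2    # nearer obstacles come first

def send_by_priority(parts: List[PartialData], send_lowest: bool = True) -> List[str]:
    """Send partial data in ascending priority value (most important first).

    When `send_lowest` is False, the lowest-priority parts are dropped instead of
    transmitted, mirroring the "sent last or not sent" behaviour described above.
    """
    for p in parts:
        p.priority = assign_priority(p)
    ordered = sorted(parts, key=lambda p: p.priority)
    return [p.name for p in ordered if send_lowest or p.priority < 3]

parts = [PartialData("A", False), PartialData("B", True, 3.0), PartialData("C", True, 7.0)]
print(send_by_priority(parts))  # ['B', 'C', 'A']
```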
  • In some embodiments, before the plurality of sensors send the locally generated data to the processor, each sensor is also preset with a priority, and each sensor sends its local data to the processor of the robot according to its own preset priority.
  • The condition for each sensor's preset priority takes into account, for example, whether the sensor is on the current travel route of the robot. For example, when the robot is moving forward, sensors located on the robot's travel route (such as sensors set on the aisle or column ahead in the robot's walking direction) take priority over sensors at other positions, such as sensors at certain locations behind the robot.
  • In other examples, the condition for each sensor's preset priority is the position at which the sensor is arranged on the robot body; for example, the priority of the sensor arranged on the front side of the robot body is higher than that of the sensors arranged on the left and right sides of the robot body, and the priority of the sensors arranged on the left and right sides of the robot body is higher than that of the sensor arranged on the rear side of the robot body.
  • For example, when the robot moves forward, the preset priority with which the depth camera 401 sends local data to the main processor of the robot is higher than that of the depth cameras 403 and 404, and the preset priority with which the depth cameras 403 and 404 send local data to the main processor of the robot is higher than that of the depth camera 402; alternatively, when the robot walks along a wall or edge on its left side, the preset priority with which the depth camera 403 sends local data to the main processor of the robot is higher than that of the depth camera 404, and when the robot walks along a wall or edge on the other side, the preset priority with which the depth camera 404 sends local data to the main processor of the robot is higher than that of the depth camera 403.
  • In step S203, the multiple sensors send data to the processor of the robot in descending order of their own priorities or of the priorities of the pieces of partial data within the local data they produce.
  • each sensor has a communication module to send local data to the robot through wired transmission
  • For example, the depth cameras 401 to 404 each have a communication module, and the communication module is used to realize two-way communication with the processor 300 of the robot, for example through a network port, a USB port, a serial port, a Controller Area Network (CAN) bus, and the like.
  • each sensor has a wireless communication module for sending local data to the robot through wireless transmission.
  • Step S204 the processor processes the received plurality of local data into a global obstacle distribution map.
  • In step S204, the processor fuses the multiple received local obstacle distribution maps into a global obstacle distribution map at the current location.
  • In the embodiment described above for step S202 in which each of the multiple sensors does not use its internal processor to construct its local obstacle distribution map, the step S204 of the processor processing the received plurality of local data into a global obstacle distribution map includes: the main processor receiving the local obstacle data sent by the multiple sensors and generating a local obstacle distribution map of each sensor at the current position; and performing fusion processing on the multiple local obstacle distribution maps to generate a global obstacle distribution map of the current position.
  • the processor fuses the received plurality of local data into a global obstacle distribution map of the robot's current position.
  • In this embodiment, the processor 300 receives the local obstacle distribution maps sent from the depth cameras 401 to 404 respectively, and fuses the received multiple local obstacle distribution maps into one complete global obstacle distribution map.
  • the processor 300 of the robot can use a variety of fusion algorithms for the fusion of multiple local obstacle distribution maps.
  • For example, the processor uses a probabilistic occupancy grid map algorithm to fuse them into a global obstacle distribution map.
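  • A minimal sketch of fusing several local grids with a probabilistic (log-odds style) occupancy update, which is one common form of occupancy grid fusion; the update weights, the treatment of unobserved cells as free, and all names are illustrative assumptions:

```python
import numpy as np

def fuse_local_grids(local_grids, l_occ: float = 0.85, l_free: float = -0.4,
                     occupied_threshold: float = 0.0) -> np.ndarray:
    """Fuse binary local obstacle grids (1 = obstacle, 0 = free) into one global grid.

    Each local grid contributes a log-odds increment per cell; cells whose
    accumulated log-odds exceed the threshold are marked as obstacles.
    """
    log_odds = np.zeros_like(local_grids[0], dtype=float)
    for grid in local_grids:
        log_odds += np.where(grid > 0, l_occ, l_free)
    return (log_odds > occupied_threshold).astype(np.uint8)

# Example: a cell seen as occupied by one camera and free by another remains occupied
# here because the accumulated evidence is still positive (0.85 - 0.4 > 0).
a = np.array([[1, 0], [0, 0]], dtype=np.uint8)
b = np.array([[0, 0], [0, 1]], dtype=np.uint8)
print(fuse_local_grids([a, b]))
```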
  • The processor 300 can use the partial point cloud data from the depth cameras 401 to 404 to correct the obstacle information in the local obstacle distribution maps to obtain more accurate obstacle information, and then update the obstacle information in the global obstacle distribution map.
  • In some embodiments, the processor of the robot calculates the displacement of the robot at the current time stamp according to the time stamps carried in the local data generated by each sensor, and updates the obstacle information in the local data according to the displacement; the updated local data of the multiple sensors are then superimposed to form a global obstacle distribution map of the current position.
  • For example, when the processor of the robot receives the local data sent by a sensor, it calculates, from the time stamp carried in the local data generated by the sensor, the displacement of the robot up to the current time stamp, and displaces the obstacle positions in the local obstacle distribution map to the estimated positions; that is, the main processor updates the local obstacle distribution map, and the updated local obstacle distribution maps of the multiple sensors are superimposed to form the global obstacle distribution map of the robot at the current position.
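  • A minimal sketch of this time-stamp-based displacement correction, assuming straight-line motion at a known velocity between the data time stamp and the current time stamp (the motion model and all names are illustrative assumptions):

```python
import numpy as np

def shift_obstacles_to_current_time(obstacles_xy: np.ndarray,
                                    data_timestamp_s: float,
                                    current_timestamp_s: float,
                                    velocity_xy_m_per_s: np.ndarray) -> np.ndarray:
    """Re-express obstacle positions (robot frame) at the current time stamp.

    If the robot moved forward by d meters since the data was captured, a static
    obstacle appears d meters closer along the direction of motion, so the
    obstacle coordinates are shifted by minus the robot displacement.
    """
    dt = current_timestamp_s - data_timestamp_s
    robot_displacement = velocity_xy_m_per_s * dt
    return obstacles_xy - robot_displacement

# Example: 0.7 s at 0.5 m/s forward brings a 2 m-ahead obstacle to ~1.65 m ahead.
print(shift_obstacles_to_current_time(np.array([[2.0, 0.0]]), 0.5, 1.2, np.array([0.5, 0.0])))
```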
  • FIG. 5 shows a schematic diagram of processing the obtained local obstacle distribution map by the processor in one embodiment of the present application.
  • The local obstacle distribution maps obtained by the depth camera 403 and the depth camera 404 shown in FIG. 4 are again taken as an example.
  • Continuing the example, at the moment with time stamp 0.5s, the depth camera 403 and the depth camera 404 perform obstacle detection on the collected environmental data and obtain the obstacle data of obstacle O1 and obstacle O2 respectively, and each obtains a local two-dimensional obstacle map (i.e., (a) and (b) in the illustration) carrying the acquired time stamp information, namely 0.5s. By the time the processor 300 of the robot 100 obtains the two local two-dimensional obstacle maps, time has passed, for example to the moment 1.2s. During the 0.7s from 0.5s to 1.2s, the robot 100 moves in one direction (the direction indicated by the arrow in the box representing the robot 100 in the figure) and travels a certain distance. The processor of the robot 100 therefore calculates, from the time stamp (0.5s) carried in the local data generated by the sensor (the depth camera 403 or the depth camera 404), the displacement of the robot 100 at the current time stamp (1.2s), and displaces the positions of the obstacle O1 and the obstacle O2 in the local obstacle distribution maps to the estimated positions, that is, the theoretical positions of the obstacles relative to the robot. Displacing the positions of the obstacle O1 and the obstacle O2 in the local obstacle distribution maps to the estimated positions is equivalent to updating (a) and (b) in the illustration, so that new figures are obtained, namely (c) and (d) in Figure 5.
  • Since the depth camera 403 and the depth camera 404 have already converted the coordinate system of their local data (the local two-dimensional obstacle maps in this embodiment) from the sensor coordinate system to the robot coordinate system in their respective internal processors, and the local two-dimensional obstacle maps of the depth camera 403 and the depth camera 404 now correspond to the same time stamp (1.2s), the main processor superimposes figures (c) and (d) in the illustration under the robot coordinate system; that is, figures (c) and (d) are superimposed in the same dimensions (the time dimension and the spatial dimension represented by the coordinate system) to obtain map (e), which is the global obstacle distribution map at the current position after fusion processing by the robot 100.
  • the above only takes the two sensors of the depth camera 403 and the depth camera 404 and two time stamps (0.5s and 1.2s) as examples for illustration.
  • The multiple sensors and the main processor of the robot continue the above process, that is, the robot obtains a global obstacle distribution map at each position, and as the robot traverses the entire work scene, an obstacle distribution map of the entire work scene is eventually formed.
  • the processor 300 controls the movement of the robot 100 according to the global obstacle distribution map and robot tasks.
  • Since the processor 300 has obtained the global obstacle distribution maps at different positions and the obstacle distribution map of the entire work scene, the motion path of the robot 100 can be determined according to the task currently to be completed by the robot 100, such as destination navigation, item delivery, or coverage cleaning, and the current motion of the robot 100 is controlled according to the motion path, for example controlling the robot 100 to move forward, to the left, or to the right, so that the robot 100 can complete the predetermined task.
  • FIG. 6 shows a flow chart of another embodiment of the sensor data processing method of the present application.
  • the sensor data processing method includes the following steps:
  • step S301 the depth cameras 401 to 404 collect image data of the environment around the robot 100 .
  • step S302 the depth cameras 401 to 404 respectively preprocess the collected point cloud data.
  • step S303 the depth cameras 401 to 404 respectively detect obstacles in the point cloud data.
  • step S304 the depth cameras 401 to 404 generate a local obstacle distribution map according to the obstacle detection results.
  • step S305 the depth cameras 401 to 404 respectively set different priorities for some point cloud data in the point cloud data.
  • step S306 the depth cameras 401 to 404 respectively send local obstacle distribution maps to the processor 300, and send partial point cloud data according to priority.
  • step S307 the processor 300 fuses the multiple local obstacle distribution maps to obtain a global obstacle distribution map.
  • step S308 the processor 300 controls the movement of the robot 100 according to the global obstacle distribution map and the robot task.
  • The sensor data processing method used in the system including the robot and the sensor collects data of the surrounding environment of the robot through multiple sensors, performs obstacle detection on the collected environmental data at each sensor, generates a local obstacle distribution map according to the obstacle detection results, and then sends the local obstacle distribution maps generated by the multiple sensors to the processor of the robot, which fuses the received multiple local obstacle distribution maps into a global obstacle distribution map. The collected data is thus processed on the multiple sensors before the processed data is sent to the processor, which reduces the amount of data transmitted between the multiple sensors and the processor, reduces the pressure on the transmission bandwidth, improves the processing efficiency of the processor, and reduces the computing resource consumption of the processor.
  • In some embodiments, the sensor data processing method of the present application further includes a step in which the processor adjusts the working mode of one or more of the sensors according to the working state of the robot; the working mode includes a data collection mode or/and a data sending mode.
  • adjusting the working mode of the one or more sensors includes adjusting data collection frequency, adjusting data resolution, or adjusting data sending frequency.
  • FIG. 7 is a flowchart of a method for adjusting the data collection mode of sensors by the processor of the robot in one embodiment.
  • the scheme for implementing processor 300 to adjust the data collection methods of depth cameras 401 and 402 includes:
  • Step S401 the processor 300 determines the depth camera for adjusting the data collection method according to the working state of the robot 100 .
  • the working state of the robot 100 may include, but is not limited to: going straight, turning left, turning right, walking in a "bow" shape, walking along a wall, and the like.
  • The "bow"-shaped walking here is a working state used in the cleaning task of the robot 100 to sweep an area row by row in alternating directions, as sketched below. Walking along a wall is also a working state in the cleaning task; it is used for cleaning or mopping along a wall and can be divided into cleaning along the left wall and cleaning along the right wall.
  • In different working states, the data collection modes of different depth cameras can be adjusted, because the multiple depth cameras differ in importance.
  • the robot 100 has a higher demand for data sent by a depth camera of high importance and a lower demand for data sent by a depth camera of low importance. Therefore, the processor 300 of the robot 100 may choose to receive more data from the high-importance depth cameras and less data from the low-importance depth cameras.
  • Take as an example the case where the depth camera 401 is the camera facing forward along the moving direction of the robot 100
  • and the depth camera 402 is the camera facing backward along the moving direction of the robot 100.
  • Since the depth camera 401 and the depth camera 402 respectively collect data in front of and behind the robot 100 along its moving direction, the importance of the data collected by the depth camera 401 is high and the importance of the data collected by the depth camera 402 is low. Therefore, the processor 300 determines the depth camera 401 and the depth camera 402 as the depth cameras whose data collection modes need to be adjusted.
  • step S402 the processor 300 generates a control command for the depth camera for adjusting the data collection mode.
  • the processor 300 respectively generates corresponding control instructions for the depth cameras 401 and 402 that need to adjust the data collection method, and the control instructions are used to indicate how to adjust the data collection method of the depth cameras.
  • the data collection mode may include, but is not limited to, one or more of: the data collection frequency, the data resolution, and the data sending frequency.
  • the control instruction generated by the processor 300 for the depth camera 401 includes increasing the data resolution and increasing the data collection and/or data sending frequency,
  • while the control instruction generated for the depth camera 402 includes reducing the data resolution and reducing the data collection and/or data sending frequency.
  • step S403 the processor 300 sends a control command for adjusting the data collection mode to the depth cameras 401 and 402.
  • step S404 and step S405 the depth camera 401 and the depth camera 402 respectively adjust the data collection mode according to the control instruction.
  • the control instruction received by the depth camera 401 adjusts its data collection mode to improve quality,
  • while the control instruction received by the depth camera 402 adjusts its data collection mode to reduce quality.
  • step S406 and step S407 the depth camera 401 and the depth camera 402 respectively send data according to the adjusted data collection mode.
  • After the adjustment, the data collection mode of the depth camera 401 has higher quality, collection frequency, and sending frequency,
  • while the data collection mode of the depth camera 402 has lower quality, collection frequency, and sending frequency.
  • the processor 300 can thus concentrate its computing resources on processing the more important data sent by the depth cameras, thereby making effective use of the network bandwidth between the processor 300 and the depth cameras and of the computing resources of the processor 300. A minimal sketch of this priority-based adjustment is given below.
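  • In this sketch, the CameraSettings fields, the concrete resolutions and frequencies, and the mapping from working states to the most important camera are assumptions for illustration, not values from this application.

```python
from dataclasses import dataclass

@dataclass
class CameraSettings:
    resolution: tuple   # (width, height) in pixels
    capture_hz: int     # data collection frequency
    publish_hz: int     # data sending frequency

HIGH = CameraSettings((1280, 720), 30, 30)   # assumed "high quality" profile
LOW = CameraSettings((640, 480), 5, 5)       # assumed "low quality" profile

# Assumed layout: 401 faces forward, 402 backward, 403 left, 404 right.
PRIMARY_CAMERA = {
    "straight": 401,
    "turn_left": 403,
    "turn_right": 404,
    "wall_follow_left": 403,
    "wall_follow_right": 404,
}

def control_commands(working_state, camera_ids=(401, 402, 403, 404)):
    """Return the settings each camera should be told to switch to (steps S402-S403)."""
    primary = PRIMARY_CAMERA.get(working_state, 401)
    return {cam: (HIGH if cam == primary else LOW) for cam in camera_ids}

# Example: while going straight, only the forward camera 401 runs at full quality.
print(control_commands("straight"))
```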
  • FIG. 8 shows a functional block diagram of an internal processor of the sensor described in the present application in an embodiment.
  • the depth camera 401 has a computing processing device, such as a system on chip (System on Chip, SoC) and the like.
  • the SoC 1000 includes: an interconnection unit 1050, which is coupled to the processor 1010; a system agent unit 1070; a bus controller unit 1080; an integrated memory controller unit 1040; a coprocessor 1020, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1030; and a direct memory access (DMA) unit 1060.
  • the coprocessor 1020 includes a special-purpose processor, such as a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, or an embedded processor, among others.
  • the present application also provides a system including a robot and a sensor, which includes a plurality of sensors, a memory, and a processor.
  • the memory and the processor are both the memory and the processor of the computing device installed on the robot body.
  • the plurality of sensors are used to collect data of the surrounding environment of the robot, and each sensor performs obstacle detection on the collected environmental data to generate local data and sends it to the processor; in an embodiment, the sensors include, for example, laser sensors, ultrasonic sensors, infrared sensors, optical cameras (such as monocular or binocular cameras), depth cameras (such as ToF sensors), millimeter-wave radar sensors, etc.
  • the memory is used to store instructions executed by one or more processors of the robot. In an embodiment, the memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In some embodiments, the one or more memories may also include memory remote from the one or more processors, such as network-attached memory accessed via RF circuitry or an external port and a communication network, which may be the Internet, one or more intranets, local area networks, wide area networks, storage area networks, etc., or an appropriate combination thereof. A memory controller may control access to the memory by other components of the device, such as the CPU and peripheral interfaces.
  • the processor is the main processor of the robot, and is configured to process the received plurality of local data into a global obstacle distribution map.
  • the processor 300 may be used to read and execute computer readable instructions.
  • the processor 300 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for instruction decoding, and sends out control signals for the operations corresponding to the instructions.
  • the arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, and logic operations, and can also perform address operations and conversions.
  • the register is mainly responsible for saving the register operands and intermediate operation results temporarily stored during the execution of the instruction.
  • the hardware architecture of the processor 300 may be an application specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, or an NP architecture, and so on.
  • the processor 300 may include one or more processing units, for example: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the processor can be used to receive the point cloud data, image data, and local obstacle distribution map data sent by the plurality of sensors and fuse these data to obtain a complete two-dimensional obstacle distribution map, and then, according to that two-dimensional obstacle distribution map, control the robot to complete specific tasks such as item delivery or room cleaning. A hedged sketch of the fusion step is given below.
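  • In the sketch, each local occupancy grid is written into the global map at an offset derived from the pose of the sensor that produced it, and overlapping cells are combined with a maximum so that a cell is occupied if any sensor reported an obstacle there. The offset format and the max-combination rule are assumptions, not necessarily the fusion rule used by this application.

```python
import numpy as np

def fuse_local_maps(global_shape, local_maps):
    """local_maps: list of (grid, (row_offset, col_offset)) in global-map cells,
    with non-negative offsets assumed for simplicity."""
    global_map = np.zeros(global_shape, dtype=np.uint8)
    for grid, (r0, c0) in local_maps:
        h, w = grid.shape
        region = global_map[r0:r0 + h, c0:c0 + w]           # view into the global map
        np.maximum(region, grid[:region.shape[0], :region.shape[1]], out=region)
    return global_map

# Example: two 3x3 local maps placed at different offsets in a 6x6 global map.
a = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]], dtype=np.uint8)
b = np.array([[0, 0, 0], [1, 1, 0], [0, 0, 0]], dtype=np.uint8)
print(fuse_local_maps((6, 6), [(a, (0, 0)), (b, (2, 2))]))
```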
  • FIG. 9 is a system block diagram of an embodiment of a system including a robot and a sensor in the present application.
  • In a robot system with a plurality of sensors, specifically, a plurality of sensors 401, 402, 403 and 404 are arranged on the robot body 400 according to the functional requirements of the mobile robot, and the computing device 405 in the robot body 400 includes the memory 406 and the processor 407.
  • Sensors, such as the four sensors 401, 402, 403 and 404 arranged on the robot body 400 shown in the figure, continuously detect the surrounding environment; each sensor is installed at a position corresponding to its function, for example, a fisheye camera or laser sensor on the top of the robot body, a binocular vision camera on the front side of the robot, ultrasonic sensors on the left and right sides or on one side of the robot, and a laser sensor or ToF sensor on the front or rear side of the robot.
  • FIG. 10 is a system block diagram of another embodiment of the system including robots and sensors in this application.
  • In this embodiment, the system including the robot and sensors refers to a wireless communication system in which the sensors 401, 402, 403 and 404 are arranged outside the robot body 400, while the computing device 405 in the robot body 400 includes the memory 406 and the processor 407.
  • the sensors 401, 402, 403 and 404 are installed in physical spaces such as the aisles/passages, walls, or columns along which the robot often cruises; moreover, the sensors 401, 402, 403 and 404 may be located on other electronic devices that are independent of and mobile relative to the robot, for example the sensors of user terminal equipment (such as smartphones, tablet computers, monitoring cameras, smart screens, etc.) can be bound to the robot so that the sensor of the smartphone can exchange data with the robot.
  • In this case, the system is composed of the robot and the external sensors.
  • the data transmission between the robot and the external sensors is carried out through wireless communication, which may include, but is not limited to, wireless communication methods such as wireless network (Wireless Fidelity, Wi-Fi) and Bluetooth. A purely illustrative sketch of such a transmission is given below.
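  • The following sketch only illustrates how an off-robot sensor could push its local obstacle grid to the robot's processor over the wireless link; the host, port, and length-prefixed framing are assumptions, and a real deployment would more likely use existing middleware (for example ROS topics or MQTT) on top of the Wi-Fi/Bluetooth connection mentioned above.

```python
import socket
import struct

import numpy as np

def send_local_map(grid: np.ndarray, host: str = "192.168.1.10", port: int = 9000):
    """Send a uint8 occupancy grid as: rows (2 bytes), cols (2 bytes), length (4 bytes), payload."""
    payload = grid.astype(np.uint8).tobytes()
    header = struct.pack("!HHI", grid.shape[0], grid.shape[1], len(payload))
    with socket.create_connection((host, port)) as conn:
        conn.sendall(header + payload)
```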
  • the present application also provides a mobile robot.
  • FIG. 11 is a block diagram of the mobile robot of the present application in an embodiment.
  • the mobile robot of the present application includes multiple sensors 401, 402, 403 and 404, the mobile device 408, the storage device 406 and the processing device 407.
  • the plurality of sensors 401, 402, 403 and 404 are arranged on the robot body 400 to collect data of the surrounding environment of the robot, and each sensor performs obstacle detection on the collected environmental data to generate local data and sends it to the processor;
  • the plurality of sensors arranged on the robot body include, for example, laser sensors, ultrasonic sensors, infrared sensors, optical cameras (such as monocular or binocular cameras), depth cameras (such as ToF sensors), millimeter-wave radar sensors, etc.
  • the moving device 408 is used to perform a moving operation; in an embodiment, the moving device 408 is used to control the movement of the robot according to instructions issued by the processor.
  • The moving device may include multiple motion-related components such as motors, drive shafts, wheels, and the like.
  • the moving device 408 is used to realize various motion forms of the robot, such as moving forward, moving backward, moving left, moving right, and bow-shaped motion, so as to complete specific tasks such as item delivery and room cleaning/washing.
  • the computing device 405 of the robot includes a storage device 406 and a processing device 407, wherein the storage device 406 is used to store at least one program; the processing device 407 is connected to the plurality of sensors 401, 402, 403 and 404, the moving device 408, and the storage device 406, and is used to execute the at least one program so as to carry out at least one embodiment of the sensor data processing method described above, such as any one of the embodiments described in the implementations corresponding to FIG. 3, FIG. 6, or FIG. 7.
  • Another aspect of the present application also provides a computer-readable storage medium, which stores a computer program.
  • When the computer program is executed, the device where the storage medium is located implements at least one embodiment of the sensor data processing method described above, such as any one of the embodiments described in the implementations corresponding to FIG. 6 or FIG. 7.
  • If the functions described above are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to make a computer device (which may be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the computer-readable storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, USB flash drive, removable hard disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
  • For example, if the instructions are sent from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • However, computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media.
  • Disk and disc, as used here, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • the functions described in the computer programs of the methods described in this application may be implemented by hardware, software, firmware or any combination thereof.
  • When implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code.
  • the steps of the methods or algorithms disclosed in this application can be embodied by processor-executable software modules, where the processor-executable software modules can be located on a tangible, non-transitory computer readable and writable storage medium.
  • Tangible, non-transitory computer-readable storage media can be any available media that can be accessed by a computer.
  • each block in the flowchart or block diagram may represent a module, a program segment, or a part of the code, and the module, program segment, or part of the code contains one or more executable instructions for implementing the specified logical function.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • In summary, the sensor data processing method for a system including a robot and sensors in the above-mentioned embodiments of the present disclosure collects data about the robot's surroundings through multiple sensors, performs obstacle detection on the collected environmental data on each sensor, generates local data from the obstacle detection results, and sends the local data generated by the multiple sensors to the processor of the robot, which fuses the received local data into a global obstacle distribution map. Processing the collected data on the sensors before sending it to the processor reduces the amount of data transmitted between the multiple sensors and the processor, relieves pressure on the transmission bandwidth, improves the processing efficiency of the processor, and reduces the computing resource consumption of the processor.
  • A sensor data processing method for a system comprising a robot and sensors, comprising: a plurality of sensors collecting data of the surrounding environment of the robot;
  • the multiple sensors perform obstacle detection on the collected environmental data, and generate a local obstacle distribution map according to the obstacle detection results;
  • the plurality of sensors send the local obstacle distribution maps generated respectively to a processor, wherein the processor is the main processor of the robot;
  • the processor fuses the received multiple local obstacle distribution maps into a global obstacle distribution map.
  • the sensor is a depth camera arranged on the robot or in the working environment of the robot, and the depth camera is used to collect point cloud data of the surrounding environment of the robot.
  • the depth camera performs coordinate conversion on the point cloud data, so as to convert the point cloud data from the depth camera coordinate system to the robot coordinate system.
  • the depth camera divides the point cloud data into a plurality of partial point cloud data, and sets corresponding priorities for the plurality of partial point cloud data.
  • setting corresponding priorities for the plurality of partial point cloud data includes:
  • the depth camera determines the priority corresponding to the part of the point cloud data according to whether the part of the point cloud data contains an obstacle and the distance between the obstacle and the robot.
  • the depth camera sends the part of the point cloud data to the processor according to the priority corresponding to the part of the point cloud data.
  • the processor determines the depth camera for adjusting the data collection mode according to the working state of the robot
  • the processor controls the determined depth camera to adjust the data collection mode
  • the determined depth camera sends data to the processor according to the adjusted data collection mode.
  • the data collection mode includes one or a combination of more of the following: data collection frequency, data resolution, and data sending frequency.
  • a system comprising a robot and a sensor, comprising:
  • a plurality of sensors used to collect data of the surrounding environment of the robot, perform obstacle detection on the collected environmental data, generate a local obstacle distribution map according to the obstacle detection results, and send the generated local obstacle distribution map to the processor;
  • memory for storing instructions to be executed by one or more processors of the robot
  • the processor is one of the processors of the robot, configured to fuse the received multiple local obstacle distribution maps into a global obstacle distribution map.
  • A readable storage medium, wherein instructions are stored on the readable storage medium, and when the instructions are executed on a system comprising a robot and sensors, the system executes the sensor data processing method described in any one of embodiments 1-8.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Optics & Photonics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A sensor data processing method and system, a mobile robot, and a readable storage medium. The method comprises: collecting, by a plurality of sensors, data about the surrounding environment of a robot (S201); performing obstacle detection on the collected environmental data respectively, and generating local obstacle distribution maps according to the obstacle detection results (S202); further sending, to a processor of the robot, the local obstacle distribution maps generated by the plurality of sensors (S203); and fusing, by the processor, the plurality of received local obstacle distribution maps into a global obstacle distribution map (S204). In this way, the data collected on the plurality of sensors is processed before being sent to the processor, which reduces the volume of data transmitted between the plurality of sensors and the processor, relieves pressure on the transmission bandwidth, improves the processing efficiency of the processor, and reduces the computing resource consumption of the processor.
PCT/CN2022/117055 2021-09-08 2022-09-05 Procédé et système de traitement de données de capteur, et support de stockage lisible WO2023036083A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111048851.X 2021-09-08
CN202111048851.XA CN113721631A (zh) 2021-09-08 2021-09-08 传感器数据处理方法、系统及可读存储介质

Publications (1)

Publication Number Publication Date
WO2023036083A1 true WO2023036083A1 (fr) 2023-03-16

Family

ID=78682512

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/117055 WO2023036083A1 (fr) 2021-09-08 2022-09-05 Procédé et système de traitement de données de capteur, et support de stockage lisible

Country Status (2)

Country Link
CN (2) CN113721631A (fr)
WO (1) WO2023036083A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113721631A (zh) * 2021-09-08 2021-11-30 汤恩智能科技(上海)有限公司 传感器数据处理方法、系统及可读存储介质
CN117291476B (zh) * 2023-11-27 2024-02-13 南京如昼信息科技有限公司 基于遥控机器人的城市排水管道的评估方法及系统

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060097265A (ko) * 2005-03-04 2006-09-14 주식회사 대우일렉트로닉스 멀티센서유닛이 구비된 로봇 청소기 및 그 청소방법
CN106681330A (zh) * 2017-01-25 2017-05-17 北京航空航天大学 基于多传感器数据融合的机器人导航方法及装置
CN108646739A (zh) * 2018-05-14 2018-10-12 北京智行者科技有限公司 一种传感信息融合方法
CN109682381A (zh) * 2019-02-22 2019-04-26 山东大学 基于全向视觉的大视场场景感知方法、系统、介质及设备
CN109738904A (zh) * 2018-12-11 2019-05-10 北京百度网讯科技有限公司 一种障碍物检测的方法、装置、设备和计算机存储介质
CN109752724A (zh) * 2018-12-26 2019-05-14 珠海市众创芯慧科技有限公司 一种图像激光一体式导航定位系统
CN111123925A (zh) * 2019-12-19 2020-05-08 天津联汇智造科技有限公司 一种移动机器人导航系统以及方法
CN113721631A (zh) * 2021-09-08 2021-11-30 汤恩智能科技(上海)有限公司 传感器数据处理方法、系统及可读存储介质

Also Published As

Publication number Publication date
CN115328153A (zh) 2022-11-11
CN113721631A (zh) 2021-11-30

Similar Documents

Publication Publication Date Title
US11941873B2 (en) Determining drivable free-space for autonomous vehicles
US11788861B2 (en) Map creation and localization for autonomous driving applications
US10192113B1 (en) Quadocular sensor design in autonomous platforms
US10496104B1 (en) Positional awareness with quadocular sensor in autonomous platforms
WO2023036083A1 (fr) Procédé et système de traitement de données de capteur, et support de stockage lisible
US10437252B1 (en) High-precision multi-layer visual and semantic map for autonomous driving
US10794710B1 (en) High-precision multi-layer visual and semantic map by autonomous units
JP6571274B2 (ja) レーザ深度マップサンプリングのためのシステム及び方法
US9443350B2 (en) Real-time 3D reconstruction with power efficient depth sensor usage
WO2020029758A1 (fr) Procédé et appareil de détection tridimensionnelle d'objet, procédé et appareil de commande de conduite intelligente, support et dispositif
WO2022104774A1 (fr) Procédé et appareil de détection de cible
WO2020186444A1 (fr) Procédé de détection d'objet, dispositif électronique, et support de stockage informatique
CN112904370A (zh) 用于激光雷达感知的多视图深度神经网络
US20230136860A1 (en) 3d surface structure estimation using neural networks for autonomous systems and applications
US20230135088A1 (en) 3d surface reconstruction with point cloud densification using deep neural networks for autonomous systems and applications
US12100230B2 (en) Using neural networks for 3D surface structure estimation based on real-world data for autonomous systems and applications
US20230213945A1 (en) Obstacle to path assignment for autonomous systems and applications
JP2021528732A (ja) 運動物体検出およびスマート運転制御方法、装置、媒体、並びに機器
US20230136235A1 (en) 3d surface reconstruction with point cloud densification using artificial intelligence for autonomous systems and applications
Ouyang et al. A cgans-based scene reconstruction model using lidar point cloud
US11308324B2 (en) Object detecting system for detecting object by using hierarchical pyramid and object detecting method thereof
Machkour et al. Monocular based navigation system for autonomous ground robots using multiple deep learning models
JP2023066377A (ja) 自律システム及びアプリケーションのための人工知能を使用した点群高密度化を有する3d表面再構成
WO2023283929A1 (fr) Procédé et appareil permettant d'étalonner des paramètres externes d'une caméra binoculaire
Sun et al. Detection and state estimation of moving objects on a moving base for indoor navigation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22866546

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22866546

Country of ref document: EP

Kind code of ref document: A1