WO2020192000A1 - Livestock and poultry information perception robot based on autonomous navigation, and map construction method - Google Patents

Livestock and poultry information perception robot based on autonomous navigation, and map construction method

Info

Publication number
WO2020192000A1
WO2020192000A1 (PCT/CN2019/101574, CN2019101574W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
robot
time
filter
odometer
Prior art date
Application number
PCT/CN2019/101574
Other languages
English (en)
French (fr)
Inventor
林涛
任国强
林智贤
徐金凡
蒋焕煜
丁冠中
应义斌
Original Assignee
浙江大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 (Zhejiang University)
Priority to US 17/426,596 (granted as US11892855B2)
Publication of WO2020192000A1

Links

Images

Classifications

    • G05D 1/0246: Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means, using a video camera in combination with image processing means
    • G05D 1/24: Arrangements for determining position or orientation (control system inputs)
    • G05D 1/246: Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM]
    • G05D 1/247: Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons
    • G05D 1/644: Optimisation of travel parameters, e.g. of energy consumption, journey time or distance
    • G01C 21/32: Structuring or formatting of map data (navigation in a road network, with correlation of data from several navigational instruments, map- or contour-matching)
    • G01C 22/00: Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G01D 21/02: Measuring two or more variables by means not covered by a single other subclass
    • G01S 17/42: Simultaneous measurement of distance and other co-ordinates
    • G01S 17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/87: Combinations of systems using electromagnetic waves other than radio waves
    • G01S 17/89: Lidar systems specially adapted for mapping or imaging
    • G01S 17/894: 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S 17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S 7/4808: Evaluating distance, position or velocity data
    • G06F 7/58: Random or pseudo-random number generators
    • G06N 20/00: Machine learning
    • G06N 7/01: Probabilistic graphical models, e.g. probabilistic networks
    • G06T 7/579: Depth or shape recovery from multiple images, from motion
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/10024: Color image
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/10048: Infrared image
    • G06T 2207/30252: Vehicle exterior; vicinity of vehicle
    • H04W 4/024: Guidance services making use of location information
    • H04W 4/33: Services specially adapted for indoor environments, e.g. buildings
    • H04W 4/40: Services specially adapted for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W 4/44: Communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]

Definitions

  • The invention relates to intelligent agricultural equipment and methods for livestock and poultry, and in particular to a livestock and poultry information perception robot based on autonomous navigation and a map construction method.
  • At present, the livestock and poultry industry in China mainly relies on manual inspection, fixed-point monitoring and mobile inspection for information collection.
  • Manual inspection requires breeders to patrol and check the houses at fixed times every day; the labor intensity is high, the monitoring efficiency is low, the work carries certain risks, and it does not meet animal welfare requirements.
  • Fixed-point monitoring installs detection equipment at fixed locations; it is costly, monitors few parameters, lacks flexibility and is therefore of limited use.
  • Mobile inspection mainly places inspection equipment on a rail and controls its movement along the rail by remote control or similar means; however, laying the rail is difficult, costly and of low applicability, and it is inconvenient to retrofit rails in an existing breeding environment.
  • To address these problems, the present invention provides a livestock and poultry information perception robot based on autonomous navigation and a map construction method, which use robot technology to completely replace the functions of the breeder in this part of the work and realize fully automatic breeding production.
  • A livestock and poultry information perception robot based on autonomous navigation comprises a four-wheeled trolley and an autonomous navigation system, a motion module and an information acquisition module mounted on the trolley; a lifting platform is arranged above the four-wheeled trolley.
  • The autonomous navigation system includes a lidar, an RGB-D camera, an inertial measurement unit and an odometer for acquiring surrounding environment information, and a main control module for information processing and control; the lidar, RGB-D camera, inertial measurement unit, odometer and main control module are all fixedly mounted on the four-wheeled trolley, and the lidar, RGB-D camera, inertial measurement unit and odometer are all connected to the main control module.
  • The motion module includes a DC motor group for driving the robot, a push rod motor for moving the lifting platform up and down, and a microcontroller that controls the speed and direction of the DC motor group and the push rod motor.
  • The DC motor group consists of four DC motors installed at the four bottom corners of the trolley; the output shafts of the four motors are connected to the four wheels at the bottom corners.
  • The push rod motor is fixed to the bottom of the trolley; its output end is connected to the lifting platform through a transmission structure, so that operation of the push rod motor drives the lifting platform up and down.
  • The information collection module includes a thermal imager for collecting animal behavior information, an environment detection sensor module for collecting environmental information, and a wireless transmission module for data transmission; the thermal imagers are placed on both sides of the bottom of the lifting platform, and the environment detection sensor module is placed on the lifting platform.
  • The DC motor group and the push rod motor are both connected to the microcontroller, and the microcontroller is connected to the main control module; the main control module receives the surrounding environment information and controls the operation of the DC motor group and the push rod motor, thereby controlling the movement of the trolley and the lifting of the platform, and uses SLAM for autonomous map construction and navigation.
  • The thermal imager and the environment detection sensor module are electrically connected to the wireless transmission module, which is connected to an external wireless receiver; the external wireless receiver receives, stores and processes the environment perception information collected by the thermal imager and the environment detection sensor module.
  • The environment detection sensor module includes a variety of sensors, including but not limited to sound, temperature, humidity, light intensity, hydrogen sulfide, ammonia and carbon dioxide sensors; an illustrative data-record sketch follows below.
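By way of illustration only (not part of the original disclosure): the publication does not specify a data format for the wireless link, so the following minimal Python sketch assumes a hypothetical JSON record for bundling readings from such a sensor module before they are handed to the wireless transmission module. All field names and the send_over_wireless callback are illustrative assumptions.

```python
import json
import time

def package_readings(readings: dict, robot_pose: tuple) -> str:
    """Bundle one round of sensor readings with a timestamp and the robot pose.

    `readings` maps sensor names (e.g. 'temperature_C', 'nh3_ppm') to values;
    the keys are illustrative only, not taken from the patent.
    """
    record = {
        "timestamp": time.time(),
        "pose_xy_heading": robot_pose,   # (x, y, heading) in the world frame
        "readings": readings,
    }
    return json.dumps(record)

def send_over_wireless(payload: str) -> None:
    print("TX:", payload)                # stand-in for the real radio driver

send_over_wireless(package_readings(
    {"temperature_C": 24.6, "humidity_pct": 61.0, "nh3_ppm": 8.2},
    robot_pose=(3.2, 1.7, 0.5),
))
```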
  • Step S1: Control the robot to move in the indoor working environment, and use the lidar, RGB-D camera, inertial measurement unit and odometer to acquire surrounding environment information during the movement, including obstacle distance information, image and depth information, pose information in the local coordinate system and odometer information; the pose information includes the first real-time global coordinates, and the odometer information includes the second real-time global coordinates, the speed, the heading angle and the angular velocity of the wheels.
  • Step S2: The main control module receives and processes the surrounding environment information and obtains the real-time global coordinates of the robot in the world coordinate system through coordinate transformation.
  • The world coordinate system is a three-dimensional coordinate system with the environment as its origin; the local coordinate system is a three-dimensional coordinate system with the four-wheeled trolley as its origin.
  • Step S3: Use the positioning global coordinates, speed, heading angle and angular velocity of the robot in the world coordinate system as the state vector of the Kalman filter; the global coordinates are obtained by processing the first real-time global coordinates, the second real-time global coordinates and the odometer information.
  • Step S4: Construct the state model of the Kalman filter from the state vector, construct the observation model of the Kalman filter from the observation models of the odometer, the inertial measurement unit and the lidar, and solve the state model and the observation model according to the Kalman filter algorithm to obtain the global optimal solution of the state vector at time t.
  • Step S5: Combine the image information collected by the RGB-D camera with the Monte Carlo simultaneous localization and mapping algorithm to determine the global optimal solution of the state vector under the state model and observation model of the Kalman filter described in step S4.
  • While the robot moves in the area where the map is to be constructed, the obstacle distance information collected by the lidar is used to judge whether the robot turns and whether there are obstacles during the movement, and the image information collected by the RGB-D camera is used to judge whether characteristic road markings are present.
  • The lidar, inertial measurement unit and odometer perform feature matching on the information collected in the area to be mapped to obtain the pose in the world coordinate system, where the pose consists of the global coordinates and heading angle of the robot in the world coordinate system.
  • When the robot does not turn, encounters no obstacle and the RGB-D camera captures no characteristic road marking, the control vector of the Kalman filter state model is the pose in the world coordinate system.
  • When the robot turns, encounters an obstacle or the RGB-D camera captures a characteristic road marking, the control vector of the Kalman filter state model is the optimal solution of the state vector.
  • Step S6: Iteratively solve the state model and the observation model of the Kalman filter to obtain the positioning position, and then construct the global map.
  • Step S4 is specifically as follows; an illustrative sketch of the sub-filter structure is given after this list.
  • At time t the state vector is constructed as X_c(t) = [x_t, y_t, θ_t, v_t, ω_t]^T, where x_t and y_t are the positioning global coordinates of the robot in the world coordinate system, θ_t is the heading angle, v_t is the speed, ω_t is the angular velocity, and T denotes the matrix transpose; the state model of the Kalman filter is X_c(t+1) = f(X_c(t)) + W_t, where f(X_c(t)) is the nonlinear state transition function of the state vector X_c(t) at time t, W_t is the process noise of the Kalman filter, and Δt is the time interval between two adjacent moments.
  • The Kalman filter is split into two parallel, independent sub-filters. The observation model of the first sub-filter is Z_(t+1) = h_1 X_c(t) + W_1(t), where Z_Las is the observation model of the lidar, Z_IMU is the observation model of the inertial measurement unit, W_1(t) is the noise of the lidar and the inertial measurement unit, and h_1 is the observation matrix of the first sub-filter.
  • The observation model of the second sub-filter is Z_2(t+1) = h_2 X_c(t) + W_2(t), where Z_odom is the observation model of the odometer, W_2(t) is the noise of the odometer and the inertial measurement unit, and h_2 is the observation matrix of the second sub-filter.
  • The covariance Q_(t) of the process noise W_t and the estimated error covariance P_(t) of the filter are allocated to the two sub-filters as Q_1(t') = α_1^(-1) Q_(t), P_1(t') = (1 - α_1^(-1)) P_(t), Q_2(t') = α_2^(-1) Q_(t), P_2(t') = (1 - α_2^(-1)) P_(t), where Q_1(t') and Q_2(t') are the measurement noise covariances of the respective sub-filters at time t, P_1(t') and P_2(t') are the estimated error covariances of the respective sub-filters at time t, and α_1 and α_2 are the distribution coefficients of the weights of the first and second sub-filters.
  • The global optimal solution of the state vector at time t is obtained by fusing the global optimal solutions of the state vectors of the sub-filters at time t.
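By way of illustration only (not part of the original disclosure): the sketch below mirrors the two-sub-filter structure of steps S402 to S404 (shared prediction, noise/covariance allocation with the coefficients α_1 and α_2, then fusion of the two sub-filter estimates). The concrete form of the transition function f, the observation matrices, and the information-weighted fusion at the end are assumptions, since the corresponding formulas appear only as images in the publication.

```python
import numpy as np

def f(x, dt):
    """Assumed unicycle-style transition for the state [x, y, theta, v, omega]."""
    px, py, th, v, om = x
    return np.array([px + v * dt * np.cos(th),
                     py + v * dt * np.sin(th),
                     th + om * dt,
                     v,
                     om])

def sub_filter_update(x_pred, P_pred, z, H, R):
    """One linear Kalman measurement update for a sub-filter."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

def federated_step(x, P, Q, z_lidar_imu, z_odom_imu, H1, H2, R1, R2,
                   alpha1=2.0, alpha2=2.0, dt=0.1):
    # Shared prediction with the state model X(t+1) = f(X(t)) + W(t).
    x_pred = f(x, dt)
    # Information sharing: allocate process noise / error covariance (step S404).
    Q1, P1 = Q / alpha1, P * (1.0 - 1.0 / alpha1)
    Q2, P2 = Q / alpha2, P * (1.0 - 1.0 / alpha2)
    x1, P1 = sub_filter_update(x_pred, P1 + Q1, z_lidar_imu, H1, R1)  # lidar + IMU
    x2, P2 = sub_filter_update(x_pred, P2 + Q2, z_odom_imu, H2, R2)   # odometer + IMU
    # Assumed information-weighted fusion of the two sub-filter estimates.
    P_glob = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
    x_glob = P_glob @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2)
    return x_glob, P_glob
```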
  • Step S6 is specifically as follows.
  • S601: Convert the motion observation model into a likelihood function.
  • S602: Divide the area to be mapped into multiple grid cells and scan it with the lidar; cells containing an obstacle are set to 1 and cells without an obstacle are set to 0, yielding a local occupancy grid map that serves as the initial global map (a grid-construction sketch follows below).
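By way of illustration only (not part of the original disclosure): a minimal sketch of step S602, turning one lidar scan into a 0/1 occupancy grid. The grid resolution, scan format and the handling of the robot pose are assumptions.

```python
import numpy as np

def scan_to_grid(ranges, angles, robot_xy, robot_heading,
                 grid_size=(200, 200), resolution=0.05):
    """Mark grid cells hit by lidar returns as occupied (1); all others stay 0."""
    grid = np.zeros(grid_size, dtype=np.uint8)
    ox, oy = grid_size[0] // 2, grid_size[1] // 2   # map origin at the grid centre
    for r, a in zip(ranges, angles):
        if not np.isfinite(r):
            continue                                # skip missing returns
        wx = robot_xy[0] + r * np.cos(robot_heading + a)
        wy = robot_xy[1] + r * np.sin(robot_heading + a)
        i, j = int(ox + wx / resolution), int(oy + wy / resolution)
        if 0 <= i < grid_size[0] and 0 <= j < grid_size[1]:
            grid[i, j] = 1                          # obstacle cell set to 1
    return grid
```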
  • S603: Create particles according to the Monte Carlo algorithm and take the particle positions as possible positioning positions of the robot; the real-time global coordinates obtained from the odometer and the inertial measurement unit are fused by weighting to obtain the new positioning position of the robot, specifically P = P_odom · θ_Δt' + P_IMU · (1 - θ_Δt'), where P is the positioning position after weighted fusion, P_odom is the second real-time global coordinate obtained by the odometer, P_IMU is the first real-time global coordinate obtained by the inertial measurement unit, θ_Δt' is the odometer weight, Δt' is the positioning duration, γ is the duration over which the second real-time global coordinates obtained by the odometer reach a stable positioning, and n is a time exponent parameter that depends on the actual situation and is generally taken as 3.
  • S604: A Gaussian distribution with mean 0 and variance σ² is used to describe the particle weight update, and the particle weights of the Monte Carlo algorithm are updated; in the update, e denotes the natural constant, and the quantities involved are the plane position of the i-th particle at time k, the initial plane position of the i-th particle, and the weight of the i-th particle at time k, with k denoting the time index. The updated particle weights are then normalized.
  • S605: The current positioning position of the robot is computed from the updated particle weights, where N denotes the total number of particles and P_i is the position of the i-th weighted-fused particle.
  • S606: According to the updated particle weights, particles with smaller weights are discarded and particles with larger weights are retained, as follows.
  • S6061: Apply multinomial resampling to the updated weights of all particles and construct the discrete cumulative distribution function F(i), where F(i) denotes the cumulative weight of the i-th particle.
  • S6062: Generate a set {u_j} of uniformly distributed random numbers on [0, 1]; when the cumulative weight F(i) ≤ u_j the particle weight is considered small, and when F(i) > u_j the particle weight is considered large, in which case the current particle is copied as a new particle with its weight set to 1/N.
  • S6063: Repeat the multinomial resampling of step S6062 N times to generate N new particles and complete the particle update; the positions of the finally updated particles are taken as the positioning position of the robot in the world coordinate system (an illustrative sketch of the fusion and weight update follows below).
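By way of illustration only (not part of the original disclosure): a minimal sketch of the weighted odometer/IMU fusion of step S603 and the Gaussian weight update of step S604. The exact expression of the odometer weight θ_Δt' and the arguments of the Gaussian are given only as images in the publication, so the rising weight curve and the use of the fused position inside the Gaussian below are assumptions for illustration.

```python
import numpy as np

def odometer_weight(dt_positioning, gamma, n=3):
    """Assumed form of the odometer weight theta: it grows with the positioning
    duration relative to the stabilisation time gamma and is capped at 1
    (n is the time exponent parameter, typically 3)."""
    return min(1.0, (dt_positioning / gamma) ** n)

def fuse_position(p_odom, p_imu, dt_positioning, gamma, n=3):
    """Weighted fusion P = P_odom * theta + P_IMU * (1 - theta)  (step S603)."""
    theta = odometer_weight(dt_positioning, gamma, n)
    return theta * np.asarray(p_odom) + (1.0 - theta) * np.asarray(p_imu)

def update_weights(particles, fused_position, sigma):
    """Gaussian weight update (step S604, assumed form): particles close to the
    fused position receive larger weights; weights are then normalised."""
    d2 = np.sum((particles - fused_position) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / np.sum(w)

# Example with a handful of 2-D particles.
particles = np.array([[1.0, 0.9], [1.2, 1.1], [3.0, 2.5]])
p = fuse_position([1.1, 1.0], [1.05, 0.95], dt_positioning=0.4, gamma=1.0)
print(update_weights(particles, p, sigma=0.5))
```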
  • In addition, characteristic road sign image information is fused: if the image from the RGB-D camera contains no livestock or poultry cage information, and the distance between the current positioning global coordinates and the edge obstacle scanned by the laser is smaller than a threshold, the navigation system issues a turn signal to the motion module, then moves forward to the next intersection and issues the turn signal again, until the navigation mode is switched off (a sketch of this rule follows below).
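By way of illustration only (not part of the original disclosure): a small sketch of the turn rule just described. The cage-visibility flag, the distance input and the threshold value are placeholders; the publication does not give a numeric threshold.

```python
def should_issue_turn(cage_visible: bool, dist_to_edge_obstacle_m: float,
                      threshold_m: float = 0.8) -> bool:
    """Turn rule: turn when the RGB-D image shows no cage and the robot is
    closer than `threshold_m` to the laser-scanned edge obstacle.
    The 0.8 m default is an illustrative value only."""
    return (not cage_visible) and (dist_to_edge_obstacle_m < threshold_m)

# Hypothetical use inside the navigation loop:
if should_issue_turn(cage_visible=False, dist_to_edge_obstacle_m=0.5):
    print("issue turn signal to motion module, then continue to next intersection")
```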
  • The present invention also includes the path planning process of the autonomous navigation system, which specifically comprises:
  • Step 1: Input the environment map, estimate the pose of the robot with the Monte Carlo algorithm, and match the pose of the robot between the grid map and the real working environment.
  • Step 2: Input the pose of the navigation target.
  • Step 3: The autonomous navigation main control module integrates the above information and performs global path planning with Dijkstra's algorithm to obtain the optimal path.
  • Step 4: The autonomous navigation main control module performs local real-time planning and transmits the linear velocity and angular velocity control signals of the robot to the motion module to complete the motor control of the robot and realize autonomous navigation and movement.
  • The map input in step 1 can be an environment map built by the robot itself or an existing map of the working environment; if an existing map is used, the map construction part is omitted and route planning is performed directly after the map format conversion is completed.
  • The global path planning in step 3 plans the overall path according to the given target position and the global map, and computes the optimal route from the robot to the target position as the robot's global route.
  • In the local real-time planning of step 4, the dynamic window approach is used to plan and adjust the route of each cycle according to the map information and obstacles that may appear nearby; time, collision and other conditions are evaluated comprehensively to select the optimal route, and the linear velocity and angular velocity within the travel cycle are computed to achieve real-time obstacle avoidance (an illustrative global-planning sketch follows below).
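By way of illustration only (not part of the original disclosure): a small sketch of the global planning of step 3, Dijkstra's algorithm over an occupancy grid. The 4-connected neighbourhood and unit step cost are assumptions.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on an occupancy grid (0 = free, 1 = obstacle),
    4-connected, unit step cost. Returns the list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

print(dijkstra_grid([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```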
  • On the basis of the lidar and the odometer, the invention adds the positioning information of the inertial measurement unit and the characteristic image information, and exploits the advantages and complementarity of the various positioning sources in a cooperative manner, thereby obtaining the optimal solution of the robot's state vector.
  • The speed and accuracy of characteristic image information processing can effectively solve the unique-positioning problem in the symmetrical mode inside livestock and poultry houses, forming high-precision pose prediction information and reducing the number of particles used by the Monte Carlo algorithm for real-time localization and mapping; the positioning accuracy of the robot (on a low-power processor) can therefore be further improved, the positioning performance of the livestock and poultry information perception robot in a symmetrical environment is guaranteed, map construction is completed, and the application requirements of the livestock and poultry house environment are met.
  • The present invention can realize autonomous map construction and path planning, achieve autonomous navigation in the working environment, and use the detection sensor module and the wireless transmission module to collect and transmit a variety of environmental information and animal behavior and health information in real time, solving the problems of low efficiency, high cost and heavy dependence on breeders of existing livestock and poultry house inspection methods.
  • The present invention fuses laser, odometer, inertial measurement unit and image information, exploits the advantages and complementarity of the various positioning sources in a collaborative manner, and combines the precise and rapid recognition of characteristic road signs to form highly accurate pose prediction information, thereby reducing the number of particles in the Monte Carlo real-time localization and mapping method; this greatly improves the positioning accuracy of the robot (on a low-power processor) and thus further improves its positioning accuracy inside the livestock house.
  • Compared with the prior art, the present invention has the following significant advantages:
  • The present invention combines robotics and autonomous navigation technology to provide a livestock and poultry information perception robot and method based on autonomous navigation, which replaces the breeder for daily inspection work and realizes unmanned, autonomous automatic detection of environmental information and automatic collection of animal behavior in the structured livestock and poultry breeding environment.
  • The inertial measurement unit positioning information and the characteristic image information exploit the advantages and complementarity of the various positioning sources in a collaborative manner to obtain the optimal solution of the robot's state vector.
  • The resulting positioning accuracy ensures the positioning performance of the livestock and poultry information perception robot in a symmetrical environment, completes the map construction, and meets the application requirements of the livestock and poultry house environment.
  • ROS (Robot Operating System) is applied as the robot operating system; with SLAM (Simultaneous Localization and Mapping) technology as the core, Kalman filter and particle filter algorithms are integrated to realize the robot's path planning, facilitating subsequent secondary development of the product.
  • The invention overcomes the heavy dependence of existing technologies and applications on breeders; without changing the structure of the breeding environment itself, it combines robotics and autonomous navigation technology to replace the breeder with an autonomously navigating livestock and poultry information sensing/collection robot for daily inspection work, realizing automatic detection of environmental information and animal behavior (health status) in the breeding environment and preventing the adverse effects caused by people entering the livestock and poultry houses.
  • It provides a technical guarantee for the sustainable and healthy development of poultry breeding and animal husbandry, with advantages of high efficiency, high economic benefit and wide applicability, and has very high industry and application value.
  • Figure 1 is a structural diagram of the device provided by the present invention.
  • Figure 2 is the overall flow chart of the method of the autonomous navigation system of the present invention.
  • Figure 3 is a flowchart of map construction of the autonomous navigation system of the present invention.
  • Figure 4 is a flow chart of the path planning method of the autonomous navigation system of the present invention.
  • Figure 5 is a map construction result diagram of the autonomous navigation system of the present invention.
  • In the figures: DC motor group 1, microcontroller 2, four-wheeled trolley 3, push rod motor 4, inertial measurement unit 5, thermal imager 6, lifting platform 7, environment detection sensor module 8, RGB-D camera 9, lidar 10, wireless transmission module 11, main control module 12, power management module 13.
  • As shown in FIG. 1, the robot comprises a four-wheeled trolley 3 and an autonomous navigation system, a motion module and an information collection module mounted on the trolley 3; a lifting platform 7 is arranged above the trolley 3.
  • The autonomous navigation system includes a lidar 10, an RGB-D camera 9, an inertial measurement unit 5 and an odometer for acquiring surrounding environment information, and a main control module 12 for information processing and control. The lidar 10, RGB-D camera 9, inertial measurement unit 5, odometer and main control module 12 are all fixedly mounted on the four-wheeled trolley 3, and the lidar 10, RGB-D camera 9, inertial measurement unit 5 and odometer are all connected to the main control module 12. The RGB-D camera 9 and the lidar 10 are located at the front of the trolley 3, with the RGB-D camera 9 facing directly forward.
  • The motion module includes a DC motor group 1 for driving the robot, a push rod motor 4 for moving the lifting platform 7 up and down, and a microcontroller 2 that controls the speed and direction of the DC motor group 1 and the push rod motor 4.
  • The DC motor group 1 consists of four DC motors installed at the four bottom corners of the trolley 3, whose output shafts are connected to the four wheels at the bottom corners of the trolley 3.
  • The push rod motor 4 is fixed to the bottom of the trolley 3; its output end is connected to the lifting platform 7 through a transmission structure, and its operation drives the lifting platform 7 up and down. The DC motor group 1 and the push rod motor 4 are both connected to the microcontroller 2.
  • The microcontroller 2 is connected to the main control module 12; the main control module 12 receives the surrounding environment information and controls the operation of the DC motor group 1 and the push rod motor 4, thereby controlling the movement of the trolley 3 and the lifting of the platform 7.
  • The main body of the robot is the four-wheeled trolley 3 with the lifting platform 7; the lifting platform 7 carries the thermal imager 6 and the environment detection sensor module 8.
  • Information at different positions can therefore be collected according to task requirements, and the robot is not limited to the information perception function alone.
  • The information collection module includes a thermal imager 6 for collecting animal behavior information, an environment detection sensor module 8 for collecting environmental information, and a wireless transmission module 11 for data transmission. The thermal imager 6 and the environment detection sensor module 8 are electrically connected to the wireless transmission module 11, which is connected to an external wireless receiver. The thermal imagers 6 are placed on both sides of the bottom of the lifting platform 7, and the environment detection sensor module 8 is placed on the lifting platform 7.
  • The thermal imagers of the information collection module are installed symmetrically, so that animal behavior at different heights on both sides of the aisle of the livestock and poultry house can be collected and detected simultaneously.
  • The specific implementation also includes a power management module 13, which is connected to the lidar 10, RGB-D camera 9, inertial measurement unit 5, odometer, main control module 12, DC motor group 1, push rod motor 4 and microcontroller 2.
  • The power management module 13 supplies the power required for normal operation of the robot components.
  • The lifting platform 7 moves the thermal imager 6 and the environment detection sensor module 8 up and down to perceive and collect the required environment information from livestock and poultry cages at different heights.
  • The lidar 10 is used to measure the distance between the robot and surrounding obstacles in real time.
  • The RGB-D camera 9 is used to obtain RGB images and depth information of the robot's surroundings.
  • The inertial measurement unit 5 is used to obtain the pose information of the robot, and the odometer is used to obtain the odometer information of the robot.
  • The main control module 12 is based on the ROS platform and combines Kalman filter and particle filter algorithms to realize autonomous map construction and path planning; an illustrative ROS sketch follows below.
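By way of illustration only (not part of the original disclosure): since the main control module runs on ROS, the following minimal ROS 1 (rospy) sketch shows one conventional way the planner's linear and angular velocity commands could be forwarded to a base controller over the standard /cmd_vel topic. The topic name, node name and rates are conventional assumptions, not details taken from the patent.

```python
#!/usr/bin/env python
# Minimal ROS 1 node: forward planned velocities to the base as a Twist message.
import rospy
from geometry_msgs.msg import Twist

def drive(linear_mps, angular_radps, duration_s):
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)                      # 10 Hz command rate
    cmd = Twist()
    cmd.linear.x = linear_mps
    cmd.angular.z = angular_radps
    end = rospy.Time.now() + rospy.Duration(duration_s)
    while not rospy.is_shutdown() and rospy.Time.now() < end:
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    rospy.init_node('perception_robot_drive_demo')
    drive(0.2, 0.0, duration_s=2.0)            # creep forward briefly
```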
  • the environmental detection sensor module 8 includes a variety of sensors, including but not limited to sound sensors, temperature sensors, humidity sensors, light intensity sensors, hydrogen sulfide sensors, ammonia sensors, and carbon dioxide sensors.
  • Autonomous navigation is divided into two parts: autonomous map construction and path planning.
  • the autonomous map construction process includes the following steps:
  • Step S1: Control the robot to move in the indoor working environment through the ROS robot operating system platform, and use the lidar, RGB-D camera, inertial measurement unit and odometer to acquire surrounding environment information during the movement, including obstacle distance information, image and depth information, pose information in the local coordinate system and odometer information.
  • the pose information includes the first real-time global coordinates
  • the odometer information includes the second real-time global coordinates, speed, heading angle and angular velocity of the wheels;
  • Step S2 The main control module receives the surrounding environment information for processing, and obtains the real-time global coordinates of the robot in the world coordinate system through coordinate transformation;
  • Step S3: Use the positioning global coordinates, speed, heading angle and angular velocity of the robot in the world coordinate system as the state vector of the Kalman filter; the global coordinates are obtained by processing the first real-time global coordinates, the second real-time global coordinates and the odometer information.
  • Step S4: Construct the state model of the Kalman filter from the state vector, construct the observation model of the Kalman filter from the observation models of the odometer, the inertial measurement unit and the lidar, and solve the state model and the observation model according to the Kalman filter algorithm to obtain the optimal solution of the state vector.
  • X_c(t) represents the state vector at time t.
  • f(X_c(t)) is the nonlinear state transition function of the state vector X_c(t) at time t.
  • W_t is the process noise of the Kalman filter.
  • Δt is the time interval between two adjacent moments.
  • Z_Las is the observation model of the lidar.
  • Z_IMU is the observation model of the inertial measurement unit.
  • W_1(t) is the noise of the lidar and the inertial measurement unit.
  • h_1 is the observation matrix of the first sub-filter.
  • Z_odom is the observation model of the odometer.
  • W_2(t) is the noise of the odometer and the inertial measurement unit.
  • h_2 is the observation matrix of the second sub-filter.
  • Step S5: Construct the motion state transition model and the motion observation model of the Monte Carlo algorithm.
  • S502: The robot moves in the area where the map is to be constructed; the obstacle distance information collected by the lidar is used to judge whether the robot turns and whether there are obstacles during the movement, and the image information collected by the RGB-D camera is used to judge whether characteristic road markings are captured. The lidar, inertial measurement unit and odometer perform feature matching on the information collected in the area to be mapped to obtain the pose in the world coordinate system.
  • When the robot does not turn, encounters no obstacle and the RGB-D camera captures no characteristic road marking, the control vector of the Kalman filter state model is the pose in the world coordinate system.
  • When the robot turns, encounters an obstacle or the RGB-D camera captures a characteristic road marking, the control vector of the Kalman filter state model is the optimal solution of the state vector, i.e. the five values contained in the state vector.
  • Step S6: The state model and the observation model of the Kalman filter are iteratively solved by the subsequent particle iterative update procedure to obtain the positioning position, and the global map is then constructed.
  • S602: Divide the area to be mapped into multiple grid cells and scan it with the lidar; cells containing an obstacle are set to 1 and cells without an obstacle are set to 0, yielding a local occupancy grid map that serves as the initial global map.
  • S603: Create particles according to the Monte Carlo algorithm and take the particle positions as possible positioning positions of the robot; the real-time global coordinates obtained from the odometer and the inertial measurement unit are fused by weighting to obtain the new positioning position of the robot, specifically P = P_odom · θ_Δt' + P_IMU · (1 - θ_Δt'), where P is the positioning position after weighted fusion, P_odom is the second real-time global coordinate obtained by the odometer, P_IMU is the first real-time global coordinate obtained by the inertial measurement unit, θ_Δt' is the odometer weight, Δt' is the positioning duration, γ is the duration over which the second real-time global coordinates obtained by the odometer reach a stable positioning, and n is a time exponent parameter that depends on the actual situation and is generally taken as 3.
  • S604: A Gaussian distribution with mean 0 and variance σ² is used to describe the particle weight update, and the particle weights of the Monte Carlo algorithm are updated; in the update, e denotes the natural constant, and the quantities involved are the position of the i-th particle at time k, the initial position of the i-th particle, and the weight of the i-th particle at time k, with k denoting the time index. The updated particle weights are then normalized.
  • S605: The current positioning position of the robot is computed from the updated particle weights, where N denotes the total number of particles and P_i is the position of the i-th weighted-fused particle.
  • S606: According to the updated particle weights, particles with smaller weights are discarded and particles with larger weights are retained, as follows.
  • S6061: Apply multinomial resampling to the updated weights of all particles and construct the discrete cumulative distribution function F(i), where F(i) denotes the cumulative weight of the i-th particle.
  • S6062: Generate a set {u_j} of uniformly distributed random numbers on [0, 1]; when the cumulative weight F(i) ≤ u_j the particle weight is considered small, and when F(i) > u_j the particle weight is considered large, in which case the current particle is copied as a new particle with its weight set to 1/N.
  • S6063: Repeat the multinomial resampling of step S6062 N times to generate N new particles and complete the particle update; the positions of the finally updated particles are taken as the positioning position of the robot in the world coordinate system.
  • In this specific implementation, characteristic road sign image information is also fused: if the image from the RGB-D camera contains no livestock or poultry cage information, and the distance between the current positioning global coordinates and the edge obstacle scanned by the laser is smaller than a threshold, the navigation system issues a turn signal to the motion module, then moves forward to the next intersection and issues the turn signal again, until the navigation mode is switched off.
  • the path planning process of the autonomous navigation system includes the following steps:
  • Step 1: Input the environment map. The input environment map is the global map constructed autonomously by the robot in the steps above, and the global map is rasterized. The pose of the robot is estimated by the Monte Carlo algorithm, and the robot's pose is matched between the occupied-grid global map and the real working environment.
  • Step 2: Input the pose of the navigation target.
  • Step 3: The autonomous navigation main control module integrates the above information and performs global path planning with Dijkstra's algorithm to obtain the optimal path.
  • The global path planning plans the overall path according to the given target position and the global map, and computes the optimal route from the robot to the target position as the robot's global route.
  • Step 4: The autonomous navigation main control module performs local real-time planning and transmits the linear velocity and angular velocity control signals of the robot to the motion module to complete the motor control of the robot and realize autonomous navigation and movement.
  • In the local real-time planning, the dynamic window approach is used to plan and adjust the route of each cycle according to the map information and obstacles that may appear nearby; time, collision and other conditions are evaluated comprehensively to select the optimal route, and the linear velocity and angular velocity within the travel cycle are computed to achieve real-time obstacle avoidance.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Optics & Photonics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A livestock and poultry information perception robot based on autonomous navigation, and a map construction method. The livestock and poultry information perception robot comprises a four-wheeled trolley (3) and an autonomous navigation system, a motion module and an information collection module; the autonomous navigation system comprises a lidar (10), an RGB-D camera (9), an inertial measurement unit (5), an odometer and a main control module (12); the motion module comprises a DC motor group (1), a push rod motor (4) and a microcontroller (2); the information collection module comprises a thermal imager (6), an environment detection sensor module (8) and a wireless transmission module (11). The robot is controlled to move in the indoor working environment while the lidar (10), RGB-D camera (9), inertial measurement unit (5) and odometer acquire surrounding environment information during the movement; the data are processed to obtain the positioning position and a global map is constructed, which can improve the positioning accuracy of the robot and meet the application requirements. The invention overcomes the heavy dependence on breeders and realizes automatic detection of the breeding environment, with advantages of high efficiency, high economic benefit and wide applicability.

Description

Livestock and poultry information perception robot based on autonomous navigation, and map construction method

Technical Field

The present invention relates to intelligent agricultural equipment and methods for livestock and poultry, and in particular to a livestock and poultry information perception robot based on autonomous navigation and a map construction method.

Background Art

With the continuous development of society, intensive, large-scale and intelligent production is the inevitable trend of the breeding industry. On the one hand, this production structure can effectively increase productivity, save social resources, reduce costs and bring higher socio-economic benefits; on the other hand, it also meets animal welfare requirements and is the necessary basis for implementing standardized production and improving the quality of livestock products. Information such as animal behavior and environmental factors is an important indicator of large-scale breeding production, and collecting and managing this information efficiently poses new challenges for precise, intelligent breeding production.

At present, the livestock and poultry industry in China mainly uses manual inspection, fixed-point monitoring and mobile inspection for information collection. Manual inspection requires breeders to patrol and check at fixed times every day; the labor intensity is high, the monitoring efficiency is low, the work carries certain risks and it does not meet animal welfare requirements. Fixed-point monitoring installs detection equipment at fixed locations; it is costly, monitors few parameters, lacks flexibility and has considerable limitations. Mobile inspection mainly places inspection equipment on a rail and controls its movement along the rail by remote control or similar means, but laying the rail is difficult, costly and of low applicability, and it is inconvenient to retrofit rails in an existing breeding environment. As the breeding industry gradually transforms toward large-scale, intensive production, relatively structured livestock and poultry house layouts have been rapidly promoted in China, which also allows robot technology to be widely applied in the livestock and poultry breeding industry. In recent years, researchers have repeatedly tried to apply robotics, artificial intelligence and information processing technology to the industry, but a breeder is still required to enter the house and issue control signals at close range, so the level of intelligence is low and the application potential is limited. The relatively structured environment inside livestock and poultry houses provides convenient conditions for applying robot technology, so that autonomous navigation technology for intelligent robots can better accomplish tasks in the houses. However, the symmetrical structure of livestock and poultry houses also produces a symmetrical pattern in the positioning process, so that the robot cannot determine its current unique pose and cannot accurately match the monitored animal information and environmental information with the robot's relative position inside the house.
Summary of the Invention

In order to overcome the drawbacks of the existing technology and solve the problems in the background art, and in particular to effectively solve the unique-positioning problem that arises in the symmetrical mode of conventional laser-based autonomous navigation under the symmetrical structure of livestock and poultry houses, the present invention provides a livestock and poultry information perception robot based on autonomous navigation and a map construction method, which use robot technology to completely replace the functions of the breeder in this part of the work and realize fully automatic breeding production.

In order to achieve the object of the present invention, the specific technical solutions of the present invention are as follows:

1. A livestock and poultry information perception robot based on autonomous navigation:

The robot comprises a four-wheeled trolley and an autonomous navigation system, a motion module and an information collection module mounted on the trolley; a lifting platform is arranged above the four-wheeled trolley. The autonomous navigation system includes a lidar, an RGB-D camera, an inertial measurement unit and an odometer for acquiring surrounding environment information, and a main control module for information processing and control; the lidar, RGB-D camera, inertial measurement unit, odometer and main control module are all fixedly mounted on the four-wheeled trolley, and the lidar, RGB-D camera, inertial measurement unit and odometer are all connected to the main control module. The motion module includes a DC motor group for driving the robot, a push rod motor for moving the lifting platform up and down, and a microcontroller that controls the speed and direction of the DC motor group and the push rod motor; the DC motor group consists of four DC motors installed at the four bottom corners of the trolley, whose output shafts are connected to the four wheels at the bottom corners; the push rod motor is fixed to the bottom of the trolley, its output end is connected to the lifting platform through a transmission structure, and its operation drives the lifting platform up and down. The information collection module includes a thermal imager for collecting animal behavior information, an environment detection sensor module for collecting environmental information, and a wireless transmission module for data transmission; the thermal imagers are placed on both sides of the bottom of the lifting platform, and the environment detection sensor module is placed on the lifting platform.

The robot further includes a power management module, which is connected to the lidar, RGB-D camera, inertial measurement unit, odometer, main control module, DC motor group, push rod motor and microcontroller.

The DC motor group and the push rod motor are both connected to the microcontroller, and the microcontroller is connected to the main control module; the main control module receives the surrounding environment information and controls the operation of the DC motor group and the push rod motor, thereby controlling the movement of the trolley and the lifting of the platform, and uses SLAM for autonomous map construction and navigation.

The thermal imager and the environment detection sensor module are electrically connected to the wireless transmission module, which is connected to an external wireless receiver; the external wireless receiver receives, stores and processes the environment perception information collected by the thermal imager and the environment detection sensor module.

The environment detection sensor module includes a variety of sensors, including but not limited to sound, temperature, humidity, light intensity, hydrogen sulfide, ammonia and carbon dioxide sensors.

2. An autonomous map construction method for autonomous navigation:

Step S1: Control the robot to move in the indoor working environment, and use the lidar, RGB-D camera, inertial measurement unit and odometer to acquire surrounding environment information during the movement, including obstacle distance information, image and depth information, pose information in the local coordinate system and odometer information; the pose information includes the first real-time global coordinates, and the odometer information includes the second real-time global coordinates, the speed, the heading angle and the angular velocity of the wheels.

Step S2: The main control module receives and processes the surrounding environment information and obtains the real-time global coordinates of the robot in the world coordinate system through coordinate transformation.

The world coordinate system is a three-dimensional coordinate system with the environment as its origin; the local coordinate system is a three-dimensional coordinate system with the four-wheeled trolley as its origin.
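By way of illustration only (not part of the original disclosure): step S2 converts measurements expressed in the trolley-centred local frame into the world frame. The following minimal sketch assumes a planar pose (x, y, heading) for the robot; the full transformation in the publication is three-dimensional.

```python
import math

def local_to_world(point_local, robot_pose):
    """Transform a point from the trolley-centred local frame into the world
    frame, given the robot pose (x, y, heading) in the world frame."""
    lx, ly = point_local
    rx, ry, th = robot_pose
    wx = rx + lx * math.cos(th) - ly * math.sin(th)
    wy = ry + lx * math.sin(th) + ly * math.cos(th)
    return wx, wy

# A lidar return 2 m straight ahead of a robot at (5, 3) heading 90 degrees.
print(local_to_world((2.0, 0.0), (5.0, 3.0, math.pi / 2)))   # -> (5.0, 5.0)
```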
Step S3: Use the positioning global coordinates, speed, heading angle and angular velocity of the robot in the world coordinate system as the state vector of the Kalman filter; the global coordinates are obtained by processing the first real-time global coordinates, the second real-time global coordinates and the odometer information.

Step S4: Construct the state model of the Kalman filter from the state vector, construct the observation model of the Kalman filter from the observation models of the odometer, the inertial measurement unit and the lidar, and solve the state model and the observation model according to the Kalman filter algorithm to obtain the global optimal solution of the state vector at time t.

Step S5: Combine the image information collected by the RGB-D camera with the Monte Carlo simultaneous localization and mapping algorithm to determine the global optimal solution of the state vector under the state model and observation model of the Kalman filter described in step S4.

S501: The robot moves in the area where the map is to be constructed; the obstacle distance information collected by the lidar is used to judge whether the robot turns and whether there are obstacles during the movement, and the image information collected by the RGB-D camera is used to judge whether characteristic road markings are captured. The lidar, inertial measurement unit and odometer perform feature matching on the information collected in the area to be mapped to obtain the pose in the world coordinate system, where the pose consists of the global coordinates and heading angle of the robot in the world coordinate system.

S502: The following judgment is made:

When the robot does not turn, encounters no obstacle and the RGB-D camera captures no characteristic road marking, the control vector of the Kalman filter state model is the pose in the world coordinate system.

When the robot turns, encounters an obstacle or the RGB-D camera captures a characteristic road marking, the control vector of the Kalman filter state model is the optimal solution of the state vector.

Step S6: Iteratively solve the state model and the observation model of the Kalman filter to obtain the positioning position, and then construct the global map.

Step S4 is specifically as follows:

S401: At time t the state vector X_c(t) is constructed as X_c(t) = [x_t, y_t, θ_t, v_t, ω_t]^T, where x_t and y_t are the positioning global coordinates of the robot in the world coordinate system, θ_t is the heading angle, v_t is the speed, ω_t is the angular velocity, and T denotes the matrix transpose.

S402: The Kalman filter state model is constructed as X_c(t+1) = f(X_c(t)) + W_t (the explicit form of f is given as a formula image in the original publication), where X_c(t) denotes the state vector at time t, f(X_c(t)) is the nonlinear state transition function of the state vector X_c(t) at time t, W_t is the process noise of the Kalman filter, and Δt is the time interval between two adjacent moments.
S403: The Kalman filter is split into two parallel, independent parts, a first sub-filter and a second sub-filter, where:

The observation model of the first sub-filter is Z_(t+1) = h_1 X_c(t) + W_1(t) (its explicit form is given as a formula image), where Z_Las is the observation model of the lidar, Z_IMU is the observation model of the inertial measurement unit, W_1(t) is the noise of the lidar and the inertial measurement unit, and h_1 is the observation matrix of the first sub-filter.

The observation model of the second sub-filter is Z_2(t+1) = h_2 X_c(t) + W_2(t) (its explicit form is given as a formula image), where Z_odom is the observation model of the odometer, W_2(t) is the noise of the odometer and the inertial measurement unit, and h_2 is the observation matrix of the second sub-filter.

S404: The covariance Q_(t) of the filter process noise W_t and the estimated error covariance P_(t) of the filter are processed and allocated to the first and second sub-filters with the following formulas:

Q_1(t') = α_1^(-1) Q_(t)
P_1(t') = (1 - α_1^(-1)) P_(t)
Q_2(t') = α_2^(-1) Q_(t)
P_2(t') = (1 - α_2^(-1)) P_(t)

where Q_1(t') and Q_2(t') are the measurement noise covariances of the respective sub-filters at time t, P_1(t') and P_2(t') are the estimated error covariances of the respective sub-filters at time t, and α_1 and α_2 are the distribution coefficients of the weights of the first and second sub-filters; the relation between the coefficients is given as a formula image.

The global optimal solution of the state vector at time t is obtained by fusing the global optimal solutions of the state vectors of the sub-filters at time t (the fusion formulas are given as formula images).
Step S6 is specifically as follows:

S601: Convert the motion observation model into a likelihood function.

S602: Divide the area to be mapped into multiple grid cells and scan it with the lidar; cells containing an obstacle are set to 1 and cells without an obstacle are set to 0, yielding a local occupancy grid map that serves as the initial global map.

S603: Create particles according to the Monte Carlo algorithm and take the particle positions as possible positioning positions of the robot; the real-time global coordinates obtained by the odometer and the inertial measurement unit are fused by weighting to obtain the new positioning position of the robot, specifically

P = P_odom · θ_Δt' + P_IMU · (1 - θ_Δt')

where P is the positioning position after weighted fusion, P_odom is the second real-time global coordinate obtained by the odometer, P_IMU is the first real-time global coordinate obtained by the inertial measurement unit, θ_Δt' is the odometer weight (its expression is given as a formula image), Δt' is the positioning duration, γ is the duration over which the second real-time global coordinates obtained by the odometer reach a stable positioning, and n is a time exponent parameter that depends on the actual situation and is generally taken as 3.

S604: A Gaussian distribution with mean 0 and variance σ² is used to describe the particle weight update, and the particle weights of the Monte Carlo algorithm are updated (the update formula is given as a formula image); in the update, e denotes the natural constant, and the quantities involved are the plane position of the i-th particle at time k, the initial plane position of the i-th particle, and the weight of the i-th particle at time k, with k denoting the time index. The updated particle weights are then normalized.

S605: The current positioning position of the robot is computed from the updated particle weights (the formula is given as a formula image), where n denotes the total number of particles and P_i is the position of the i-th weighted-fused particle.

S606: According to the updated particle weights, particles with smaller weights are discarded and particles with larger weights are retained, as follows:

S6061: Apply multinomial resampling to the updated weights of all particles and construct the discrete cumulative distribution function (given as a formula image), where F(i) denotes the cumulative weight of the i-th particle.

S6062: Generate a set {u_j} of uniformly distributed random numbers on [0, 1], where u_j denotes the generated random numbers and j denotes the j-th randomly generated number in the set, and then make the following judgment: when the cumulative weight F(i) ≤ u_j, the weight of the particle is considered small; when the cumulative weight F(i) > u_j, the weight of the particle is considered large, and the current particle is copied as a new particle with its weight set to 1/N.

S6063: Repeat the multinomial resampling of step S6062 N times to generate N new particles and complete the particle update; the positions of the finally updated particles are taken as the positioning position of the robot in the world coordinate system.
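By way of illustration only (not part of the original disclosure): a minimal sketch of the multinomial resampling of steps S6061 to S6063, using the cumulative weight function F(i) and uniform random numbers u_j. The handling of ties and rounding is an assumption, since the published formulas are given as images.

```python
import numpy as np

def multinomial_resample(particles, weights):
    """Multinomial resampling: build the cumulative weight F(i), draw N
    uniform numbers u_j on [0, 1], copy the first particle whose cumulative
    weight exceeds u_j, and give every new particle the weight 1/N."""
    n = len(particles)
    cumulative = np.cumsum(weights)          # F(i), the cumulative weights
    cumulative[-1] = 1.0                     # guard against rounding error
    u = np.random.uniform(0.0, 1.0, size=n)  # the random set {u_j}
    idx = np.searchsorted(cumulative, u)     # first i with F(i) >= u_j
    new_particles = particles[idx].copy()
    new_weights = np.full(n, 1.0 / n)
    return new_particles, new_weights

demo_particles = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
print(multinomial_resample(demo_particles, np.array([0.1, 0.7, 0.2])))
```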
In the specific implementation of the present invention, characteristic road sign image information is also fused: if the image presented by the RGB-D camera contains no livestock or poultry cage information, and the distance between the current positioning global coordinates and the edge obstacle scanned by the laser is smaller than a threshold, the navigation system issues a turn signal to the motion module, then moves forward to the next intersection and issues the turn signal again, until the navigation mode is switched off.

The present invention also includes the path planning process of the autonomous navigation system, which specifically comprises:

Step 1: Input the environment map, estimate the pose of the robot with the Monte Carlo algorithm, and match the pose of the robot between the grid map and the real working environment.

Step 2: Input the pose of the navigation target.

Step 3: The autonomous navigation main control module integrates the above information and performs global path planning with Dijkstra's algorithm to obtain the optimal path.

Step 4: The autonomous navigation main control module performs local real-time planning and transmits the linear velocity and angular velocity control signals of the robot to the motion module to complete the motor control of the robot and realize autonomous navigation and movement.

The map input in step 1 can be an environment map built autonomously by the robot or an existing map of the working environment; if an existing map is used, the map construction part is omitted and route planning is performed directly after the map format conversion is completed.

The global path planning in step 3 plans the overall path according to the given target position and the global map, and computes the optimal route from the robot to the target position as the robot's global route.

In the local real-time planning of step 4, the dynamic window approach is used to plan and adjust the route of each cycle according to the map information and obstacles that may appear nearby; time, collision and other conditions are evaluated comprehensively to select the optimal route, and the linear velocity and angular velocity within the travel cycle are computed to achieve real-time obstacle avoidance.
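By way of illustration only (not part of the original disclosure): a compact sketch of the dynamic-window idea used in the local planner, which samples (v, ω) pairs in a window around the current velocity, rolls each pair out for one planning cycle and scores it by goal progress and obstacle clearance. The scoring weights, window limits and sampling resolution are illustrative assumptions, not the patent's parameters.

```python
import math

def dwa_choose_velocity(pose, v_now, w_now, goal, obstacles,
                        dt=0.2, horizon=1.0,
                        v_lim=(0.0, 0.5), w_lim=(-1.0, 1.0),
                        accel=0.2, alpha=1.0, beta=0.5):
    """Pick the (v, w) pair in the dynamic window with the best score."""
    best, best_cmd = -float("inf"), (0.0, 0.0)
    v_candidates = [max(v_lim[0], v_now - accel) + i * 0.05 for i in range(9)]
    w_candidates = [w_now - 0.5 + i * 0.125 for i in range(9)]
    for v in v_candidates:
        if v > min(v_lim[1], v_now + accel):
            continue
        for w in w_candidates:
            if not (w_lim[0] <= w <= w_lim[1]):
                continue
            # Forward-simulate one planning horizon with constant (v, w).
            x, y, th = pose
            t = 0.0
            while t < horizon:
                x += v * dt * math.cos(th)
                y += v * dt * math.sin(th)
                th += w * dt
                t += dt
            clearance = min((math.hypot(x - ox, y - oy) for ox, oy in obstacles),
                            default=10.0)
            if clearance < 0.2:               # would collide: reject this pair
                continue
            goal_progress = -math.hypot(goal[0] - x, goal[1] - y)
            score = alpha * goal_progress + beta * clearance
            if score > best:
                best, best_cmd = score, (v, w)
    return best_cmd

print(dwa_choose_velocity((0, 0, 0), 0.2, 0.0, goal=(2, 0), obstacles=[(1.0, 0.5)]))
```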
On the basis of the lidar and the odometer, the present invention adds the positioning information of the inertial measurement unit and the characteristic image information, and exploits the advantages and complementarity of the various positioning sources in a cooperative manner to obtain the optimal solution of the robot's state vector. By further using the speed and accuracy of characteristic image information processing, the unique-positioning problem in the symmetrical mode inside livestock and poultry houses can be solved effectively, high-precision pose prediction information is formed, and the number of particles used by the Monte Carlo algorithm for real-time localization and mapping is reduced. The positioning accuracy of the robot (on a low-power processor) can therefore be further improved, the positioning performance of the livestock and poultry information perception robot in a symmetrical environment is guaranteed, map construction is completed, and the application requirements of the livestock and poultry house environment are met.

Through the effective fusion of laser, odometer, inertial measurement unit and image information, the present invention can realize autonomous map construction and path planning, achieve autonomous navigation in the working environment, and use the detection sensor module and the wireless transmission module to collect and transmit a variety of environmental information and animal behavior and health information in real time, solving the problems of low efficiency, high cost and heavy dependence on breeders of existing livestock and poultry house inspection methods.

At the same time, the present invention fuses laser, odometer, inertial measurement unit and image information, exploits the advantages and complementarity of the various positioning sources in a collaborative manner, and combines the precise and rapid recognition of characteristic road signs to form highly accurate pose prediction information, thereby reducing the number of particles in the Monte Carlo real-time localization and mapping method, which greatly improves the positioning accuracy of the robot (on a low-power processor) and thus further improves its positioning accuracy inside the livestock and poultry house.

Compared with the prior art, the present invention has the following significant advantages:

1) The present invention combines robotics and autonomous navigation technology to provide a livestock and poultry information perception robot and method based on autonomous navigation, which replaces the breeder for daily inspection work and realizes unmanned, autonomous automatic detection of environmental information and automatic collection of animal behavior in the structured livestock and poultry breeding environment.

2) Combined with the Kalman filter algorithm, the observation model of the Kalman filter is constructed from the observation models of the odometer, the inertial measurement unit and the lidar; on the basis of the conventional lidar and odometer, the inertial measurement unit positioning information and the characteristic image information are added, and the advantages and complementarity of the various positioning sources are exploited in a cooperative manner to obtain the optimal solution of the robot's state vector.

3) Combined further with the precise and rapid recognition of characteristic road signs, highly accurate pose prediction information is formed, thereby reducing the number of particles in the Monte Carlo real-time localization and mapping method, which can greatly improve the positioning accuracy of the robot (on a low-power processor), guarantee the positioning performance of the livestock and poultry information perception robot in a symmetrical environment, complete the map construction, and meet the application requirements of the livestock and poultry house environment.

4) ROS (Robot Operating System) is applied for the first time as the robot operating system in the livestock and poultry breeding industry; with SLAM (Simultaneous Localization and Mapping) technology as the core, Kalman filter and particle filter algorithms are integrated to realize the path planning of the robot, which facilitates subsequent secondary development of the product.

The present invention overcomes the heavy dependence of existing technologies and applications on breeders. Without changing the structure of the breeding environment itself, it combines robotics and autonomous navigation technology to replace the breeder with an autonomously navigating livestock and poultry information sensing/collection robot for daily inspection work, realizing automatic detection of environmental information and animal behavior (health status) in the breeding environment, preventing the adverse effects caused by people entering the livestock and poultry houses, and providing a technical guarantee for the sustainable and healthy development of poultry breeding and animal husbandry, with advantages of high efficiency, high economic benefit and wide applicability and with very high industry and application value.

Brief Description of the Drawings

Figure 1 is a structural diagram of the device provided by the present invention;

Figure 2 is the overall flow chart of the method of the autonomous navigation system of the present invention;

Figure 3 is a flow chart of the map construction of the autonomous navigation system of the present invention;

Figure 4 is a flow chart of the path planning method of the autonomous navigation system of the present invention;

Figure 5 is a map construction result diagram of the autonomous navigation system of the present invention.

In the figures: DC motor group 1, microcontroller 2, four-wheeled trolley 3, push rod motor 4, inertial measurement unit 5, thermal imager 6, lifting platform 7, environment detection sensor module 8, RGB-D camera 9, lidar 10, wireless transmission module 11, main control module 12, power management module 13.

Detailed Description of Embodiments

To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions, principles and features of the present invention are described below with reference to the accompanying drawings; the examples given are only intended to explain the present invention and are not intended to limit its scope. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present invention.
As shown in Figure 1, the robot comprises a four-wheeled trolley 3 and an autonomous navigation system, a motion module and an information collection module mounted on the trolley 3; a lifting platform 7 is arranged above the trolley 3.

As shown in Figure 1, the autonomous navigation system includes a lidar 10, an RGB-D camera 9, an inertial measurement unit 5 and an odometer for acquiring surrounding environment information, and a main control module 12 for information processing and control; the lidar 10, RGB-D camera 9, inertial measurement unit 5, odometer and main control module 12 are all fixedly mounted on the four-wheeled trolley 3, and the lidar 10, RGB-D camera 9, inertial measurement unit 5 and odometer are all connected to the main control module 12; the RGB-D camera 9 and the lidar 10 are located at the front of the trolley 3, with the RGB-D camera 9 facing directly forward.

As shown in Figure 1, the motion module includes a DC motor group 1 for driving the robot, a push rod motor 4 for moving the lifting platform 7 up and down, and a microcontroller 2 that controls the speed and direction of the DC motor group 1 and the push rod motor 4; the DC motor group 1 consists of four DC motors installed at the four bottom corners of the trolley 3, whose output shafts are connected to the four wheels at the bottom corners of the trolley 3; the push rod motor 4 is fixed to the bottom of the trolley 3, its output end is connected to the lifting platform 7 through a transmission structure, and its operation drives the lifting platform 7 up and down; the DC motor group 1 and the push rod motor 4 are both connected to the microcontroller 2, the microcontroller 2 is connected to the main control module 12, and the main control module 12 receives the surrounding environment information and controls the operation of the DC motor group 1 and the push rod motor 4, thereby controlling the movement of the trolley 3 and the lifting of the platform 7.

The main body of the robot is the four-wheeled trolley 3 with the lifting platform 7; the lifting platform 7, which has a lifting function, is mounted on the trolley 3 and carries the thermal imager 6 and the environment detection sensor module 8, so that information at different positions can be collected according to task requirements, and the robot is not limited to the information perception function alone.

As shown in Figure 1, the information collection module includes a thermal imager 6 for collecting animal behavior information, an environment detection sensor module 8 for collecting environmental information, and a wireless transmission module 11 for data transmission; the thermal imager 6 and the environment detection sensor module 8 are electrically connected to the wireless transmission module 11, which is connected to an external wireless receiver; the thermal imagers 6 are placed on both sides of the bottom of the lifting platform 7, and the environment detection sensor module 8 is placed on the lifting platform 7. The thermal imagers of the information collection module are installed symmetrically, so that animal behavior at different heights on both sides of the aisle of the livestock and poultry house can be collected and detected simultaneously.

The specific implementation also includes a power management module 13, which is connected to the lidar 10, RGB-D camera 9, inertial measurement unit 5, odometer, main control module 12, DC motor group 1, push rod motor 4 and microcontroller 2; the power management module 13 supplies the power required for the normal operation of the robot components.

The lifting platform 7 moves the thermal imager 6 and the environment detection sensor module 8 up and down to perceive and collect the required environment information from livestock and poultry cages at different heights.

The lidar 10 is used to measure the distance between the robot and surrounding obstacles in real time, the RGB-D camera 9 is used to obtain RGB images and depth information of the robot's surroundings, the inertial measurement unit 5 is used to obtain the pose information of the robot, and the odometer is used to obtain the odometer information of the robot. The main control module 12 is based on the ROS platform and combines Kalman filter and particle filter algorithms to realize autonomous map construction and path planning.

The environment detection sensor module 8 includes a variety of sensors, including but not limited to sound, temperature, humidity, light intensity, hydrogen sulfide, ammonia and carbon dioxide sensors.
As shown in Figure 2, a specific embodiment of the present invention and its implementation process are as follows:

Autonomous navigation is divided into two parts: autonomous map construction and path planning.

The autonomous map construction process includes the following steps:

Step S1: Control the robot to move in the indoor working environment through the ROS robot operating system platform, and use the lidar, RGB-D camera, inertial measurement unit and odometer to acquire surrounding environment information during the movement, including obstacle distance information, image and depth information, pose information in the local coordinate system and odometer information; the pose information includes the first real-time global coordinates, and the odometer information includes the second real-time global coordinates, the speed, the heading angle and the angular velocity of the wheels.

Step S2: The main control module receives and processes the surrounding environment information and obtains the real-time global coordinates of the robot in the world coordinate system through coordinate transformation.

Step S3: Use the positioning global coordinates, speed, heading angle and angular velocity of the robot in the world coordinate system as the state vector of the Kalman filter; the global coordinates are obtained by processing the first real-time global coordinates, the second real-time global coordinates and the odometer information.

Step S4: Construct the state model of the Kalman filter from the state vector, construct the observation model of the Kalman filter from the observation models of the odometer, the inertial measurement unit and the lidar, and solve the state model and the observation model according to the Kalman filter algorithm to obtain the optimal solution of the state vector.

S401: At time t the state vector X_c(t) is constructed as X_c(t) = [x_t, y_t, θ_t, v_t, ω_t]^T, where x_t and y_t are the positioning global coordinates of the robot in the world coordinate system, θ_t is the heading angle, v_t is the speed, ω_t is the angular velocity, and T denotes the matrix transpose.

S402: The Kalman filter state model is constructed as X_c(t+1) = f(X_c(t)) + W_t (the explicit form of f is given as a formula image), where X_c(t) denotes the state vector at time t, f(X_c(t)) is the nonlinear state transition function of the state vector X_c(t) at time t, W_t is the process noise of the Kalman filter, and Δt is the time interval between two adjacent moments.

S403: The Kalman filter is split into two parallel, independent parts, a first sub-filter and a second sub-filter, where:

The observation model of the first sub-filter is Z_(t+1) = h_1 X_c(t) + W_1(t) (its explicit form is given as a formula image), where Z_Las is the observation model of the lidar, Z_IMU is the observation model of the inertial measurement unit, W_1(t) is the noise of the lidar and the inertial measurement unit, and h_1 is the observation matrix of the first sub-filter.

The observation model of the second sub-filter is Z_2(t+1) = h_2 X_c(t) + W_2(t) (its explicit form is given as a formula image), where Z_odom is the observation model of the odometer, W_2(t) is the noise of the odometer and the inertial measurement unit, and h_2 is the observation matrix of the second sub-filter.

S404: The covariance Q_(t) of the filter process noise W_t and the estimated error covariance P_(t) of the filter are processed and allocated to the first and second sub-filters with the following formulas:

Q_1(t') = α_1^(-1) Q_(t)
P_1(t') = (1 - α_1^(-1)) P_(t)
Q_2(t') = α_2^(-1) Q_(t)
P_2(t') = (1 - α_2^(-1)) P_(t)

A further relation between the coefficients is given as a formula image in the original publication.
步骤S5:构建蒙特卡罗算法的运动状态转移模型与运动观测模型
S501:结合RGB-D相机采集的图像信息和蒙特卡罗即时定位与地图构建算法,确定步骤S4所述的卡尔曼滤波器的状态模型和观测模型下所述状态向量的全局最优解;
S502:机器人在地图待构建区域移动,通过激光雷达采集的障碍物距离信息判断机器人移动过程中是否转弯和是否有障碍物,通过RGB-D相机采集的图像信息判断是否有拍到特征道路标记;激光雷达、惯性测量单元和里程计对在地图待构建区域采集到的信息数据进行特征匹配得到世界坐标系下的位姿。
S503:作以下判断处理:
当机器人移动过程中无转弯、无障碍物或RGB-D相机没有拍到特征道路标 记时,则卡尔曼滤波器的状态模型的控制向量为世界坐标系下的位姿;
当机器人移动过程中有转弯、有障碍物或RGB-D相机拍到特征道路标记时,则卡尔曼滤波器的状态模型的控制向量为状态向量的最优解,即状态向量中包含的五个值;
步骤S6:对卡尔曼滤波器的状态模型和观测模型采用后续的粒子迭代更新处理进行迭代求解得到定位位置,进而构建获得全局地图。
S601:将运动观测模型转换为似然函数;
S602:将地图待构建区域均分为多个栅格,采用激光雷达扫描地图待构建区域,将存在障碍物的栅格设置为1,将没有障碍物的栅格设置为0,从而得到局部栅格地图,并作为初始全局地图;
S603:根据蒙特卡洛算法建立粒子,以粒子的位置作为机器人可能的定位位置,将里程计和惯性测量单元获得的实时全局坐标进行加权融合获得机器人新定位位置,具体为:
P i=P odam·θ Δt+P IMU·(1-θ Δt′)
Figure PCTCN2019101574-appb-000021
其中,P是加权融合后的定位位置,P odam是里程计获得的第二实时全局坐标,P IMU是惯性测量单元获得的第一实时全局坐标,θ Δt′是里程计权重,Δt′是定位持续时间;γ是里程计获得的第二实时全局坐标到达稳定的定位持续时间,n表示时间指数参数,由实际情况而定,一般取3;
S604:采用均值为0、方差为σ 2的高斯分布描述粒子权重更新方法,更新蒙特卡罗算法的粒子权重,更新后的粒子权重为:
Figure PCTCN2019101574-appb-000022
where (Figure PCTCN2019101574-appb-000023) is the position of the i-th particle at time k; e denotes the natural constant; (Figure PCTCN2019101574-appb-000024) denotes the initial position of the i-th particle; (Figure PCTCN2019101574-appb-000025) denotes the weight of the i-th particle at time k; and k denotes the time instant;
the updated particle weights are then normalised;
S605: the robot's current positioning position is calculated from the updated particle weights:
Figure PCTCN2019101574-appb-000026
where n denotes the total number of particles and P_i is the position of the i-th particle after weighted fusion;
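The update and estimation expressions of S604-S605 appear only as equation images; the sketch below assumes the weight is rescaled by a zero-mean Gaussian of variance σ² evaluated on each particle's displacement from its initial position, which is one plausible reading of S604 rather than the patented formula:

```python
import numpy as np

def update_and_estimate(pos_k, pos_0, weights, sigma):
    """S604-S605 sketch: Gaussian reweighting on each particle's displacement
    from its initial position, normalisation, then the weighted-mean estimate."""
    d2 = np.sum((pos_k - pos_0) ** 2, axis=1)           # squared displacement per particle
    weights = weights * np.exp(-d2 / (2.0 * sigma ** 2))
    weights = weights / np.sum(weights)                  # normalisation step of S604
    estimate = weights @ pos_k                           # weighted mean of S605
    return weights, estimate
```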
S606: according to the updated particle weights, particles whose weight (Figure PCTCN2019101574-appb-000027) is small are discarded and particles with larger weights are retained, specifically as follows:
S6061: multinomial resampling is applied to the updated weights of all particles, and a discrete cumulative distribution function is constructed:
Figure PCTCN2019101574-appb-000028
where F(i) denotes the cumulative weight of the i-th particle;
S6062: a set {u_j} of random numbers uniformly distributed on [0, 1] is generated, where u_j is the j-th randomly generated number in this set; the following judgment is then made:
when the cumulative weight F(i) ≤ u_j, the weight (Figure PCTCN2019101574-appb-000029) of the particle is considered small;
when the cumulative weight F(i) > u_j, the weight (Figure PCTCN2019101574-appb-000030) of the particle is considered large, and the current particle is copied as a new particle whose weight is set to 1/N;
S6063: the multinomial resampling of step S6062 is repeated N times to generate N new particles, completing the particle update; the positions of the finally updated particles are taken as the robot's positioning position in the world coordinate system.
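A minimal sketch of the S6061-S6063 resampling (particles is assumed to be an (N, d) NumPy array; the vectorised search performs the same per-draw comparison as S6062):

```python
import numpy as np

def multinomial_resample(particles, weights, rng=np.random.default_rng(0)):
    """S6061-S6063 sketch: discrete CDF, uniform draws u_j, copy the first
    particle whose cumulative weight exceeds u_j, reset all weights to 1/N."""
    N = len(particles)
    F = np.cumsum(weights)             # discrete cumulative distribution of S6061
    F[-1] = 1.0                        # guard against floating-point rounding
    u = rng.uniform(0.0, 1.0, size=N)  # random set {u_j} of S6062
    idx = np.searchsorted(F, u)        # first index i with F(i) >= u_j
    return particles[idx].copy(), np.full(N, 1.0 / N)
```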
In the specific implementation, characteristic landmark image information is also fused: if the image presented by the RGB-D camera contains no livestock or poultry cage information, and the distance between the current global positioning coordinate and the edge obstacle detected by the lidar is smaller than a threshold, the navigation system issues a turning signal to the motion module, the robot then travels to the next junction and another turning signal is issued, until the navigation mode is switched off.
As shown in Fig. 3, the path planning process of the autonomous navigation system comprises the following steps:
Step 1: an environment map is input; the input environment map is the global map autonomously constructed by the robot in the steps above, and the global map is rasterised. The robot's pose is estimated by the Monte Carlo algorithm, and the robot's pose in the occupancy-grid global map is matched against the real working environment;
Step 2: the pose of the navigation target is input;
Step 3: the autonomous navigation main control module integrates the above information and performs global path planning with Dijkstra's algorithm to obtain the optimal path, as sketched below. Global path planning computes an overall route from the given target position and the global map, i.e. the optimal route from the robot to the target position, which serves as the robot's global route.
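A minimal sketch of the Step-3 global planning, running Dijkstra's algorithm over the rasterised global map; the 4-connected, unit-cost moves are illustrative assumptions:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Global-planning sketch: Dijkstra over the occupancy grid (1 = obstacle,
    0 = free) with 4-connected unit-cost moves; returns the cell path."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                                   # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in prev and goal != start:
        return []                                      # goal unreachable
    path, cell = [goal], goal
    while cell != start:                               # walk back along predecessors
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```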
Step 4: the autonomous navigation main control module performs local real-time planning, transmits the control signals for the robot's linear and angular velocities to the motion module, and thus completes the motor control of the robot, achieving autonomous navigation and movement.
In local real-time planning, the dynamic window approach is used to plan and adjust the route for each control period according to the map information and any obstacles that may appear nearby; the optimal route is selected by jointly evaluating conditions such as time and collision, and the linear and angular velocities within the period are calculated, so that real-time obstacle avoidance is achieved.
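A minimal sketch of one dynamic-window evaluation cycle for the Step-4 local real-time planning; all velocity limits, the collision threshold and the scoring weights are illustrative assumptions, not values from the filing:

```python
import numpy as np

def dwa_select(v, w, pose, obstacles, goal, dt=0.1, horizon=1.5,
               a_max=0.5, alpha_max=1.0, v_max=0.8, w_max=1.5):
    """Dynamic window approach sketch: sample (v, w) pairs reachable in one
    control period, roll each out over a short horizon, reject trajectories
    that pass too close to an obstacle and score the rest."""
    best, best_score = (0.0, 0.0), -np.inf
    for v_c in np.linspace(max(0.0, v - a_max * dt), min(v_max, v + a_max * dt), 5):
        for w_c in np.linspace(max(-w_max, w - alpha_max * dt),
                               min(w_max, w + alpha_max * dt), 11):
            x, y, th = pose
            clearance = np.inf
            for _ in range(int(horizon / dt)):          # forward-simulate the trajectory
                x += v_c * np.cos(th) * dt
                y += v_c * np.sin(th) * dt
                th += w_c * dt
                if len(obstacles):
                    d = np.hypot(obstacles[:, 0] - x, obstacles[:, 1] - y)
                    clearance = min(clearance, float(np.min(d)))
            if clearance < 0.2:
                continue                                 # would pass too close: discard
            score = (-np.hypot(goal[0] - x, goal[1] - y)  # end closer to the goal
                     + 0.3 * min(clearance, 2.0)          # keep distance to obstacles
                     + 0.1 * v_c)                         # prefer forward progress
            if score > best_score:
                best, best_score = (v_c, w_c), score
    return best                                          # (linear, angular) velocity command
```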

Claims (8)

  1. A robot with perception capability of livestock and poultry information based on autonomous navigation, characterized in that it comprises a four-wheeled cart (3) and an autonomous navigation system, a motion module and an information acquisition module mounted on the four-wheeled cart (3), a lifting platform (7) being arranged above the four-wheeled cart (3); the autonomous navigation system comprises a lidar (10), an RGB-D camera (9), an inertial measurement unit (5) and an odometer for acquiring surrounding-environment information, and a main control module (12) for information processing and control; the lidar (10), the RGB-D camera (9), the inertial measurement unit (5), the odometer and the main control module (12) are all fixedly mounted on the four-wheeled cart (3), and the lidar (10), the RGB-D camera (9), the inertial measurement unit (5) and the odometer are all connected to the main control module (12); the motion module comprises a DC motor group (1) for driving the robot, a push-rod motor (4) for raising and lowering the lifting platform (7), and a microcontroller (2) controlling the speed and direction of the DC motor group (1) and the push-rod motor (4); the DC motor group (1) comprises four DC motors mounted at the four bottom corners of the four-wheeled cart (3), the output shafts of the four DC motors are connected to the four wheels at the four bottom corners of the four-wheeled cart (3), the push-rod motor (4) is fixed to the bottom of the four-wheeled cart (3), the output end of the push-rod motor (4) is connected to the lifting platform (7) through a transmission structure, and the operation of the push-rod motor (4) drives the lifting platform (7) up and down; the information acquisition module comprises a thermal imager (6) for collecting animal behaviour information, an environment detection sensor module (8) for collecting environmental information, and a wireless transmission module (11) for data transmission; the thermal imagers (6) are placed on both sides of the bottom of the lifting platform (7), and the environment detection sensor module (8) is placed on the lifting platform (7).
  2. The robot with perception capability of livestock and poultry information based on autonomous navigation according to claim 1, characterized in that it further comprises a power management module (13), the power management module (13) being connected to the lidar (10), the RGB-D camera (9), the inertial measurement unit (5), the odometer, the main control module (12), the DC motor group (1), the push-rod motor (4) and the microcontroller (2).
  3. The robot with perception capability of livestock and poultry information based on autonomous navigation according to claim 1, characterized in that the DC motor group (1) and the push-rod motor (4) are both connected to the microcontroller (2), the microcontroller (2) is connected to the main control module (12), and the main control module (12) receives the surrounding-environment information and accordingly controls the operation of the DC motor group (1) and the push-rod motor (4), thereby controlling the movement of the four-wheeled cart (3) and the lifting of the lifting platform (7), SLAM being used for autonomous map construction and navigation.
  4. The robot with perception capability of livestock and poultry information based on autonomous navigation according to claim 1, characterized in that the thermal imager (6), the environment detection sensor module (8) and the wireless transmission module (11) are electrically connected, the wireless transmission module (11) is connected to an external wireless receiver, and the external wireless receiver receives, stores and processes the environment-perception information collected by the thermal imager (6) and the environment detection sensor module (8).
  5. The robot with perception capability of livestock and poultry information based on autonomous navigation according to claim 1, characterized in that the environment detection sensor module (8) comprises multiple sensors, including but not limited to a sound sensor, a temperature sensor, a humidity sensor, a light-intensity sensor, a hydrogen sulphide sensor, an ammonia sensor and a carbon dioxide sensor.
  6. An autonomous navigation map construction method applied to the robot with perception capability of livestock and poultry information according to any one of claims 1-5, characterized in that the method comprises the following steps:
    Step S1: the robot is moved in the indoor working environment while the lidar, RGB-D camera, inertial measurement unit and odometer acquire surrounding-environment information during the movement, including obstacle distance information, image and depth information, pose information in the local coordinate system and odometry information; the pose information includes first real-time global coordinates, and the odometry information includes second real-time global coordinates, speed, heading angle and wheel angular velocity;
    Step S2: the main control module receives and processes the surrounding-environment information, and obtains the robot's real-time global coordinates in the world coordinate system through coordinate transformation;
    Step S3: the robot's global positioning coordinates in the world coordinate system, the speed, the heading angle and the angular velocity are taken as the state vector of the Kalman filter; the global coordinates are obtained by processing the first real-time global coordinates, the second real-time global coordinates and the odometry information;
    Step S4: the state model of the Kalman filter is constructed from the state vector, the observation model of the Kalman filter is constructed from the observation models of the odometer, the inertial measurement unit and the lidar, and the state model and the observation model are solved by the Kalman filtering algorithm to obtain the global optimal solution of the state vector at time t;
    Step S5: combining the image information collected by the RGB-D camera with the Monte Carlo simultaneous localization and mapping algorithm, the global optimal solution of the state vector under the state model and observation model of the Kalman filter described in step S4 is determined;
    S501: the robot moves in the area where the map is to be built; the obstacle distance information collected by the lidar is used to judge whether the robot turns and whether there is an obstacle during the movement, and the image information collected by the RGB-D camera is used to judge whether a characteristic road marker has been captured; the lidar, the inertial measurement unit and the odometer perform feature matching on the information collected in the area where the map is to be built to obtain the pose in the world coordinate system;
    S502: the following judgments are made:
    when the robot does not turn, encounters no obstacle, or the RGB-D camera captures no characteristic road marker during the movement, the control vector of the state model of the Kalman filter is the pose in the world coordinate system;
    when the robot turns, encounters an obstacle, or the RGB-D camera captures a characteristic road marker during the movement, the control vector of the state model of the Kalman filter is the optimal solution of the state vector;
    Step S6: the state model and observation model of the Kalman filter are iteratively solved to obtain the positioning position, and the global map is then constructed.
  7. The autonomous navigation map construction method according to claim 6, characterized in that step S4 is specifically:
    S401: at time t, the state vector X_c(t) is constructed as X_c(t) = [x_t, y_t, θ_t, v_t, ω_t]^T, where x_t and y_t are the robot's global positioning coordinates in the world coordinate system, θ_t is the heading angle, v_t is the speed, ω_t is the angular velocity, and T denotes the matrix transpose;
    S402: the Kalman filter state model is constructed as follows:
    X_c(t+1) = f(X_c(t)) + W_t
    Figure PCTCN2019101574-appb-100001
    where X_c(t) denotes the state vector at time t, f(X_c(t)) is the nonlinear state transition function of the state vector X_c(t) at time t, W_t is the process noise of the Kalman filter, and Δt is the time interval between two adjacent time instants;
    S403: the Kalman filter is split into two mutually independent parts, a first sub-filter and a second sub-filter, where:
    the observation model of the first sub-filter is Z_1(t+1) = h_1 X_c(t) + W_1(t), specifically:
    Figure PCTCN2019101574-appb-100002
    where Z_Las is the observation model of the lidar, Z_IMU is the observation model of the inertial measurement unit, W_1(t) is the noise of the lidar and the inertial measurement unit, and h_1 is the observation matrix of the first sub-filter;
    the observation model of the second sub-filter is Z_2(t+1) = h_2 X_c(t) + W_2(t), specifically:
    Figure PCTCN2019101574-appb-100003
    where Z_odom is the observation model of the odometer, W_2(t) is the noise of the odometer and the inertial measurement unit, and h_2 is the observation matrix of the second sub-filter;
    S404: the covariance Q(t) of the filter process noise W_t and the filter estimation-error covariance P(t) are processed and allocated to the first sub-filter and the second sub-filter using the following formulas:
    Q_1(t′) = α_1^(-1) Q(t)
    P_1(t′) = (1 − α_1^(-1)) P(t)
    Q_2(t′) = α_2^(-1) Q(t)
    P_2(t′) = (1 − α_2^(-1)) P(t)
    Figure PCTCN2019101574-appb-100004
    where Q_1(t′) is the process-noise covariance of the first sub-filter at time t, P_1(t′) is the estimation-error covariance of the first sub-filter at time t, and α_1 denotes the weight allocation coefficient of the first sub-filter; Q_2(t′) is the process-noise covariance of the second sub-filter at time t, P_2(t′) is the estimation-error covariance of the second sub-filter at time t, and α_2 denotes the weight allocation coefficient of the second sub-filter; (Figure PCTCN2019101574-appb-100005) is the global optimal solution of the state vector at time t, and (Figure PCTCN2019101574-appb-100006) is the global optimal solution of the state vector of each sub-filter at time t.
  8. The autonomous navigation map construction method according to claim 6, characterized in that step S6 is specifically:
    S601: the motion observation model is converted into a likelihood function;
    S602: the area where the map is to be built is evenly divided into a number of grid cells; the lidar scans the area, cells containing an obstacle are set to 1 and cells without an obstacle are set to 0, giving a local grid map that serves as the initial global map;
    S603: particles are created according to the Monte Carlo algorithm, the positions of the particles are taken as the robot's possible positioning positions, and the real-time global coordinates obtained from the odometer and the inertial measurement unit are fused by weighting to obtain the robot's new positioning position, specifically:
    P_i = P_odom · θ_Δt′ + P_IMU · (1 − θ_Δt′)
    Figure PCTCN2019101574-appb-100007
    where P_i is the positioning position after weighted fusion, P_odom is the second real-time global coordinate obtained from the odometer, P_IMU is the first real-time global coordinate obtained from the inertial measurement unit, θ_Δt′ is the odometer weight, and Δt′ is the localization duration; γ is the localization duration required for the second real-time global coordinates obtained from the odometer to become stable, and n is a time exponent parameter determined by the actual situation, usually taken as 3;
    S604: a Gaussian distribution with mean 0 and variance σ² is used to describe the particle-weight update method, and the particle weights of the Monte Carlo algorithm are updated; the updated particle weight is:
    Figure PCTCN2019101574-appb-100008
    where (Figure PCTCN2019101574-appb-100009) is the planar position of the i-th particle at time k; e denotes the natural constant; (Figure PCTCN2019101574-appb-100010) denotes the initial planar position of the i-th particle; (Figure PCTCN2019101574-appb-100011) denotes the weight of the i-th particle at time k; and k denotes the time instant;
    the updated particle weights are then normalised;
    S605: the robot's current positioning position is calculated from the updated particle weights:
    Figure PCTCN2019101574-appb-100012
    where n denotes the total number of particles and P_i is the position of the i-th particle after weighted fusion;
    S606: according to the updated particle weights, particles whose weight (Figure PCTCN2019101574-appb-100013) is small are discarded and particles with larger weights are retained, specifically as follows:
    S6061: multinomial resampling is applied to the updated weights of all particles, and a discrete cumulative distribution function is constructed:
    Figure PCTCN2019101574-appb-100014
    where F(i) denotes the cumulative weight of the i-th particle;
    S6062: a set {u_j} of random numbers uniformly distributed on [0, 1] is generated, where u_j is the j-th randomly generated number in this set; the following judgment is then made:
    when the cumulative weight F(i) ≤ u_j, the weight (Figure PCTCN2019101574-appb-100015) of the particle is considered small;
    when the cumulative weight F(i) > u_j, the weight (Figure PCTCN2019101574-appb-100016) of the particle is considered large, and the current particle is copied as a new particle whose weight is set to 1/N;
    S6063: the multinomial resampling of step S6062 is repeated N times to generate N new particles, completing the particle update; the positions of the finally updated particles are taken as the robot's positioning position in the world coordinate system.
PCT/CN2019/101574 2019-03-27 2019-08-20 一种基于自主导航的畜禽信息感知机器人与地图构建方法 WO2020192000A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/426,596 US11892855B2 (en) 2019-03-27 2019-08-20 Robot with perception capability of livestock and poultry information and mapping approach based on autonomous navigation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910238750.5 2019-03-27
CN201910238750.5A CN109900280B (zh) 2019-03-27 2019-03-27 一种基于自主导航的畜禽信息感知机器人与地图构建方法

Publications (1)

Publication Number Publication Date
WO2020192000A1 true WO2020192000A1 (zh) 2020-10-01

Family

ID=66953071

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/101574 WO2020192000A1 (zh) 2019-03-27 2019-08-20 一种基于自主导航的畜禽信息感知机器人与地图构建方法

Country Status (3)

Country Link
US (1) US11892855B2 (zh)
CN (1) CN109900280B (zh)
WO (1) WO2020192000A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113038074A (zh) * 2021-03-01 2021-06-25 清华大学 基于自移动数据采集设备的室内环境智能巡检方法及系统
CN113239134A (zh) * 2021-05-07 2021-08-10 河南牧原智能科技有限公司 一种猪舍导航地图建立方法、装置、电子设备及存储介质
CN114061567A (zh) * 2021-11-10 2022-02-18 郭艳芳 一种智能定位测量方法、系统、存储介质及智能终端

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109900280B (zh) * 2019-03-27 2020-12-11 浙江大学 一种基于自主导航的畜禽信息感知机器人与地图构建方法
CN110000785B (zh) * 2019-04-11 2021-12-14 上海交通大学 农业场景无标定机器人运动视觉协同伺服控制方法与设备
EP3739361A1 (en) * 2019-05-13 2020-11-18 Aptiv Technologies Limited Method and system for fusing occupancy maps
CN110220517A (zh) * 2019-07-08 2019-09-10 紫光云技术有限公司 一种结合环境语意的室内机器人鲁棒slam方法
CN110646825B (zh) * 2019-10-22 2022-01-25 北京国家新能源汽车技术创新中心有限公司 定位方法、定位系统及汽车
CN110673603B (zh) * 2019-10-31 2023-10-24 郑州轻工业大学 一种火场自主导航侦察机器人
CN112975890A (zh) * 2019-12-13 2021-06-18 希望银蕨智能科技有限公司 一种智能巡检机器人
CN111240331A (zh) * 2020-01-17 2020-06-05 仲恺农业工程学院 基于激光雷达和里程计slam的智能小车定位导航方法及系统
CN111308490B (zh) * 2020-02-05 2021-11-19 浙江工业大学 基于单线激光雷达的平衡车室内定位与导航系统
CN111559259B (zh) * 2020-04-16 2022-07-22 江苏大学 基于ros的具有激光导航功能的高效率无线充电智能小车及控制方法
CN111427363B (zh) * 2020-04-24 2023-05-05 深圳国信泰富科技有限公司 一种机器人导航控制方法及系统
CN113932820A (zh) * 2020-06-29 2022-01-14 杭州海康威视数字技术股份有限公司 对象检测的方法和装置
CN111913484B (zh) * 2020-07-30 2022-08-12 杭州电子科技大学 一种变电站巡检机器人在未知环境下的路径规划方法
CN111949032A (zh) * 2020-08-18 2020-11-17 中国科学技术大学 一种基于强化学习的3d避障导航系统及方法
CN112162575B (zh) * 2020-09-13 2021-10-22 江苏深农智能科技有限公司 基于数据模型的智能棚舍环境控制方法及系统
CN112212917A (zh) * 2020-09-28 2021-01-12 华南农业大学 一种应用在畜禽养殖场的自动移动巡检系统和方法
CN112306058A (zh) * 2020-10-15 2021-02-02 华南农业大学 清粪机器人智能导航方法、装置、系统、介质和设备
CN112611374A (zh) * 2020-10-29 2021-04-06 华中科技大学鄂州工业技术研究院 基于激光雷达和深度相机的路径规划及避障的方法及系统
CN112461237B (zh) * 2020-11-26 2023-03-14 浙江同善人工智能技术有限公司 一种应用于动态变化场景下的多传感器融合定位方法
CN112698345B (zh) * 2020-12-04 2024-01-30 江苏科技大学 一种激光雷达的机器人同时定位与建图优化方法
CN112506200B (zh) * 2020-12-14 2023-12-08 广州视源电子科技股份有限公司 机器人定位方法、装置、机器人及存储介质
CN112833890A (zh) * 2020-12-30 2021-05-25 深圳市海柔创新科技有限公司 地图构建方法、装置、设备、机器人及存储介质
CN112767477A (zh) * 2020-12-31 2021-05-07 北京纵目安驰智能科技有限公司 一种定位方法、装置、存储介质及电子设备
CN113075686B (zh) * 2021-03-19 2024-01-12 长沙理工大学 一种基于多传感器融合的电缆沟智能巡检机器人建图方法
CN113093759A (zh) * 2021-04-08 2021-07-09 中国科学技术大学 基于多传感器信息融合的机器人编队构造方法及系统
CN113327289A (zh) * 2021-05-18 2021-08-31 中山方显科技有限公司 一种多源异构传感器的同时内外参标定方法
CN113276079A (zh) * 2021-05-20 2021-08-20 广东省大湾区集成电路与系统应用研究院 移动机器人
CN113473357B (zh) * 2021-06-15 2024-05-28 深圳优地科技有限公司 辅助定位方法、装置、设备和存储介质
CN113392909B (zh) * 2021-06-17 2022-12-27 深圳市睿联技术股份有限公司 数据处理方法、数据处理装置、终端及可读存储介质
CN113465728B (zh) * 2021-06-25 2023-08-04 重庆工程职业技术学院 一种地形感知方法、系统、存储介质、计算机设备
CN114489036B (zh) * 2021-07-25 2023-07-14 西北农林科技大学 一种基于slam的室内机器人导航控制方法
CN114019953B (zh) * 2021-10-08 2024-03-19 中移(杭州)信息技术有限公司 地图构建方法、装置、设备及存储介质
CN114111791B (zh) * 2021-11-22 2024-05-17 国网江苏省电力有限公司信息通信分公司 一种智能机器人室内自主导航方法、系统及存储介质
US11829155B2 (en) * 2022-02-15 2023-11-28 EarthSense, Inc. System and method for navigating under-canopy robots using multi-sensor fusion
CN114407051A (zh) * 2022-03-07 2022-04-29 烟台艾睿光电科技有限公司 畜禽养殖场巡检方法及畜禽养殖场机器人
CN114562994A (zh) * 2022-03-09 2022-05-31 上海应用技术大学 移动机器人处于动态环境中的定位方法
CN115291241B (zh) * 2022-08-29 2024-04-26 太原理工大学 一种基于SLAM的针对辐射工厂的α/β辐射地图构建方法
CN115235478B (zh) * 2022-09-23 2023-04-07 武汉理工大学 基于视觉标签和激光slam的智能汽车定位方法及系统
JP7465511B1 (ja) 2022-11-09 2024-04-11 株式会社RoboSapiens 自走式ロボット
US11980116B1 (en) * 2023-01-12 2024-05-14 Earthsense Inc. System and a method for automation of agricultural treatments
CN116687274B (zh) * 2023-06-19 2024-04-16 深圳市毫准科技有限公司 一种基于手机的可插拔式扫地机器人及扫地清洁控制方法
CN117075614B (zh) * 2023-09-25 2024-05-10 中国农业科学院北京畜牧兽医研究所 一种养殖巡检机器人万能通用底盘系统
CN117591989B (zh) * 2024-01-19 2024-03-19 贵州省畜牧兽医研究所 一种针对畜禽活动的数据监控方法和系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018128772A (ja) * 2017-02-07 2018-08-16 パナソニックIpマネジメント株式会社 鶏舎用自律移動ロボット
KR20180105326A (ko) * 2017-03-15 2018-09-28 (주)엔스퀘어 물류 자동화를 위한 자율주행 로봇의 환경인식 및 자기위치 추정 방법
CN109211251A (zh) * 2018-09-21 2019-01-15 北京理工大学 一种基于激光和二维码融合的即时定位与地图构建方法
CN109374069A (zh) * 2018-12-18 2019-02-22 华南农业大学 畜禽养殖场即时环境信息的空间分布监测系统及监测方法
CN109460029A (zh) * 2018-11-29 2019-03-12 华南农业大学 畜禽养殖场所巡检移动平台及其控制方法
CN109900280A (zh) * 2019-03-27 2019-06-18 浙江大学 一种基于自主导航的畜禽信息感知机器人与地图构建方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9014716B2 (en) * 2012-04-10 2015-04-21 Qualcomm Incorporated Techniques for processing perceived routability constraints that may or may not affect movement of a mobile device within an indoor environment
CN105467838B (zh) * 2015-11-10 2017-12-05 山西大学 一种随机有限集框架下的同步定位与地图构建方法
CN105509755B (zh) * 2015-11-27 2018-10-12 重庆邮电大学 一种基于高斯分布的移动机器人同步定位与地图构建方法
CN106873677A (zh) * 2017-03-01 2017-06-20 连京华 一种禽舍环境巡检及调控系统
CN109066422A (zh) * 2018-09-04 2018-12-21 南京理工大学 一种变电站巡检系统
US11950572B2 (en) * 2018-10-31 2024-04-09 Dawn Equipment Company, Inc. Movable electrified fence for rotational grazing

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018128772A (ja) * 2017-02-07 2018-08-16 パナソニックIpマネジメント株式会社 鶏舎用自律移動ロボット
KR20180105326A (ko) * 2017-03-15 2018-09-28 (주)엔스퀘어 물류 자동화를 위한 자율주행 로봇의 환경인식 및 자기위치 추정 방법
CN109211251A (zh) * 2018-09-21 2019-01-15 北京理工大学 一种基于激光和二维码融合的即时定位与地图构建方法
CN109460029A (zh) * 2018-11-29 2019-03-12 华南农业大学 畜禽养殖场所巡检移动平台及其控制方法
CN109374069A (zh) * 2018-12-18 2019-02-22 华南农业大学 畜禽养殖场即时环境信息的空间分布监测系统及监测方法
CN109900280A (zh) * 2019-03-27 2019-06-18 浙江大学 一种基于自主导航的畜禽信息感知机器人与地图构建方法

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113038074A (zh) * 2021-03-01 2021-06-25 清华大学 基于自移动数据采集设备的室内环境智能巡检方法及系统
CN113038074B (zh) * 2021-03-01 2021-11-09 清华大学 基于自移动数据采集设备的室内环境智能巡检方法及系统
CN113239134A (zh) * 2021-05-07 2021-08-10 河南牧原智能科技有限公司 一种猪舍导航地图建立方法、装置、电子设备及存储介质
CN114061567A (zh) * 2021-11-10 2022-02-18 郭艳芳 一种智能定位测量方法、系统、存储介质及智能终端

Also Published As

Publication number Publication date
US11892855B2 (en) 2024-02-06
US20220147053A1 (en) 2022-05-12
CN109900280B (zh) 2020-12-11
CN109900280A (zh) 2019-06-18

Similar Documents

Publication Publication Date Title
WO2020192000A1 (zh) 一种基于自主导航的畜禽信息感知机器人与地图构建方法
WO2022021739A1 (zh) 一种语义智能变电站机器人仿人巡视作业方法及系统
Zhang et al. Automated guided vehicles and autonomous mobile robots for recognition and tracking in civil engineering
CN109374069B (zh) 畜禽养殖场即时环境信息的空间分布监测系统及监测方法
WO2017041730A1 (zh) 一种移动机器人避障导航的方法和系统
CN111958591A (zh) 一种语义智能变电站巡检机器人自主巡检方法及系统
CN111522339A (zh) 畜禽舍巡检机器人自动路径规划与定位方法及装置
CN113189977B (zh) 一种用于机器人的智能导航路径规划系统及方法
CN111309015A (zh) 一种融合多传感器的变电站巡检机器人定位导航系统
Kucuksubasi et al. Transfer learning-based crack detection by autonomous UAVs
CN112518739A (zh) 履带式底盘机器人侦察智能化自主导航方法
CN113325837A (zh) 一种用于多信息融合采集机器人的控制系统及方法
CN214520204U (zh) 一种基于深度相机和激光雷达的港区智能巡检机器人
CN113075686B (zh) 一种基于多传感器融合的电缆沟智能巡检机器人建图方法
CN113566808A (zh) 一种导航路径规划方法、装置、设备以及可读存储介质
CN118020038A (zh) 两轮自平衡机器人
CN115685736A (zh) 一种基于热成像与卷积神经网络的轮式巡检机器人
CN112857370A (zh) 一种基于时序信息建模的机器人无地图导航方法
CN111958593A (zh) 一种语义智能变电站巡视作业机器人视觉伺服方法及系统
Ahmed et al. Path planning of unmanned aerial systems for visual inspection of power transmission lines and towers
CN114527763A (zh) 基于目标检测和slam构图的智能巡检系统及方法
CN110539305A (zh) 一种用于小区安保的智能机器人管理控制系统
Lu et al. Slam and navigation of electric power intelligent inspection robot based on ROS
Chen et al. Low cost multi-sensor robot laser scanning system and its accuracy investigations for indoor mapping application
Tao Research on intelligent robot patrol route based on cloud computing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19921247

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19921247

Country of ref document: EP

Kind code of ref document: A1