WO2018133074A1 - Intelligent wheelchair system based on big data and artificial intelligence (一种基于大数据及人工智能的智能轮椅系统) - Google Patents

Intelligent wheelchair system based on big data and artificial intelligence

Info

Publication number
WO2018133074A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
displacement
information
determining
processor
Prior art date
Application number
PCT/CN2017/072101
Other languages
English (en)
French (fr)
Inventor
焦寅
李家鑫
刘伟荣
闫励
东东
黄翊峰
Original Assignee
四川金瑞麒智能科学技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 四川金瑞麒智能科学技术有限公司
Priority to PCT/CN2017/072101 priority Critical patent/WO2018133074A1/zh
Priority to CN201780082879.0A priority patent/CN110177532A/zh
Priority to US16/477,178 priority patent/US20190369631A1/en
Publication of WO2018133074A1 publication Critical patent/WO2018133074A1/zh

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G5/00 Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
    • A61G5/04 Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs motor-driven
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0265 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G2203/00 General characteristics of devices
    • A61G2203/10 General characteristics of devices characterised by specific control means, e.g. for adjustment or steering
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G2203/00 General characteristics of devices
    • A61G2203/10 General characteristics of devices characterised by specific control means, e.g. for adjustment or steering
    • A61G2203/22 General characteristics of devices characterised by specific control means, e.g. for adjustment or steering for automatically guiding movable devices, e.g. stretchers or wheelchairs in a hospital
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Definitions

  • the invention relates to an intelligent wheelchair system based on big data and artificial intelligence and a control method thereof. Specifically, it relates to a mobile intelligent robot based on big data and artificial intelligence, and a control method for controlling image detection and processing, path search, and robot movement.
  • the intelligent robotic system can identify the environment based on existing maps and move automatically. With the rapid expansion of demand for services, people expect multi-functional intelligent robot systems that can update maps, plan paths, and move automatically, especially intelligent robots that can adapt to more complex areas.
  • the intelligent wheelchair has many functions such as autonomous navigation, obstacle avoidance, human-machine dialogue, and provision of special services. It can provide a safe and convenient lifestyle for disabled people with cognitive impairment (such as dementia patients), disabled people with movement disorders (such as cerebral palsy patients, quadriplegia patients, etc.), the elderly, and others, greatly improving the quality of their daily life and work so that they can regain their self-care ability and reintegrate into society.
  • the intelligent wheelchair combines various technologies in robotics research, including robot navigation and positioning, machine vision, pattern recognition, multi-sensor information fusion, and human-computer interaction.
  • One aspect of the invention relates to an intelligent wheelchair system that includes a memory that stores instructions and a processor that communicates with the memory.
  • the processor may establish communication with the motion module and the pan/tilt through the communication port when executing the instructions; the processor may acquire information from the sensors of the motion module and the pan/tilt to construct a map; the processor may also plan a path based on the information and generate control parameters based on the information.
  • the method can include establishing communication with a motion module and a pan/tilt through a communication port; the method can include acquiring information from sensors of the motion module and the pan/tilt to construct a map; the method can also include planning a path based on the information and generating control parameters based on the information.
  • the computer program product includes a communication port for establishing communication between a processor and a motion module, and between the processor and the cloud platform.
  • the communication port can establish communication using an Application Program Interface (API).
  • FIG. 1 is a schematic illustration of an intelligent wheelchair system, in accordance with some embodiments of the present application.
  • FIG. 2 is a schematic block diagram of a robot in the robot control system of FIG. 1 shown in accordance with some embodiments of the present application;
  • FIG. 3 is a schematic block diagram of a processor in the robot of FIG. 2, shown in accordance with some embodiments of the present application;
  • FIG. 4 is a schematic block diagram of an analysis module in the processor of FIG. 3, shown in accordance with some embodiments of the present application;
  • FIG. 5 is a schematic block diagram of a navigation module in a processor, shown in some embodiments of the present application.
  • FIG. 6 is a schematic diagram of motion control shown in accordance with some embodiments of the present application.
  • FIG. 7 is a schematic diagram of motion control shown in accordance with some embodiments of the present application.
  • FIG. 8 is a schematic structural view of the sensor of FIG. 2 according to some embodiments of the present application.
  • Figure 9 is a schematic illustration of the fuselage of Figure 2, shown in accordance with some embodiments of the present application.
  • FIG. 10 is a schematic diagram of a motion module shown in accordance with some embodiments of the present application.
  • FIG. 11 is a schematic structural view of the pan/tilt head of FIG. 9 according to some embodiments of the present application.
  • Figure 12 is a robotic system shown in accordance with some embodiments of the present application.
  • FIG. 13 is a flow chart of determining control parameters of a control robot, in accordance with some embodiments of the present application.
  • FIG. 14 is a flow diagram of constructing a map, shown in accordance with some embodiments of the present application.
  • FIG. 15 is a flow diagram of determining one or more reference frames, in accordance with some embodiments of the present application.
  • FIG. 16 is a flowchart showing obtaining depth information, intensity information, and displacement information, according to some embodiments of the present application.
  • FIG. 17A is a flow chart for determining an initial value of displacement, in accordance with some embodiments of the present application.
  • FIG. 17B is a flowchart of determining a robot pose, shown in some embodiments of the present application.
  • FIG. 18 is a schematic block diagram showing the gyroscope and accelerometer determining the angle between the horizontal plane and the Z-axis, in accordance with some embodiments of the present application;
  • FIG. 19 is a flowchart of determining a corresponding angle of a reference frame, according to some embodiments of the present application.
  • FIG. 20 is a flow diagram of adjusting vertical motion in a sensor in a smart device, in accordance with some embodiments of the present application.
  • the terms "system," "unit," "module," and/or "block" used herein are one way to distinguish different components, elements, parts, or assemblies at different levels in a hierarchy. However, these terms may be replaced by other expressions if the other expressions can achieve the same purpose.
  • the intelligent robot system or method can also be applied to any type of smart device or car other than a smart robot.
  • an intelligent robotic system or method can be applied to different smart device systems, including one or any combination of a balance wheel, an unmanned ground vehicle (UGV), a smart wheelchair, and the like.
  • the intelligent robotic system can also be applied to any intelligent system including application management and/or distribution, such as systems for transmitting and/or receiving express delivery, as well as carrying people or goods to certain locations.
  • the terms "robot," "intelligent robot," and "smart device" as used in this disclosure are used interchangeably to refer to a device, apparatus, or tool that is movable and operates automatically.
  • user equipment in this disclosure may refer to a tool that may be used to request a service, subscribe to a service, or facilitate the provision of a service.
  • mobile terminal in this disclosure may refer to a tool or interface that may be used by a user to control an intelligent robot.
  • the positioning techniques used in this disclosure include Global Positioning System (GPS) technology, Global Navigation Satellite System (GLONASS) technology, Compass navigation system (COMPASS) technology, Galileo positioning system (Galileo) technology, and Quasi-Zenith satellite system (QZSS) technology.
  • the present disclosure describes an intelligent wheelchair system 100 as an exemplary system and a method of constructing a map and planning a route for the intelligent wheelchair system 100.
  • the methods and systems of the present disclosure are directed to building a map based on, for example, information obtained by the intelligent wheelchair system 100.
  • the information obtained can be captured by sensors (groups) located in the intelligent wheelchair system 100.
  • the sensor (group) can be optical or magnetoelectric.
  • the sensor can be a camera or a lidar.
  • the intelligent wheelchair system 100 can include an intelligent robot 110, a network 120, a user device 130, and a database 140.
  • the user can control the smart robot 110 through the network 120 using the user device 130.
  • the intelligent robot 110 and the user device 130 can establish communication.
  • the communication between the intelligent robot 110 and the user device 130 may be wired or wireless.
  • the intelligent robot 110 can establish communication with the user device 130 or the database 140 via the network 120, and can be wirelessly controlled based on operational commands (e.g., a command to move or rotate) from the user device 130.
  • the smart robot 110 can be directly connected to the user device 130 or database 140 via a cable or fiber.
  • the smart robot 110 may update or download a map stored in the database 140 based on communication between the smart robot 110 and the database 140.
  • the intelligent robot 110 can capture information in a route and can analyze the information to construct a map.
  • the complete map can be stored in database 140.
  • the map constructed by the intelligent robot 110 may include information corresponding to a portion of the complete map. In some embodiments, the corresponding portion of the complete map can be updated by the constructed map.
  • the complete map stored in the database 140 can be accessed by the intelligent robot 110. A portion of the complete map containing the destination and current location of the intelligent robot 110 may be selected by the intelligent robot 110 for planning the route.
  • the smart robot 110 can plan a route based on the selected map, the destination of the smart robot 110, and the current location.
  • the smart robot 110 can employ a map of the user device 130. For example, user device 130 can download a map from the Internet.
  • User device 130 can direct the motion of smart robot 110 based on a map downloaded from the Internet. As another example, user device 130 can download the latest map from database 140. Once the destination and current location of the intelligent robot 110 are determined, the user device 130 can transmit the map obtained from the database 140 to the intelligent robot 110. In some embodiments, user device 130 may be part of intelligent robot 110. In some embodiments, if the map constructed by the intelligent robot 110 includes its destination and current location, the intelligent robot 110 can plan the route based on the map constructed by itself.
  • Network 120 can be a single network or a combination of different networks.
  • network 120 can be a local area network (LAN), a wide area network (WAN), a public network, a private network, a wireless local area network (WLAN), a virtual network, a metropolitan area network (MAN), a public switched telephone network (PSTN), or any combination thereof.
  • the smart robot 110 can communicate with the user device 130 and the database 140 via Bluetooth.
  • Network 120 may also include various network access points.
  • a wired or wireless access point such as a base station or an Internet switching point, can be included in the network 120.
  • the user can send control operations from the user device 130 to the intelligent robot 110 and receive the results via the network 120.
  • the intelligent robot 110 can access information stored in the database 140 directly or via the network 120.
  • the user device 130 connectable to the network 120 may be one of the mobile device 130-1, the tablet 130-2, the notebook computer 130-3, the built-in device 130-4, or the like, or any combination thereof.
  • mobile device 130-1 can include one of a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, etc., or any combination thereof.
  • the user may control the smart robot 110 through a wearable device, which may include one of a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof.
  • the smart mobile device can include one of a smart phone, a personal digital assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, or the like, or any combination thereof.
  • the virtual reality device and/or the augmented reality device may include one of a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, augmented reality eyewear, or the like, or any combination thereof.
  • the virtual reality device and/or augmented reality device may include Google Glass, Oculus Rift, HoloLens, Gear VR, and the like.
  • the built-in device 130-4 can include an onboard computer, a car television, and the like.
  • user device 130 may be a device having a location technology that locates a location of a user device and/or user device 130 associated with the user. For example, the route may be determined by the intelligent robot 110 based on the map, the destination of the intelligent robot 110, and the current location. The location of the intelligent robot 110 can be obtained by the user device 130.
  • user device 130 may be a device with image capture capabilities. For example, the map stored in database 140 can be updated based on information captured by an image sensor (eg, a camera).
  • user device 130 may be part of intelligent robot 110. For example, a smartphone with a camera, a gyroscope, and an accelerometer can be held by the pan/tilt of the intelligent robot 110.
  • User device 130 can be used as a sensor to detect information.
  • processor 210 and memory 220 can be portions of a smartphone.
  • user device 130 may also serve as a communication interface for a user of intelligent robot 110. For example, the user can touch the screen of the user device 130 to select a control operation of the smart robot 110.
  • Database 140 can store a complete map. In some embodiments, there may be multiple intelligent robots wirelessly coupled to the database 140. Each intelligent robot connected to the database 140 can construct a map based on information captured by its sensors. In some embodiments, the map constructed by the intelligent robot may be part of a complete map. During the update process, the constructed map can replace the corresponding area in the complete map. Each smart robot can download a map from the database 140 when the route needs to be planned from the location of the intelligent robot 110 to the destination. In some embodiments, the map downloaded from database 140 may be part of a complete map that includes at least the location and destination of intelligent robot 110. The database 140 can also store historical information related to users connected to the intelligent robot 110. The history information may include, for example, a user's previous operations or information related to how the intelligent robot 110 operates. As shown in FIG. 1, database 140 can be accessed by intelligent robot 110 and user device 130.
  • intelligent wheelchair system 100 described above is merely illustrative of one example of a particular embodiment of the system and is not intended to limit the scope of the disclosure.
  • the intelligent robot 110 may include a processor 210, a memory 220, a sensor (group) 230, a communication port 240, an input/output interface 250, and a body 260.
  • the sensor (group) 230 can acquire information.
  • the information can include image data, gyroscope data, accelerometer data, position data, and distance data.
  • Processor 210 can process the information to generate one or more results.
  • the one or more results may include displacement information and depth information (eg, displacement of a camera between two adjacent frames, depth of an object in two adjacent frames).
  • processor 210 can construct a map based on one or more results. The processor 210 can also transfer the map to the database 140 for updating.
  • processor 210 may include one or more processors (eg, a single core processor or a multi-core processor).
  • processor 210 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination thereof.
  • the memory 220 can store instructions for the processor 210, and when executed, the processor 210 can perform one or more of the functions or operations described in this disclosure.
  • memory 220 may store instructions executed by processor 210 for processing information obtained by sensors (groups) 230.
  • memory 220 may automatically store information obtained by the sensor (group) 230.
  • Memory 220 may also store one or more results generated by processor 210 (e.g., displacement information and/or depth information used to construct the map).
  • processor 210 may generate one or more results and store them in memory 220, and one or more results may be read by processor 210 from memory 220 to construct a map.
  • memory 220 can store a map constructed by processor 210.
  • memory 220 may store a map obtained by processor 210 from database 140 or user device 130.
  • the memory 220 can store a map constructed by the processor 210, which can then be sent to the database 140 to update the corresponding portion of the complete map.
  • the memory 220 can temporarily store a map downloaded by the processor 210 from the database 140 or the user device 130.
  • memory 220 can include one of mass storage, removable memory, volatile read and write memory, read only memory (ROM), and the like, or any combination thereof.
  • Exemplary mass storage devices can include magnetic disks, optical disks, and solid state drives, and the like.
  • Exemplary removable memories may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and the like.
  • An exemplary volatile read and write memory can include random access memory (RAM).
  • RAM may include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), and zero-capacitor RAM (Z-RAM).
  • exemplary ROMs may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), compact disk ROM (CD-ROM), and digital versatile disk ROM.
  • the sensor (set) 230 may be capable of obtaining image data of an object or obstacle, gyroscope data, accelerometer data, position data, distance data, and any other data that may be used by the intelligent robot 110 to perform the various functions described in this disclosure.
  • the sensor (group) 230 can include one or more night vision cameras for obtaining image data in a low light environment.
  • the data and/or information obtained by the sensor (set) 230 may be stored in the memory 220 and may be processed by the processor 210.
  • one or more sensors (sets) 230 can be mounted in the body 260. More specifically, for example, one or more image sensors may be mounted in the pan/tilt of the fuselage 260.
  • One or more navigation sensors, gyroscopes, and accelerometers can be installed in the pan/tilt and the motion module.
  • the sensor (set) 230 can automatically explore the environment and detect location under the control of the processor 210.
  • sensors (groups) 230 can be used to dynamically sense or detect the location of objects, obstacles, and the like.
  • Communication port 240 may be a port for communicating within intelligent robot 110. That is, the communication port 240 can exchange information between components of the intelligent robot 110. In some embodiments, communication port 240 can transmit signals/data of processor 210 to internal portions of intelligent robot 110 and receive signals from internal portions of intelligent robot 110. For example, the processor 210 can receive information from sensors (groups) mounted on the body 260. As another example, processor 210 can transmit control operations to body 260 via communication port 240. The transmit-receive process can be implemented via communication port 240. Communication port 240 can receive various wireless signals in accordance with certain wireless communication specifications.
  • communication port 240 can be provided as a communication module for known wireless local area communication, such as Wi-Fi, Bluetooth, infrared (IR), ultra-wideband (UWB), ZigBee, etc., or as, for example, 3G , 4G or Long Term Evolution (LTE) mobile communication modules, or as a known communication method for wired communication.
  • communication port 240 is not limited to elements for transmitting/receiving signals from internal devices, and may be used as an interface for interactive communication.
  • communication port 240 can establish communication between processor 210 and other portions of intelligent robot 110 through circuitry using an application programming interface (API).
  • user device 130 may be part of intelligent robot 110.
  • communication between the processor 210 and the user device 130 can be performed by the communication port 240.
  • the input/output interface 250 may be an interface for communication between the smart robot 110 and other external devices such as the database 140.
  • the input/output interface 250 can control data transmission with the intelligent robot 110.
  • the latest map can be sent from the database 140 to the intelligent robot 110.
  • a map constructed based on information obtained by the sensor (group) 230 can be transmitted from the intelligent robot 110 to the database 140.
  • the input/output interface 250 may also include various additional components, depending on the design type of the smart robot 110, such as a wireless communication module (not shown) for wireless communication or a tuner (not shown) for adjusting broadcast signals, and components for receiving signals/data from external inputs.
  • the input/output interface 250 can be provided as a communication module for known wireless local area communication, such as Wi-Fi, Bluetooth, infrared (IR), ultra-wideband (UWB), ZigBee, etc., or as a mobile communication module such as 3G, 4G, or Long Term Evolution (LTE), or as a known input/output interface for wired communication.
  • the input/output interface 250 can be provided as a communication module for known wired communications, such as fiber optics or Universal Serial Bus (USB).
  • the intelligent robot 110 can exchange data with the database 140 of the computer via a USB interface.
  • the body 260 may be a body for holding the processor 210, the memory 220, the sensor 230, the communication port 240, and the input/output interface 250.
  • the body 260 can execute instructions from the processor 210 to move and rotate the sensor (set) 230 to obtain or detect information for the area.
  • the fuselage 260 can include a motion module and a pan/tilt, as described in other portions of the disclosure (such as FIG. 9 and its description) for the fuselage 260.
  • sensors (groups) can be installed in the motion module and the pan/tilt, respectively.
  • FIG. 3 is an exemplary block diagram of the processor 210, shown in accordance with some embodiments of the present application.
  • the processor 210 may include an analysis module 310, a navigation module 320, and an intelligent robot control module 330.
  • Analysis module 310 can analyze the information obtained from sensors (groups) 230 and generate one or more results. Analysis module 310 can construct a map based on one or more results. In some embodiments, the constructed map can be sent to database 140. In some embodiments, the analysis module 310 can receive the most recent map from the database 140 and send it to the navigation module 320. The navigation module 320 can plan a route from the location of the intelligent robot 110 to the destination. In some embodiments, the complete map can be saved in database 140. The map constructed by analysis module 310 may correspond to a portion of the complete map. The update process can replace the corresponding portion of the full map with the constructed map. In some embodiments, the map constructed by analysis module 310 can be up-to-date and include the location and destination of intelligent robot 110.
  • Analysis module 310 may not receive the map from database 140.
  • the map constructed by the analysis module 310 can be transmitted to the navigation module 320 to plan the route.
  • the intelligent robot control module 330 can generate control parameters of the intelligent robot 110 based on the route planned by the navigation module 320.
  • the control parameters may be temporarily stored in memory 220.
  • control parameters may be sent to the intelligent robotic body 260 to control the motion of the intelligent robot 110, see other portions of the disclosure (as in Figures 6, 7 and its description) for a description of the control parameters.
  • the analysis module 310 can include an image processing unit 410, a displacement determination unit 420, a depth determination unit 430, a closed loop control unit 440, and an object detection unit 450.
  • Image processing unit 410 may process the image data to perform one or more functions of intelligent robot 110.
  • Image data may include, for example, one or more images (eg, still images, video frames, etc.), initial depth and displacement of each pixel in each frame, and/or any other data associated with one or more images.
  • the displacement may include a displacement of the wheel between the time intervals in which two adjacent frames are taken and a displacement of the camera relative to the wheel.
  • Image data may be provided by any device capable of providing image data, such as a sensor (group) 230 (eg, one or more image sensors).
  • the image data can include data regarding a plurality of images.
  • An image may include a sequence of video frames (also referred to as "frames"). Each frame can be a frame, a field, and the like.
  • image processing unit 410 can process the image data to generate motion information for intelligent robot 110.
  • image processing unit 410 can process two frames (eg, a first frame and a second frame) to determine the difference between the two frames. The image processing unit 410 can then generate motion information of the intelligent robot 110 based on the difference between the frames.
  • the first frame and the second frame may be adjacent frames (eg, current and previous frames, current and subsequent frames, etc.).
  • the first frame and the second frame may also be non-adjacent frames. More specifically, for example, image processing unit 410 may determine one or more corresponding pixel points in the first frame and the second frame and one or more regions including corresponding pixel points (also referred to as "overlapping regions").
  • image processing unit 410 may determine the first pixel in the first frame as the corresponding pixel of the second pixel in the second frame.
  • the first pixel in the first frame and its corresponding pixel in the second frame may correspond to the same position of the same object.
  • image processing unit 410 can identify one or more pixel points in the first frame that do not have corresponding pixel points in the second frame.
  • Image processing unit 410 may further identify one or more regions (also referred to as "non-overlapping regions") that include the identified pixel points. The non-overlapping regions may correspond to the motion of the sensor (group) 230.
  • the pixel points of the non-overlapping regions in the first frame that have no corresponding pixel points in the second frame may be omitted.
  • image processing unit 410 can identify the intensity of pixel points in the first frame and corresponding pixel points in the second frame.
  • the intensity of the pixel points in the first frame and the corresponding pixel points in the second frame may be obtained as a criterion for determining the difference between the first frame and the second frame.
  • the RGB intensity can be selected as a criterion for determining the difference between the first frame and the second frame.
  • the pixel points, the corresponding pixel points, and the RGB intensities may be sent to the displacement determining unit 420 and/or the depth determining unit 430 for determining the displacement and depth of the second frame.
  • the depth may represent the spatial depth of an object in two frames.
  • the displacement information can be a set of displacements of a set of frames.
  • the depth information can be the depth of a set of frames. Frames, displacement information, and depth information can be used to construct the map.
  • Displacement determining unit 420 can determine displacement information based on data provided by image processing unit 410 and/or any other data.
  • the displacement information may include one or more displacements of motion information that may represent a sensor (set) 230 that generates image data (eg, an image sensor that captures multiple frames).
  • the displacement determining unit 420 can obtain data of corresponding pixels in two frames (for example, the first frame and the second frame).
  • the data may include one or more values of corresponding pixels, such as gray values of pixels, intensity, and the like.
  • the displacement determination unit 420 can determine the value of the pixel point based on any suitable color model (eg, RGB (red, green, and blue) models, HSV (hue, saturation, and brightness) models, etc.).
  • the displacement determining unit 420 can determine a difference between corresponding pairs of pixel points in the two frames. For example, the image processing unit 410 may identify the first pixel in the first frame and its corresponding pixel (eg, the second pixel) in the second frame, and may determine the first based on the transformation of the coordinates of the first pixel. Two pixel points. The first pixel and the second pixel may correspond to the same object. The displacement determining unit 420 may also determine a difference between the value of the first pixel point and the value of the second pixel point. In some embodiments, the displacement can be determined by minimizing the sum of the differences between corresponding pairs of pixel points in the first frame and the second frame.
  • the displacement determining unit 420 can determine an initial displacement ξ_{ji,1} representing an initial estimate of the displacement.
  • the initial displacement ξ_{ji,1} can be determined based on the following formula (1):

    ξ_{ji,1} = argmin_{ξ_{ji}} Σ_{x∈Ω} ‖ I_i(x) − I_j(ω(x, D_i(x), ξ_{ji})) ‖        (1)

  • x represents the coordinates of a pixel point in the first frame;
  • ω(x, D_i(x), ξ_{ji}) represents the coordinates of the corresponding pixel point in the second frame;
  • the pixel point at ω(x, D_i(x), ξ_{ji}) and the pixel point at x may correspond to the same relative position of an object;
  • ω(x, D_i(x), ξ_{ji}) is the transformed coordinate of x after the camera moves by a certain displacement ξ_{ji};
  • Ω is a set of pixel pairs, each of which includes a pixel point in the first frame and its corresponding pixel point in the second frame;
  • I_i(x) is the RGB intensity of the pixel point whose coordinates are x;
  • I_j(ω(x, D_i(x), ξ_{ji})) is the RGB intensity of the pixel point ω(x, D_i(x), ξ_{ji}).
  • the displacement determining unit 420 may calculate the corresponding pixel point ω(x, D_i(x), ξ_{ji}) based on the initial value of the displacement ξ_{ji} and the initial depth D_i(x).
  • the initial depth D_i(x) can be a zero matrix.
  • the displacement ξ_{ji} is treated as a variable.
  • the displacement determining unit 420 may need an initial value of the displacement ξ_{ji} to iterate formula (1).
  • the initial value of the displacement ξ_{ji} can be determined based on the displacement ξ_{ji}' of the wheel and the displacement ξ_{ji}'' of the camera relative to the wheel; see elsewhere in the present disclosure (e.g., FIG. 17A and its description) for a description of the initial value of ξ_{ji}.
  • the initial value of the displacement may be the vector sum of ξ_{ji}' and ξ_{ji}''. Starting from this initial value, the variable ξ_{ji} is varied to find the minimum difference between the two frames.
  • the depth determining unit 430 can determine an updated depth D_{i,1}(x).
  • the updated depth D_{i,1}(x) can be calculated by formula (2):

    D_{i,1}(x) = argmin_{D_i(x)} Σ_{x∈Ω} ‖ I_i(x) − I_j(ω(x, D_i(x), ξ_{ji,1})) ‖        (2)

  • in formula (2), the depth D_i(x) is the variable on which the difference between the two frames depends; when the difference between the two frames is smallest, the corresponding value D_{i,1}(x) is determined as the updated depth.
  • the initial depth D_i(x) can be a zero matrix.
  • the displacement determining unit 420 may also generate an updated displacement ξ_{ji,1u} based on the updated depth D_{i,1}(x).
  • the updated displacement ξ_{ji,1u} may be obtained based on formula (1) by replacing the initial depth D_i(x) with the updated depth D_{i,1}(x).
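To make the optimization in formulas (1) and (2) more concrete, the following Python sketch illustrates the core idea of searching for the displacement that minimizes the photometric difference between two frames, starting from an initial displacement estimate. It is a deliberately simplified, hypothetical illustration rather than the patent's implementation: it uses grayscale intensities instead of RGB, models the warp ω(x, D_i(x), ξ_{ji}) as a pure integer pixel shift (ignoring depth), and uses brute-force search instead of an iterative solver; all function and parameter names are invented.

```python
import numpy as np

def photometric_cost(frame_i, frame_j, shift):
    """Sum of absolute intensity differences between overlapping pixels of
    frame_i and frame_j under an integer pixel shift (dy, dx); pixels with
    no correspondence (the non-overlapping region) are omitted."""
    dy, dx = shift
    h, w = frame_i.shape
    ys_i = slice(max(0, -dy), min(h, h - dy))
    xs_i = slice(max(0, -dx), min(w, w - dx))
    ys_j = slice(max(0, dy), min(h, h + dy))
    xs_j = slice(max(0, dx), min(w, w + dx))
    overlap_i = frame_i[ys_i, xs_i].astype(float)
    overlap_j = frame_j[ys_j, xs_j].astype(float)
    if overlap_i.size == 0:
        return float("inf")
    return np.abs(overlap_i - overlap_j).sum()

def estimate_displacement(frame_i, frame_j, initial_shift, search_radius=5):
    """Search around the initial shift (e.g., derived from the vector sum of
    the wheel displacement and the camera displacement relative to the wheel)
    for the shift minimizing the photometric cost, echoing formula (1)."""
    best_shift, best_cost = tuple(initial_shift), float("inf")
    for dy in range(initial_shift[0] - search_radius, initial_shift[0] + search_radius + 1):
        for dx in range(initial_shift[1] - search_radius, initial_shift[1] + search_radius + 1):
            cost = photometric_cost(frame_i, frame_j, (dy, dx))
            if cost < best_cost:
                best_cost, best_shift = cost, (dy, dx)
    return best_shift, best_cost
```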
  • the closed loop control unit 440 can perform closed loop detection.
  • the closed loop control unit 440 can detect whether the smart robot 110 returns to the previously accessed position, and can update the displacement information based on the detection.
  • in response to determining that the intelligent robot 110 has returned to a previously visited location in the route, the closed loop control unit 440 can use g2o closed-loop detection to adjust the updated displacements of the frames to reduce the error.
  • g2o closed-loop detection is a general graph optimization framework for reducing nonlinear errors.
  • the adjusted updated displacements of the frames can be set as the displacement information.
  • if the intelligent robot 110 includes a depth sensor such as a lidar, the depth can be obtained directly, the displacement can be determined based on formula (1), and the displacement can then be adjusted by the closed loop control unit 440 to generate an adjusted displacement.
  • when a depth sensor detects the depth information, the displacement information may be a set of displacements obtained based on formula (1) and then adjusted by the closed loop control unit 440.
  • when the depth information is a set of updated depths, the displacement information may be a set of displacements calculated by formula (1), formula (2), and the closed loop control unit 440.
  • the closed loop control unit 440 can generate a map based on the frame, displacement information, and depth information.
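As a rough intuition for the loop-closure adjustment described above (and not a substitute for the nonlinear graph optimization that g2o actually performs), the sketch below simply spreads the accumulated drift evenly across the chain of displacements once the robot is detected to have returned to a previously visited location. The function name and the correction scheme are illustrative assumptions.

```python
import numpy as np

def correct_loop_drift(displacements, loop_error):
    """Subtract an equal share of the accumulated loop error from every
    displacement in the chain; a crude stand-in for pose-graph optimization."""
    correction = np.asarray(loop_error, dtype=float) / len(displacements)
    return [np.asarray(d, dtype=float) - correction for d in displacements]

# Example: a square path whose summed displacements should return to the
# start but end up offset by `drift`.
steps = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
         np.array([-0.9, 0.1]), np.array([0.0, -1.0])]
drift = sum(steps)                           # ~[0.1, 0.1] instead of [0, 0]
adjusted = correct_loop_drift(steps, drift)  # adjusted steps now sum to ~[0, 0]
```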
  • the analysis module 310 can also include an object detection unit 450 that can detect obstacles, objects, and distances from the intelligent robot 110 to obstacles and objects.
  • obstacles and objects may be detected based on data obtained by sensors (sets) 230.
  • the object detecting unit 450 may detect an object based on distance data captured by a sonar, an infrared distance sensor, an optical flow sensor, or a laser radar.
  • FIG. 5 is a block diagram of an exemplary navigation module 320 in processor 210, in accordance with some embodiments of the present application.
  • the navigation module 320 can include a drawing unit 510 and a route planning unit 520.
  • drawing unit 510 can receive a map from database 140.
  • drawing unit 510 can process the map for route planning.
  • the map may be part of a complete map in database 140.
  • a map containing the determined destinations and the location of the intelligent robot 110 can be adapted to plan the route.
  • the map obtained from database 140 can be a 3D map.
  • drawing unit 510 can convert a 3D map into a 2D map by projection techniques.
  • the drawing unit 510 may divide the object in the 3D map into pixel points and project the pixel points onto the horizontal surface to generate a 2D map.
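A minimal sketch of the projection idea described above, under the assumption that the 3D map is available as a point cloud: points within a height band are projected onto the horizontal plane and rasterized into a 2D occupancy grid that the route planning unit could search. The resolution, height band, and function names are illustrative, not taken from the patent.

```python
import numpy as np

def project_to_2d_grid(points_3d, resolution=0.05, height_range=(0.05, 1.8)):
    """Project 3D map points (x, y, z) onto the horizontal X-Y plane to form
    a 2D occupancy grid. Points outside the height band (e.g., the floor or
    the ceiling) are ignored."""
    pts = np.asarray(points_3d, dtype=float)
    mask = (pts[:, 2] >= height_range[0]) & (pts[:, 2] <= height_range[1])
    xy = pts[mask, :2]
    if xy.size == 0:
        return np.zeros((1, 1), dtype=np.uint8), np.zeros(2)
    origin = xy.min(axis=0)
    cells = np.floor((xy - origin) / resolution).astype(int)
    grid = np.zeros(cells.max(axis=0) + 1, dtype=np.uint8)
    grid[cells[:, 0], cells[:, 1]] = 1  # 1 = occupied, 0 = free
    return grid, origin
```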
  • the route planning unit 520 can plan a route from the location of the smart robot 110 to the destination based on the transmitted 2D map.
  • the intelligent robot control module 330 can determine the control parameters based on the route planned by the route planning unit 520 in the navigation module 320. In some embodiments, the intelligent robot control module 330 can divide the route into a set of segments. The intelligent robot control module 330 can obtain a set of nodes of the segment. In some embodiments, the node between the two segments may be the end of the previous segment and the beginning of the subsequent segment. The control parameters of a segment can be determined based on the start and end points.
  • the end point reached by the intelligent robot 110 may not match the predetermined end point of the segment, and the route planning unit 520 may plan a new route based on the mismatched end point (the new position of the intelligent robot 110) and the destination.
  • the intelligent robot control module 330 can segment the new route and generate one or more new segments, and then the intelligent robot control module 330 can determine a set of control parameters for each new segment.
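The segmentation of a planned route and the derivation of per-segment control parameters can be sketched as follows. This is a hypothetical parameterization (heading change plus travel distance per segment); the patent leaves the exact form of the control parameters to the intelligent robot control module 330.

```python
import math

def segment_route(waypoints):
    """Split a planned route (a list of (x, y) nodes) into segments; the end
    node of one segment is the start node of the next, as described above."""
    return [(waypoints[k], waypoints[k + 1]) for k in range(len(waypoints) - 1)]

def segment_control_parameters(start, end, current_heading):
    """Derive simple control parameters for one segment from its start and
    end points: the heading change to apply and the distance to travel."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    target_heading = math.atan2(dy, dx)
    # wrap the turn angle into (-pi, pi]
    turn = (target_heading - current_heading + math.pi) % (2 * math.pi) - math.pi
    distance = math.hypot(dx, dy)
    return {"turn": turn, "distance": distance, "heading": target_heading}
```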
  • FIG. 6 and 7 are examples of motion control of the intelligent robot 110.
  • the motion module moves around the point ICC at an angular velocity ω.
  • the motion module has two wheels, including a left wheel 610 moving at a speed v_l and a right wheel 620 moving at a speed v_r.
  • the distance between the left wheel 610 and the right wheel 620 is L.
  • the distance from the left wheel 610 or the right wheel 620 to the center point O of the two wheels is L/2.
  • the distance between the center point O and the point ICC is R.
  • FIG. 7 is an exemplary schematic diagram of a control parameter determining method of the intelligent robot 110.
  • within a time interval dt, the motion module of the intelligent robot 110 moves from point O_1 to point O_2.
  • the angle between the line connecting point O_1 and point ICC and the line connecting point O_2 and point ICC is θ. If dt, L, R, and θ are known, the speed v_l of the left wheel and the speed v_r of the right wheel can be calculated.
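Based on the geometry of FIGS. 6 and 7, the wheel speeds follow from the angular velocity about the point ICC and the distance R from the wheel center O to ICC. The sketch below assumes ICC lies to the left of the robot, so the left wheel travels on the smaller circle; the sign convention and names are illustrative.

```python
def wheel_speeds(R, theta, dt, L):
    """Compute left and right wheel speeds for a two-wheel motion module
    turning about the point ICC: the angular velocity is the swept angle
    divided by the time interval, and each wheel travels on a circle of
    radius R -/+ L/2."""
    omega = theta / dt           # angular velocity about ICC
    v_l = omega * (R - L / 2.0)  # left wheel 610 (inner wheel for a left turn)
    v_r = omega * (R + L / 2.0)  # right wheel 620 (outer wheel for a left turn)
    return v_l, v_r

# Example: sweep 0.2 rad in 0.5 s about a point 1.0 m from the wheel center,
# with 0.4 m between the wheels.
v_l, v_r = wheel_speeds(R=1.0, theta=0.2, dt=0.5, L=0.4)
```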
  • FIG. 8 is a block diagram showing an exemplary structure of a sensor (group) 230, according to an embodiment of the present application.
  • the sensor (set) 230 can include an image sensor 810, an accelerometer 820, a gyroscope 830, a sonar 840, an infrared distance sensor 850, an optical flow sensor 860, a lidar 870, and a navigation sensor 880.
  • Image sensor 810 can capture image data.
  • the analysis module 310 can construct a map.
  • the image data may include a frame, an initial depth and a displacement of each pixel on each frame.
  • the initial depth and displacement can be used to determine depth and displacement.
  • the displacement may include displacement of the wheel and displacement of the camera relative to the wheel during one time interval in which two adjacent frames are taken.
  • the accelerometer 820 and the gyroscope 830 can operate together. In order to obtain stable information from the sensor (group) 230, the balance is necessary. In some embodiments, in order to control the pitch attitude within a certain threshold, the accelerometer 820 and the gyroscope 830 can operate together. In some embodiments, the accelerometer 820 and gyroscope 830 can be held by a motion module and a pan/tilt, respectively. For a description of balance keeping, see other parts of the application, such as Figures 18, 19 and its description.
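The patent defers the balance-keeping details to FIGS. 18 and 19; as one common way to combine the two sensors, the sketch below fuses the gyroscope's pitch rate with the pitch angle implied by the accelerometer (a complementary filter) and flags when the pitch attitude leaves a threshold band. The filter constant, threshold, and names are assumptions, not the patent's method.

```python
import math

def complementary_pitch(prev_pitch, gyro_rate, accel, dt, alpha=0.98):
    """Fuse the gyroscope pitch rate (rad/s) with the pitch implied by the
    accelerometer reading (ax, ay, az, in g). The gyroscope dominates over
    short time scales; the accelerometer corrects long-term drift."""
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    gyro_pitch = prev_pitch + gyro_rate * dt
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

def needs_correction(pitch, threshold=math.radians(5)):
    """Report whether the pitch attitude has drifted outside the allowed band."""
    return abs(pitch) > threshold
```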
  • the sonar 840, the infrared distance sensor 850, and the optical flow sensor 860 can be used to position the intelligent robot 110.
  • the smart robot 110 can be positioned using one or any combination of sonar 840, infrared distance sensor 850, and optical flow sensor 860.
  • Lidar 870 can detect the depth of an object in a frame. That is, the lidar 870 can acquire the depth of each frame, and the analysis module 310 in the processor 210 does not need to calculate the depth.
  • the depth obtained by the laser radar 870 can be directly used to calculate the displacement described in equation (1) in Fig. 4.
  • the displacement obtained based on the formula (1) can be adjusted by the closed loop control unit 440.
  • the sonar 840, the infrared distance sensor 850, and the optical flow sensor 860 can position the intelligent robot 110 by detecting the distance between the intelligent robot 110 and an object or obstacle.
  • navigation sensor 880 can position the intelligent robot 110 within a coarse area or position range.
  • navigation sensor 880 can position smart robot 110 with any type of positioning system.
  • the positioning system may include a Global Positioning System (GPS), a Beidou navigation or positioning system, and a Galileo positioning system.
  • the body 260 can include a housing 910, a motion module 920, and a pan/tilt 930.
  • the housing 910 can be a housing of the body 260 that protects the modules and units in the intelligent robot 110.
  • the motion module 920 can be a motion operating element in the smart robot 110.
  • the motion module 920 can be motion based on control parameters generated by the intelligent robot control module 330 in the processor 210. For example, in a segment of the route determined by the intelligent robot control module 330, the determination of the control parameters may be based on the start and end points of the segment of the route.
  • the pan/tilt 930 can be at least one support device for the sensor depicted in FIG.
  • the pan/tilt 930 can support an image sensor 810, such as a camera, to acquire frames.
  • the pan/tilt 930 can support the accelerometer 820 and the gyroscope 830 to obtain stable information by maintaining the balance of the sensors supported by the gimbal.
  • the platform 930 can support at least one of the sonar 840, the infrared distance sensor 850, and the optical flow sensor 860.
  • the pan/tilt 930 can also support the lidar 870 and other sensors in order to detect depth information or other information.
  • navigation sensor 880 can be mounted on pan/tilt 930.
  • the sensors supported by the pan/tilt can be integrated on a single smartphone.
  • FIG. 10 is an exemplary schematic diagram of a motion module 920.
  • the motion module 920 can include a motion unit and a carrier 1010.
  • the motion unit can include two wheels, which can include a left wheel 610 and a right wheel 620.
  • Carrier 1010 can carry sonar 840 or optical flow sensor 860 to detect objects or obstacles.
  • the carrier 1010 can include an accelerometer 820 (not shown in FIG. 10) and a gyroscope 830 (not shown in FIG. 10) to maintain the balance of the motion module 920.
  • the carrier 1010 can include other sensors, such as an infrared distance sensor 850, to obtain other desired information.
  • the pan/tilt 930 can support sensors (groups) 230 to obtain information to generate maps, plan routes, or generate control parameters.
  • 11 is an exemplary schematic diagram of a pan/tilt 930 in the fuselage 260 depicted in FIG. 9, in accordance with some embodiments of the present application.
  • the platform 930 can include a spindle 1170 for controlling rotation about the X-axis, a spindle 1150 for controlling rotation about the Y-axis, and a spindle 1130 for controlling rotation about the Z-axis.
  • the X axis may be the first axis in the horizontal plane
  • the Y axis may be the second axis in the horizontal plane
  • the Z axis can be a vertical axis that is perpendicular to the horizontal plane.
  • the platform 930 can include a connecting rod 1180 for connecting the rotating shaft 1170 and the sensor, a connecting rod 1160 for connecting the rotating shaft 1150 and the rotating shaft 1170, and a connecting rod 1140 for connecting the rotating shaft 1130 and the rotating shaft 1150.
  • the platform 930 can include a connector 1110, a connecting rod 1114, and a dynamic Z-buffer bar 1120.
  • the sensors can be integrated into one user device 130 (eg, a smartphone).
  • User device 130 may include sensors such as image sensor 810, accelerometer 820, gyroscope 830, and navigation sensor 880.
  • the pan/tilt 930 can also include a connection block 1190 to support the user device 130.
  • sensors in the user device 130 obtain information.
  • the sensors in user device 130 are controlled to obtain appropriate information by adjusting the pose of pan/tilt 930.
  • the attitude of the pan/tilt 930 can be adjusted by rotating the rotating shaft 1170, the rotating shaft 1150, and the rotating shaft 1130 around the X-axis, the Y-axis, and the Z-axis.
  • the traditional 3-axis pan/tilt can be used for aerial photography.
  • a dynamic Z-buffer connecting rod 1120 is employed in the pan/tilt 930.
  • the dynamic Z-buffer connecting rod 1120 can maintain the stability of the pan/tilt 930 on the Z-axis.
  • the dynamic Z-buffer connecting rod 1120 can be a telescoping rod that can expand and contract along the Z-axis.
  • the method of operating the dynamic Z-buffer connecting rod 1120 in the pan/tilt 930 is illustrated in FIG. The rotation of the rotating shafts 1130, 1150, and 1170 and the vertical movement of the dynamic Z-buffer connecting rod 1120 are controlled in accordance with control parameters generated by the intelligent robot control module 330.
  • the intelligent robot 110 may include a processor 210, a motion module 920, and a pan/tilt 930.
  • the processor 210 can include an analysis module 310, a navigation module 320, and an intelligent robot control module 330.
  • the motion module 920 can include a motion unit 1210, a first type of sensor 1220, and a communication port 240.
  • the pan/tilt 930 can include a pan/tilt control unit 1230, a communication port 240, and a second type of sensor 1240.
  • processor 210 may send control parameters to control motion unit 1210 in motion module 920 and pan/tilt control unit 1230 in pan/tilt 930.
  • the first type of sensor 1220 and the second type of sensor 1240 can acquire information.
  • Analysis module 310 can process the acquired information and build a map.
  • the constructed map can be sent to database 140.
  • the analysis module 310 can download an up-to-date map from the database 140 and send the latest map to the navigation module 320.
  • the navigation module 320 can process the latest map and determine a route from the location of the smart robot to the destination.
  • the analysis module 310 may not download the full map; a portion of the complete map that includes the location and destination of the smart robot may be sufficient for planning the route.
  • the map downloaded by the analysis module 310 may include the location and destination of the intelligent robot 110, and the map may be the most recent map in the database.
  • the map constructed by the analysis module 310 can be sent to the navigation module 320 to plan the route.
  • the navigation module 320 can include a drawing unit 510 and a route planning unit 520.
  • drawing unit 510 can generate a 2D map for route planning.
  • the route planning unit 520 can plan a route that can be sent to the intelligent robot control module 330.
  • the intelligent robot control module 330 can divide the route into one or more segments.
  • the intelligent robot control module 330 can generate control parameters for each segment of the route.
  • each segment has a start point and an end point, and the end point of one segment can be the start point of the next segment.
  • the end position of the intelligent robot 110 in a segment may not match the end point preset for that segment, which may affect the planning of the remaining portion of the route. Thus, it may be necessary to re-plan the route based on the mismatched location (the new location of the intelligent robot 110) and the destination.
  • the re-planning route process can be performed by the navigation module 320 if a mismatch condition is detected.
  • the intelligent robotic control module 330 can generate control parameters to stabilize the motion module 920 and the pan/tilt 930.
  • the sensor can be mounted on the motion module 920 and the pan/tilt 930.
  • the first type of sensor 1220 can include at least one of an accelerometer 820, a gyroscope 830, a sonar 840, an infrared distance sensor 850, an optical flow sensor 860, a lidar 870, and a navigation sensor 880.
  • the second type of sensor 1240 can include at least one of an image sensor 810, an accelerometer 820, a gyroscope 830, a sonar 840, an infrared distance sensor 850, an optical flow sensor 860, a lidar 870, and a navigation sensor 880. .
  • the processor 210 can establish communication with the motion module 920 and the pan/tilt 930 through the communication port 240.
  • communication port 240 can be in any form.
  • communication port 240 can be a wired or wireless transceiver.
  • communication port 240 can exist in the form of an interface for interactive communication.
  • communication port 240 can establish communication between processor 210 and other portions of intelligent robot 110 through circuitry that runs an application programming interface (API).
  • the API is a set of subroutine definitions, protocols, and tools for building software and applications.
  • the API can make program development simpler by providing building blocks that can then be assembled together.
  • the API protocol can be used to design circuits for wireless communication; for example, the wireless circuits can be Wi-Fi, Bluetooth, infrared (IR), ultra-wideband (UWB), ZigBee, etc., or a mobile communication module such as 3G, 4G, or Long Term Evolution (LTE).
  • the API can separate the underlying hardware (e.g., the motion module 920 or the pan/tilt 930) from the control hardware (e.g., the processor 210).
  • the processor 210 (e.g., a portion of a smartphone) can control the motion of the wheels in the motion module 920 and the pose of an image sensor (e.g., a camera) in the pan/tilt 930.
  • the first type of sensor 1220 in the motion module 920 can send information (eg, location data) to the smartphone.
  • the second type of sensor 1240 in the pan/tilt 930 can send information (eg, camera gestures) to the smartphone.
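As a rough illustration of the API-based separation described above between the control side (eg, a smartphone processor) and the motion module 920 / pan/tilt 930, the sketch below wraps a generic communication port in a small facade. All class, method, and message names here are assumptions; the patent only specifies that an application programming interface over the communication port 240 mediates the exchange.

```python
# Illustrative facade over the communication port; not the patent's actual API.
from dataclasses import dataclass


@dataclass
class WheelCommand:
    v_left: float   # left wheel speed, m/s
    v_right: float  # right wheel speed, m/s


class RobotAPI:
    """Facade over a communication port (e.g., a serial or Wi-Fi link)."""

    def __init__(self, port):
        self.port = port  # assumed object with send()/recv(), e.g., a socket wrapper

    # --- commands from the processor to the hardware ---
    def set_wheel_speeds(self, cmd: WheelCommand):
        self.port.send({"type": "wheels", "vl": cmd.v_left, "vr": cmd.v_right})

    def set_camera_pose(self, yaw: float, pitch: float, roll: float):
        self.port.send({"type": "gimbal", "yaw": yaw, "pitch": pitch, "roll": roll})

    # --- sensor data flowing back to the processor ---
    def read_motion_sensors(self) -> dict:
        return self.port.recv("motion")   # e.g., odometry, gyroscope, accelerometer

    def read_gimbal_sensors(self) -> dict:
        return self.port.recv("gimbal")   # e.g., camera pose, IMU of the pan/tilt
```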
  • FIG. 13 is an exemplary flow chart for determining control parameters for controlling an intelligent robot.
  • the process 1300 described in FIG. 13 may be performed by the processor 210 in the intelligent robot 110 in accordance with instructions stored in the memory 220.
  • the processor 210 can obtain information from the sensor (group) 230.
  • the analysis module 310 in the processor 210 can receive information from the first type of sensors in the motion module 920 and the second type of sensors in the pan/tilt 930 via the API communication port 240.
  • the motion of the intelligent robot 110 can be controlled based on an analysis of the information.
  • the stability of the motion module 920 and the pan/tilt 930 in the intelligent robot 110 may be maintained based on an analysis of the information.
  • the processor 210 can determine the destination and current location of the intelligent robot 110 based on the received information.
  • analysis module 310 in processor 210 can receive location data from sensors (groups) 230.
  • the sensors include, but are not limited to, sonar, infrared distance sensors, optical flow sensors, lidars, navigation sensors, and the like.
  • the user may determine the destination through an input/output (I/O) interface 250.
  • a user can input a destination for the smart robot 110.
  • the processor 210 can provide the smart robot 110 with a route of motion using the information of the destination determined by the user.
  • processor 210 can determine the current location of intelligent robot 110 based on the received information.
  • processor 210 may determine the current location of smart robot 110 based on information obtained from sensors (groups) 230. For example, processor 210 may determine a coarse location of the intelligent robot based on information acquired by navigation sensor 880 in a positioning system (eg, GPS). For another example, the processor 210 can determine the precise location of the smart robot 110 based on information acquired by at least one of the sonar 840, the infrared distance sensor 850, and the optical flow sensor 860.
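One plausible reading of the coarse/precise localization described above is a simple two-rate fusion: integrate fine displacement increments (eg, from an optical flow sensor) at a high rate and pull the estimate toward the coarse navigation-sensor fix at a low rate. The sketch below shows only that idea; the patent does not prescribe this particular fusion rule, and the weight value is an assumption.

```python
# Assumed two-rate fusion of a coarse GPS-like fix with fine displacement
# increments; illustrative only, not the patent's localization algorithm.
import numpy as np


class CoarseFineLocalizer:
    def __init__(self, gps_weight=0.1):
        self.position = None          # current (x, y) estimate in metres
        self.gps_weight = gps_weight  # assumed blending weight

    def on_fine_displacement(self, dx, dy):
        """High-rate update, e.g., from optical flow between two frames."""
        if self.position is not None:
            self.position = self.position + np.array([dx, dy])

    def on_coarse_fix(self, gps_xy):
        """Low-rate update from the navigation sensor; pulls the estimate
        toward the coarse fix without discarding the fine integration."""
        gps_xy = np.asarray(gps_xy, dtype=float)
        if self.position is None:
            self.position = gps_xy
        else:
            self.position = (1 - self.gps_weight) * self.position + self.gps_weight * gps_xy


# Example: integrate flow at a high rate, correct with GPS at a low rate.
loc = CoarseFineLocalizer()
loc.on_coarse_fix((10.0, 5.0))
loc.on_fine_displacement(0.02, 0.00)
loc.on_coarse_fix((10.05, 5.01))
print(loc.position)
```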
  • the processor 210 can obtain a map based on the destination and current location of the smart robot 110, which map can be used to plan the route.
  • a complete map containing a large number of points representing the city may be stored in database 140.
  • a map containing the destination and current location of the intelligent robot 110 is required to plan the route from the current location to the destination.
  • a map containing the destination and current location of the intelligent robot 110 can be part of the complete map.
  • the analysis module 310 in the processor 210 can obtain a suitable portion of the complete map from the database 140 based on the destination and current location of the intelligent robot 110.
  • the analysis module 310 can construct a map based on information obtained from the sensors (sets) 230, which can be sent to the database 140 to update the entire map.
  • the constructed map may include the destination and current location of the intelligent robot 110.
  • the navigation module 320 can use the constructed map to plan the route.
  • the route from the current location of the intelligent robot 110 to the destination may be planned.
  • the route planning can be completed by the navigation module 320.
  • the navigation module 320 can convert the resulting map into a two-dimensional map through the drawing unit 510.
  • the route planning unit 520 can obtain a route from the current location of the intelligent robot 110 to the destination based on the two-dimensional map.
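The patent does not name the planner used by the route planning unit 520 on the 2D map; purely as one hedged example, the sketch below runs A* over a small occupancy grid, where 0 marks free cells and 1 marks obstacles.

```python
# Illustrative A* route planner on a 2D occupancy grid (an assumed choice).
import heapq


def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), start)]
    g_cost = {start: 0}
    parent = {start: None}
    closed = set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nb not in closed:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    parent[nb] = node
                    heapq.heappush(open_set, (ng + h(nb), nb))
    return None  # no route found


grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```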
  • the intelligent robot control module 330 may segment the planned route into one or more segments.
  • whether to split the route can be determined based on a threshold; for example, if the planned route is shorter than the threshold, no route splitting is required.
  • route segmentation may be accomplished by intelligent robot control module 330 in accordance with instructions in storage module 220.
  • the intelligent robot control module 330 can determine the control parameters for controlling the intelligent robot based on the segments obtained in step 1350.
  • each segment segmented by intelligent robot control module 330 in step 1350 has a start point and an end point.
  • the intelligent robot control module 330 can determine the control parameters of the intelligent robot on the road segment based on the start and end points of a certain road segment. For details on how to determine the control parameters between the two points, reference may be made to the detailed description in FIGS. 6 and 7. In some embodiments, the control parameters need to be constantly adjusted based on time.
  • when the intelligent robot 110 passes through two points on a straight line within a road segment, the intelligent robot 110 can adopt different motion speeds in different time periods while travelling from the first point to the second point.
  • the control parameters are used to ensure that the intelligent robot remains stable during motion along the planned route. For example, by maintaining the stability of the motion module 920 and the pan/tilt 930, the acquired sensory information can be made relatively more accurate. As another example, when the route is not flat, the control parameters can be used to stabilize the pan/tilt 930 in a direction perpendicular to the ground.
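For the segment-level control parameters referenced above, FIGS. 6 and 7 describe a differential-drive motion module: wheels a distance L apart, the wheel-centre point O at distance R from the instantaneous centre of curvature (ICC), and an angle α swept about the ICC in time dt. The sketch below applies the standard differential-drive kinematics implied by that geometry; it is an illustration consistent with the figure description, not a quotation of the patent's own equations.

```python
# Worked sketch of the FIG. 6 / FIG. 7 geometry (standard differential-drive
# kinematics; illustrative, not quoted from the patent).
import math


def wheel_speeds(R, alpha, dt, L):
    """Return (v_left, v_right) for a segment swept at constant speed.

    R     : distance from the wheel-centre point O to the ICC (metres)
    alpha : angle swept about the ICC during dt (radians)
    dt    : duration of the segment (seconds)
    L     : distance between the left and right wheels (metres)
    """
    omega = alpha / dt                 # angular velocity about the ICC
    v_left = omega * (R - L / 2.0)     # inner wheel travels a shorter arc
    v_right = omega * (R + L / 2.0)    # outer wheel travels a longer arc
    return v_left, v_right


# Example: quarter turn of radius 1 m in 5 s with a 0.5 m wheel base.
print(wheel_speeds(R=1.0, alpha=math.pi / 2, dt=5.0, L=0.5))
# -> roughly (0.236, 0.393) m/s
```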
  • when following the preset control parameters through a road segment, the intelligent robot 110 may stop at a position that does not match the end point preset by the intelligent robot control module 330 for that road segment.
  • the navigation module 320 can re-plan a new route based on the mismatched location of the intelligent robot and the destination.
  • the intelligent robot control module 330 may further divide the newly planned route into one or more segments, and the intelligent robot control module 330 may also determine the control parameters of the intelligent robot for the segmented segment or segments.
  • the position mismatch may be estimated after the intelligent robot 110 passes each road segment based on the comparison between the actual position of the intelligent robot and the preset end point of the road segment.
  • FIG. 14 is an exemplary flow diagram of processor 210 generating a map, in accordance with some embodiments of the present application. The steps of constructing the map shown may be accomplished by the analysis module 310 based on information obtained by the sensor (group) 230.
  • analysis module 310 can retrieve image data from image sensor 810.
  • the image data may include a large number of frames, an initial depth and/or displacement of each pixel within the frame.
  • the displacement may include the displacement of the wheel and the displacement of the camera relative to the wheel.
  • the initial depth can be set to a zero matrix.
  • if the sensor (set) 230 includes a lidar or a camera with a depth detection function, the depth information can be acquired directly by the sensor (set).
  • analysis module 310 can determine one or more reference frames based on the image data.
  • analysis module 310 can select one or more reference frames from among the frames.
  • a reference frame can be used to construct a map.
  • analysis module 310 can determine depth information and displacement information from one or more reference frames. That is, in order to acquire displacement information and depth information for each frame, the image data may be processed by the analysis module 310.
  • analysis module 310 can generate a map based on one or more reference frames, depth information for the frame, and displacement information.
  • a three-dimensional map may be obtained by concatenating one or more reference frames with corresponding displacements.
  • the map can be determined by a large number of frames and their corresponding displacement information and depth information.
  • the order of steps 1420 and 1430 may be reversed or performed simultaneously.
  • step 1420 may also include the process of determining displacement information and depth information in step 1430 in the process of determining one or more reference frames. That is, step 1430 can be a sub-step of the process of determining one or more reference frames in step 1420.
  • the image data can be processed to obtain one or more results.
  • the one or more results may include displacement information (eg, camera displacement between adjacent two frames) and depth information (eg, depth of one of two adjacent frames).
  • the one or more results may be adjusted by a g2o closed loop detection technique to generate adjusted displacement information.
  • the adjusted displacement information can be used as displacement information to generate a map.
  • Analysis module 310 can generate a map based on one or more reference frames and their corresponding displacement information and depth information.
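To make the map-assembly step above concrete, the sketch below chains per-frame displacements into poses (translation only, for brevity), places each reference frame's depth-derived points at its pose, and spreads any detected loop-closure error linearly along the chain. The last step is only a crude stand-in for the g2o-based adjustment mentioned above, not an implementation of it.

```python
# Simplified, translation-only map assembly; a stand-in sketch, not the
# patent's g2o-based pipeline.
import numpy as np


def chain_poses(displacements):
    """displacements: list of (dx, dy, dz) between consecutive reference frames."""
    poses = [np.zeros(3)]
    for d in displacements:
        poses.append(poses[-1] + np.asarray(d, dtype=float))
    return poses


def correct_drift(poses, loop_error):
    """Distribute an accumulated loop-closure error linearly along the chain."""
    n = len(poses) - 1
    loop_error = np.asarray(loop_error, dtype=float)
    return [p - loop_error * (i / n) for i, p in enumerate(poses)]


def build_point_map(frames_points, poses):
    """frames_points: list of (N_i, 3) arrays of points in each frame's camera
    coordinates (from its depth); returns one merged point cloud."""
    clouds = [np.asarray(pts, dtype=float) + pose
              for pts, pose in zip(frames_points, poses)]
    return np.vstack(clouds)


# Example with two displacements and a small detected loop error.
poses = chain_poses([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
poses = correct_drift(poses, loop_error=(0.05, -0.02, 0.0))
cloud = build_point_map([np.zeros((4, 3))] * 3, poses)
print(cloud.shape)  # (12, 3)
```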
  • Figure 15 is an exemplary flow diagram for determining one or more reference frames, in accordance with some embodiments of the present application. This step can be completed by the analysis module 310, the displacement determination unit 420, and the depth determination unit 430 based on the image data acquired by the image sensor 810. In particular, analysis module 310 can determine one or more reference frames based on one or more results, such as displacement information and depth information.
  • analysis module 310 can obtain image data comprising a plurality of frames, which can include at least one first frame and one second frame.
  • the first frame may be an existing frame and the second frame may be the frame following the first frame. That is, the image sensor 810 can capture the first frame at one point in time and the second frame at the next point in time; the plurality of frames may be adjacent to each other in the time domain.
  • the analysis module 310 can pre-process the images.
  • the pre-processing may include one or any combination of image segmentation, image enhancement, image fusion, image compression, and the like.
  • the methods for segmenting an image may include one or any combination of wavelet transform, Gabor transform, morphological image processing, image frequency-domain processing, histogram-based methods (eg, color-histogram-based, intensity-histogram-based, edge-histogram-based methods, etc.), compression-based methods, region growing, partial-differential-equation-based methods, variational methods, watershed transform, model-based segmentation, multi-scale segmentation, triangulation, co-occurrence matrix methods, edge detection, thresholding, and the like.
  • image enhancement may be an enhancement to one or more properties of the image.
  • the properties of the image include one or any combination of contrast (local or global), brightness (local or global), saturation (local or global), sharpness (local or global), grayscale of the image, and the like.
  • the analysis module 310 can determine one or more historical spatial features from an image. Spatial features may relate to the overall or local pixel intensity (overall or local brightness) of the image, and to the position, length or size of a target (eg, a plane, a protrusion, an obstacle, a channel, etc.) in the image.
  • the identified spatial features may include one or any combination of the area of a target, the location of a target, the shape of a target, the overall or local brightness, the position of a target, the boundary of a target, the edges of a target, the corners of a target, the ridges of a target, blob content, and the like.
  • analysis module 310 can determine one or more historical time features from two or more images.
  • a historical time feature may be a change in certain physical quantities across multiple images or across an image sequence consisting of two or more images.
  • the historical time feature can include one or a combination of any of a time pattern, a motion, a time gradient, and the like.
  • based on a time series analysis of the historical features, the analysis module 310 can determine motion.
  • Time series analysis of historical features can include analysis of images over a period of time. Image analysis over time can reveal motion patterns that exist in multiple static images captured over time. Movement can include translation, rotation, and the like of the object.
  • a motion pattern can indicate recurring seasonality or periodicity. In some embodiments, a moving average or regression analysis can be used.
  • the analysis can use some type of filter (for example, morphological filter, Gaussian filter, unsharp filter, frequency filter, averaging filter, median filter, etc.) on the image data to reduce the error.
  • This analysis can be done in the time domain or in the frequency domain.
  • analysis module 310 can process the image using a particular method to determine one or more features as one or more orthogonal inputs.
  • the particular method can include principal component analysis (PCA), independent component analysis, orthogonal decomposition, singular value decomposition, whitening methods, or spheroidization methods, and the like.
  • the orthogonal input can be linearly uncorrelated.
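As a small illustration of the PCA option mentioned above for producing linearly uncorrelated ("orthogonal") inputs, the sketch below implements PCA directly with NumPy's SVD on stand-in feature data.

```python
# Illustrative PCA via SVD; the feature matrix is random stand-in data.
import numpy as np


def pca_transform(features, n_components):
    """features: (n_samples, n_features) array; returns decorrelated scores."""
    X = features - features.mean(axis=0)           # centre each feature
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components]                 # principal directions
    scores = X @ components.T                      # projections (uncorrelated)
    return scores, components


rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 6))
scores, comps = pca_transform(feats, n_components=3)
# Off-diagonal covariances of the scores are ~0, i.e., linearly uncorrelated.
print(np.round(np.cov(scores.T), 3))
```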
  • analysis module 310 can use the first frame as a reference frame and the second frame as an alternate frame.
  • analysis module 310 can select a reference frame and an alternate frame using a model.
  • the model can include one or any combination of methods, algorithms, procedures, formulas, rules, and the like.
  • the model may include one or any combination of an image segmentation model, an image enhancement model, a user interface model, a workflow model, and the like.
  • the model may include models from big data and artificial intelligence, such as one or any combination of feedforward neural networks (FNN), recurrent neural networks (RNN), Kohonen self-organizing maps, autoencoders, probabilistic neural networks (PNN), time delay neural networks (TDNN), radial basis function networks (RBF), learning vector quantization, convolutional neural networks (CNN), adaptive linear neuron (ADALINE) models, associative neural networks (ASNN), generative adversarial networks (GAN), and the like.
  • an exemplary recurrent neural network may include one or any combination of a Hopfield network, a Boltzmann machine, an echo state network, a long short-term memory network, a bidirectional recurrent neural network, a hierarchical recurrent neural network, a stochastic neural network, and the like.
  • the analysis module 310 can determine one or more first pixels in the reference frame corresponding to one or more second pixels in the candidate frame.
  • if the reference frame and the candidate frame have overlapping regions, the first pixel point and the second pixel point may refer to the same location of an object within the overlapping region of the two frames.
  • the one or more first pixel points can belong to the set of pixel point pairs Ω described in FIG. 4.
  • if the reference frame and the candidate frame have no overlapping region, that is, no region in the reference frame corresponds to any region in the candidate frame, then the pixel points in the reference frame and the candidate frame cannot be selected as the first pixel points and/or the second pixel points.
  • the analysis module 310 can utilize a clustering algorithm to determine one or more second pixels in the candidate frame, and/or one or more first pixels in the reference frame.
  • the clustering method may include a hierarchical clustering method, a partition clustering method, a density clustering method, a model clustering method, a grid clustering method, and a soft computing clustering method.
  • hierarchical clustering methods may include agglomerative and divisive hierarchical clustering, single-linkage clustering, complete-linkage clustering, average-linkage clustering, and the like.
  • partition clustering methods may include error-minimization algorithms (eg, the K-means algorithm, the K-medoids method, the K-prototypes algorithm), graph-theoretic clustering, and the like.
  • density clustering methods may include the expectation-maximization algorithm, density-based spatial clustering of applications with noise (DBSCAN), ordering points to identify the clustering structure (OPTICS), automatic clustering algorithms, the SNOB algorithm, the MCLUST algorithm, etc.
  • the model clustering method may include decision tree clustering, neural network clustering, self-organizing map clustering, and the like.
  • Soft computing clustering methods may include fuzzy clustering, evolution methods for clustering, simulated annealing algorithms for clustering, and the like.
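Purely as an illustration of how a partition-clustering method from the families listed above might be used to group pixel descriptors before matching first and second pixel points, the sketch below implements a tiny K-means; the three-component (x, y, intensity) descriptor is an assumption.

```python
# Toy K-means over pixel descriptors; illustrative only, the patent lists many
# alternative clustering families.
import numpy as np


def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest cluster center.
        labels = np.argmin(
            np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2), axis=1)
        # Move each center to the mean of its assigned descriptors.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers


# Cluster (x, y, intensity) descriptors of pixels from both frames together;
# pixels landing in the same cluster become candidate correspondences.
pixels = np.random.default_rng(1).normal(size=(200, 3))
labels, centers = kmeans(pixels, k=4)
print(np.bincount(labels))
```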
  • the analysis module 310 can determine depth information, intensity information, and/or displacement information for the reference frame and the candidate frame. In some embodiments, a method of determining the depth information, intensity information, and/or displacement information is described in FIG. 16.
  • the analysis module 310 can determine whether the candidate frame is the last frame. Specifically, the analysis module 310 can detect whether a frame following the candidate frame exists in the time domain. If such a next frame exists, the process proceeds to step 1512; otherwise, the process proceeds to step 1514.
  • the analysis module 310 may output the reference frame and the depth and/or displacement corresponding to the reference frame.
  • analysis module 310 can determine the difference between the reference frame and the alternate frame.
  • the difference between the reference frame and the candidate frame may be determined based on the intensity information of the reference frame and the candidate frame.
  • the intensity of the reference frame may be determined by the RGB intensities of the one or more first pixel points.
  • the intensity of the candidate frame may be determined by the RGB intensities of the one or more second pixel points.
  • the intensity information of the reference frame and the candidate frame may be determined in step 1504.
  • the intensity information of the reference frame and the candidate frame may also be determined in step 1514, prior to determining the difference between the reference frame and the candidate frame.
  • analysis module 310 can determine if the difference between the reference frame and the alternate frame is greater than a threshold. If the difference between the reference frame and the alternate frame is greater than the threshold, then the process proceeds to step 1518; otherwise, the process proceeds to step 1520.
  • the analysis module 310 may use the candidate frame as the updated reference frame and the frame following the candidate frame as the updated candidate frame.
  • the frame following the candidate frame may be the frame immediately adjacent to the candidate frame. At this point, the updated reference frame and the updated candidate frame are sent to step 1506, and the process 1500 is repeated.
  • in step 1520, if the difference between the reference frame and the candidate frame is determined to be no greater than the threshold, the analysis module 310 can designate the frame following the candidate frame as the updated candidate frame. At this point, the updated reference frame and the updated candidate frame are sent to step 1506, and the process 1500 is repeated.
  • step 1518 or step 1520 may output a new reference frame and a new alternate frame to be processed by analysis module 310.
  • a new reference frame can be obtained by replacing the reference frame with the candidate frame.
  • a new candidate frame may be obtained by replacing the candidate frame with the frame that follows it; that is, the replacement of the candidate frame is unconditional, while the replacement of the reference frame is conditional.
  • the process 1500 terminates.
  • some conditions for determining termination may be specified.
  • a counter may be used in the process 1500 such that the number of cycles of the process 1500 is no greater than a predetermined threshold.
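A condensed sketch of the reference-frame selection loop (process 1500) described above follows: keep a reference frame, compare each candidate frame against it, and promote the candidate when the intensity difference exceeds a threshold, with a counter guarding the number of cycles. The frame_difference function here is a stand-in for the pixel-pair comparison; the real process compares corresponding pixels found via the displacement and depth estimation.

```python
# Condensed, illustrative version of process 1500 (frame selection).
import numpy as np


def frame_difference(ref, cand):
    """Mean absolute RGB-intensity difference over corresponding pixels.
    Frames are assumed already aligned here; the patent compares pixel
    pairs obtained via the displacement/depth estimation."""
    return float(np.mean(np.abs(ref.astype(float) - cand.astype(float))))


def select_reference_frames(frames, threshold, max_iters=10000):
    reference_frames = []
    ref, rest = frames[0], list(frames[1:])
    reference_frames.append(ref)
    iters = 0
    while rest and iters < max_iters:        # counter guards against endless loops
        iters += 1
        cand = rest.pop(0)
        if frame_difference(ref, cand) > threshold:
            ref = cand                        # candidate becomes the new reference
            reference_frames.append(ref)
        # otherwise keep the reference and move on to the next candidate
    return reference_frames


frames = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (0, 2, 30, 31, 90)]
print(len(select_reference_frames(frames, threshold=10.0)))  # -> 3
```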
  • FIG. 16 is an exemplary flowchart of acquiring depth information and displacement information for a reference frame and/or a candidate frame, in accordance with some embodiments of the present application.
  • the process can be performed by analysis module 310.
  • the process is similar to the method of obtaining the displacement and depth of a frame described in FIG. 4.
  • the analysis module 310 can obtain a first frame and a second frame from among the plurality of frames obtained by the image sensor 810. In some embodiments, the analysis module 310 can select the first frame and the second frame from among a plurality of frames captured by the image sensor. In some embodiments, the first frame and the second frame may be adjacent to each other in the time domain, the first frame may be an existing frame, and the second frame may be a continuous frame.
  • the analysis module 310 can identify one or more first pixel points in the first frame that correspond to one or more second pixel points in the second frame.
  • the pixel points in the first frame that correspond to the pixel points in the second frame can be identified using step 1506 described in FIG. 15.
  • the analysis module 310 can obtain an initial depth based on the one or more first pixel points and one or more second pixel points. In some embodiments, the initial depth can be set to a zero matrix.
  • analysis module 310 can determine an initial displacement based on the one or more first pixel points, one or more second pixel points, and/or an initial depth. For example, step 1640 can be implemented by equation (1) as described in FIG.
  • the analysis module 310 can determine the updated depth based on the one or more first pixel points, the one or more second pixel points, and the initial displacement.
  • step 1650 can be implemented by equation (2) as described in FIG.
  • the analysis module 310 can solve equation (2) with an optimization algorithm to obtain the updated depth.
  • optimization algorithms may include, for example, random search, Newton's method, the quasi-Newton method, the coordinate descent method, the proximal gradient method, the gradient descent method, the steepest descent method, the conjugate gradient method, the biconjugate gradient method, and the like.
  • the analysis module 310 can determine the updated displacement based on the one or more first pixel points, one or more second pixel points, and/or the updated depth.
  • step 1660 can be implemented by equation (1) described in FIG. 4, ie, replacing the updated depth with the initial depth.
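The flow above alternates between equation (1) (depth fixed, solve for the displacement) and equation (2) (displacement fixed, solve for the depth). The skeleton below shows only that alternation, using SciPy's general-purpose minimizer and a toy quadratic stand-in for the photometric error; it is not the patent's solver.

```python
# Alternating refinement skeleton; photometric_error is a toy placeholder for
# sum over Omega of (I_i(x) - I_j(omega(x, D(x), xi)))^2.
import numpy as np
from scipy.optimize import minimize


def photometric_error(xi, depth):
    """Toy coupled quadratic standing in for the real photometric error."""
    return float(np.sum((xi - 0.1 * depth.mean() - 0.3) ** 2)
                 + np.sum((depth - 1.5 - 0.2 * xi.mean()) ** 2))


def alternate_refine(xi0, depth0, rounds=5):
    xi, depth = np.asarray(xi0, float), np.asarray(depth0, float)
    for _ in range(rounds):
        # Equation (1): depth fixed, solve for the displacement xi.
        xi = minimize(lambda p: photometric_error(p, depth), xi).x
        # Equation (2): displacement fixed, solve for the (updated) depth.
        depth = minimize(lambda d: photometric_error(xi, d), depth).x
    return xi, depth


# Zero initial depth, as in the bullets above; zero initial displacement guess.
xi, depth = alternate_refine(xi0=np.zeros(3), depth0=np.zeros(4))
print(np.round(xi, 3), np.round(depth, 3))
```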
  • an initial displacement can be obtained first.
  • determining the initial displacement requires providing an initial value of the displacement.
  • FIG. 17A is an exemplary flow chart for determining an initial value of the displacement, in accordance with some embodiments of the present application. This process can be performed by the analysis module 310 based on the image data obtained by the image sensor 810.
  • image data may be obtained by analysis module 310.
  • the initial value of the displacement can be determined from the image data.
  • the initial value of the displacement can be determined from the displacement in the image data.
  • the displacement in the image data may include the displacement of the motion unit (eg, two wheels) and the displacement of the camera relative to the motion unit during the time interval in which two adjacent frames are acquired.
  • analysis module 310 can obtain a first displacement associated with the motion unit based on the image data.
  • the first displacement associated with the motion unit may be the displacement of the center of the two wheels over a period of time.
  • the first displacement associated with the motion unit can be a displacement of a point over a period of time, the point being configured with a navigation sensor.
  • the navigation sensors can be located at the center of the two wheels, respectively.
  • the time period may be a time interval at which image sensor 810 acquires two frames.
  • the analysis module 310 can acquire a second displacement associated with the image sensor 810 relative to the motion unit.
  • the second displacement may be a relative displacement of image sensor 810 relative to the motion unit.
  • image sensor 810 can be a camera.
  • the analysis module 310 can determine a third displacement associated with the image sensor 810 based on the first displacement and the second displacement.
  • the third displacement can be a vector sum of the first displacement and the second displacement.
  • the third displacement may be an initial displacement value used to determine the initial displacement.
  • the pose of the intelligent robot 110 can be controlled by controlling the rotation angles of the axes in the pan/tilt 930.
  • FIG. 17B is an exemplary flow chart for determining the pose of the intelligent robot 110, in accordance with some embodiments of the present application. This process can be performed by the analysis module 310 based on the rotation angles of the axes of the pan/tilt 930.
  • the image data can be obtained by the analysis module 310.
  • the image data may include a frame, a displacement, and an initial depth.
  • the image data may also include rotation information.
  • analysis module 310 can acquire a first angle of rotation relative to the reference axis.
  • the first rotation angle may be associated with the motion unit and may be obtained based on the image data.
  • the first angle of rotation of the reference axis associated with the motion unit can be obtained from the rotation information in the image data.
  • the first angle of rotation can be an angle over a period of time.
  • the time period is a time interval at which image sensor 810 acquires two frames.
  • the analysis module 310 can acquire, over the period of time, a second angle of rotation of the image sensor relative to the motion unit.
  • the second angle of rotation may be a relative angle of rotation of image sensor 810 relative to the motion unit.
  • image sensor 810 can be a camera.
  • analysis module 310 can determine a third angle of rotation relative to a reference axis that is associated with image sensor 810.
  • the third angle of rotation may be determined based on the first angle of rotation and the second angle of rotation.
  • the third angle of rotation may be a vector sum of the first angle of rotation and the second angle of rotation.
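As described above, FIGS. 17A and 17B reduce to vector sums: the camera's displacement (or rotation about the reference axis) is the motion unit's value plus the camera's value relative to the motion unit, taken over the interval between two frames. A tiny sketch follows, with angles kept as scalars about a single axis for brevity.

```python
# Tiny sketch of the FIG. 17A / FIG. 17B vector sums (illustrative).
import numpy as np


def camera_displacement(wheel_disp, camera_rel_disp):
    """Third displacement = first (wheels) + second (camera relative to wheels)."""
    return np.asarray(wheel_disp, float) + np.asarray(camera_rel_disp, float)


def camera_rotation(wheel_angle, camera_rel_angle):
    """Third rotation angle = first + second, about the same reference axis."""
    return wheel_angle + camera_rel_angle


# Example: wheels moved 0.40 m forward while the gimbal shifted the camera by
# 0.02 m; the wheels turned 5 degrees while the gimbal added 2 degrees.
print(camera_displacement([0.40, 0.0, 0.0], [0.02, 0.0, 0.01]))
print(camera_rotation(np.deg2rad(5.0), np.deg2rad(2.0)))
```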
  • the motion module 920 and the pan/tilt 930 can be configured with sensors (sets) 230 to obtain information.
  • the sensors (sets) 230 can be located in the carrier 1010 or in a smartphone supported by the pan/tilt 930.
  • the motion module 920 and the pan/tilt 930 may require a full range of stabilization measures to obtain accurate and reliable information. A method of balancing the motion module 920 and the pan/tilt 930 with respect to the horizontal plane is specifically described in the description of FIG. 18.
  • the horizontal plane may be the mounting surface of the carrier 1010, and the angle between the horizontal plane and the Z-axis may be determined based on gyroscope data and accelerometer data.
  • the horizontal plane may be a relative plane in which the pan/tilt 930 detects the pitch angle of the pan/tilt 930.
  • the system can include an adder 1810, an integrator 1820, a component extractor 1830, and an adder 1840.
  • the adder 1810, integrator 1820, component extractor 1830, and adder 1840 can form a feedback loop for determining the output angle.
  • the integrator 1820 can acquire the angle between the horizontal plane and the Z-axis for each frame obtained by the image sensor 810. It is assumed that the image sensor 810 obtains the first frame at time t1 and the second frame at time t2. Then, at times t1 and t2, the gyroscope 830 and the accelerometer 820 can obtain angular velocity and angle information.
  • the feedback output angle θ1 associated with the first frame obtained at time t1 may be used to determine the output angle θ2 associated with the second frame obtained at time t2.
  • the gyroscope data and accelerometer data of the first frame can be processed at time t1.
  • the integrator 1820 can generate an output angle θ1 associated with the first frame.
  • the accelerometer 820 can generate a first angle θ1'.
  • the adder 1840 can generate a second angle θ1'' based on the output angle θ1 and the first angle θ1'.
  • the second angle θ1'' may be obtained by a vector subtraction between the output angle θ1 and the first angle θ1'.
  • the component extractor 1830 may determine the compensation angular velocity ω1'' based on the second angle θ1''. In some embodiments, the component extractor 1830 can be a differentiator.
  • the gyro data and accelerometer data for the second frame can then be processed at time t2.
  • the gyroscope 830 can generate an angular velocity ω2.
  • the adder 1810 can generate the corrected angular velocity ω2' from the angular velocity ω2 and the compensation angular velocity ω1''.
  • the corrected angular velocity ω2' can be the vector sum of the angular velocity ω2 and the compensation angular velocity ω1''.
  • the integrator 1820 can output the angle θ2 between the horizontal plane and the Z-axis associated with the second frame at time t2, based on the corrected angular velocity ω2'.
  • the method described in FIG. 18 can be performed by processor 210.
  • gyroscope data and accelerometer data can be transmitted to the processor 210 (eg, a portion of a smartphone) via an API interface.
  • the processor 210 can determine an output angle when each frame is obtained.
  • the angle between the horizontal plane and the Z-axis can be detected as each frame is acquired. The balance of the system on a horizontal plane can be maintained based on the real-time output angle associated with each frame.
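The FIG. 18 loop described above can be read as a complementary-filter-style estimator: correct the gyroscope rate with a compensation term derived from the difference between the accelerometer angle and the previous output angle, then integrate to obtain the angle for the current frame. The sketch below shows that reading; the gain k and the scalar formulation are assumptions rather than the patent's exact adder/integrator/component-extractor arrangement.

```python
# Complementary-filter-style reading of the FIG. 18 feedback loop (sketch only).
class TiltEstimator:
    def __init__(self, k=5.0):
        self.output_angle = 0.0   # theta for the previous frame, radians
        self.compensation = 0.0   # compensation angular velocity omega''
        self.k = k                # assumed tuning gain

    def update(self, gyro_rate, accel_angle, dt):
        # Adder 1810: corrected rate = gyro rate + compensation from last frame.
        corrected_rate = gyro_rate + self.compensation
        # Integrator 1820: integrate the corrected rate to get this frame's angle.
        self.output_angle += corrected_rate * dt
        # Adder 1840 + component extractor 1830: turn the angle error between
        # the accelerometer angle and the output angle into a compensation rate.
        self.compensation = self.k * (accel_angle - self.output_angle)
        return self.output_angle


est = TiltEstimator()
for _ in range(50):                 # stationary robot tilted by 0.1 rad
    angle = est.update(gyro_rate=0.0, accel_angle=0.1, dt=0.02)
print(round(angle, 3))              # ~0.099, converging toward the true tilt of 0.1
```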
  • FIG. 19 is an exemplary flow diagram of a flow 1900 of determining an angle associated with a frame.
  • the flow 1900 is performed by the processor 210.
  • the processor 210 can acquire a plurality of frames including the first frame and the second frame.
  • the first frame and the second frame may be captured by the image sensor 810 at an interval of time. For example, the image sensor 810 captures the first frame at time t1 and the second frame at time t2; the period between time t1 and time t2 may be the sampling interval of the image sensor 810.
  • the processor 210 can acquire gyroscope data and accelerometer data associated with the first frame and/or the second frame.
  • the gyroscope data and accelerometer data can include parameters such as angular velocity and angle.
  • processor 210 may determine the first angle information.
  • the first angle information can include a first angle.
  • processor 210 may determine compensation angle information based on the first angle information and angle information associated with the first frame.
  • the angle information associated with the first frame may be an output angle associated with the first frame.
  • the compensation angle information may be obtained by subtracting, as vectors, the output angle associated with the first frame from the first angle information.
  • the compensation angle information may be a compensation angular velocity. The compensation angular velocity may be determined by the component extractor 1830 based on the operation that subtracts the output angle associated with the first frame from the first angle information.
  • the processor 210 can determine the second angle information.
  • the second angle information may be the angle between the horizontal plane and the Z-axis associated with the second frame, as detected by the processor 210.
  • the output angle associated with the second frame can in turn be fed back, in the same manner as the output angle associated with the first frame.
  • the output angle of each frame can be obtained by the processor 210.
  • when the angle between the horizontal plane and the Z-axis exceeds a certain threshold, a balance control signal can be generated.
  • a method of maintaining the horizontal balance of the motion module 920 or the pan/tilt 930 is shown in FIGS. 18 and 19.
  • sensors installed in the smartphone held by the pan/tilt 930 can acquire information.
  • the information can include image data, gyroscope data, accelerometer data, and data acquired from other sensors.
  • for the second type of sensor 1240 in the smartphone, it is necessary for the processor 210 to maintain the horizontal balance.
  • the road may be uneven, so the information may not be obtained stably.
  • for the sensors in the smartphone to obtain stable information, balance along the vertical axis is also necessary.
  • FIG. 20 is a flow diagram of an exemplary method 2000 of adjusting the vertical displacement of the second type of sensor 1240 in a smartphone.
  • the method can be performed by processor 210 to control dynamic Z-buffer bar 1120 as shown in FIG. 11 in accordance with control parameters generated by intelligent robot control module 330.
  • the processor 210 can acquire a first displacement of the motor along the axis of rotation.
  • the axis of rotation may be the Z axis and the first displacement may be a vector along the Z axis.
  • the processor 210 can determine if the displacement of the motor along the Z-axis is greater than a threshold.
  • the threshold may be a limit value within which the second type of sensor 1240 is capable of stably acquiring information.
  • processor 210 may generate a first control signal to move the motor to an initial position.
  • the initial location may be a preset location suitable for obtaining information.
  • the processor 210 may output a first control signal to the motor to cause the second type of sensor 1240 installed in the smartphone to return to the initial position to obtain stable information.
  • processor 210 may acquire a first acceleration along the axis of rotation.
  • the acceleration can be obtained by an accelerometer 820 mounted in the smartphone.
  • the processor 210 can generate a second acceleration based on the first acceleration.
  • the second acceleration may be the acceleration after the first acceleration filtering.
  • processor 210 may determine a second displacement based on the second acceleration.
  • the second displacement can be calculated from the integrated value of the second acceleration.
  • the second displacement can be a vector along the Z axis.
  • processor 210 may generate a second control signal to control movement of the motor based on the second displacement.
  • the second control signal may be determined from the remaining displacement gap (the remaining movable range), which is based on the second displacement and the threshold; the processor 210 may then control the sensor in the smartphone to move along the Z axis.
  • the processor 210 may output the second control signal to the motor.
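Flow 2000 above can be summarized as: if the motor has drifted past the allowed Z-axis range, command it back to the initial position; otherwise filter the Z acceleration, integrate it twice to estimate the residual displacement, and command a compensating move within the remaining range. The sketch below follows that summary; the gains, filter constant, and Motor interface are assumptions for illustration.

```python
# Condensed, illustrative sketch of flow 2000 (Z-axis stabilization).
class ZAxisStabilizer:
    def __init__(self, motor, limit=0.03, alpha=0.2, dt=0.01):
        self.motor = motor      # assumed interface: move_to(z), move_by(dz), position()
        self.limit = limit      # metres of allowed travel around the initial position
        self.alpha = alpha      # smoothing factor of the acceleration filter (assumed)
        self.dt = dt
        self.filtered_acc = 0.0
        self.velocity = 0.0

    def step(self, raw_acc_z):
        # First displacement check: has the motor drifted past the threshold?
        if abs(self.motor.position()) > self.limit:
            self.motor.move_to(0.0)           # first control signal: back to the initial position
            self.velocity = 0.0
            return
        # Filter the acceleration and integrate twice to get a second displacement.
        self.filtered_acc = (1 - self.alpha) * self.filtered_acc + self.alpha * raw_acc_z
        self.velocity += self.filtered_acc * self.dt
        second_displacement = self.velocity * self.dt
        # Second control signal: move within the remaining gap to cancel the motion.
        remaining = self.limit - abs(self.motor.position())
        correction = max(-remaining, min(remaining, -second_displacement))
        self.motor.move_by(correction)


class FakeMotor:
    def __init__(self):
        self.z = 0.0
    def position(self):
        return self.z
    def move_to(self, z):
        self.z = z
    def move_by(self, dz):
        self.z += dz


stab = ZAxisStabilizer(FakeMotor())
for acc in (0.5, 0.5, -0.5, -0.5):   # a small vertical bump
    stab.step(acc)
print(round(stab.motor.position(), 5))
```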

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

An intelligent wheelchair system and method based on big data and artificial intelligence. The intelligent wheelchair system includes a processor (210), a motion module (920) and a pan/tilt (930). The processor (210) performs operations such as receiving information, constructing a map, planning a route and generating control parameters; the motion module (920) executes the control parameters to move around its surroundings and includes sensors (1220) to sense information; the pan/tilt (930) includes sensors (1240) to sense information.

Description

一种基于大数据及人工智能的智能轮椅系统 技术领域
本发明涉及一种基于大数据及人工智能的智能轮椅系统及其控制方法。具体的,涉及一种基于大数据及人工智能的移动智能机器人以及控制图像检测与处理、路径搜索和机器人移动的控制方法。
背景技术
在日常生活中,能够移动的智能设备例如清扫机器人和智能平衡轮等越来越常见。为了在所处区域内提供服务,智能机器人系统可以基于现有的地图识别环境并自动移动。随着人们对服务需求的快速扩张,人们期望可以更新地图、规划路径和自动移动的多功能智能机器人系统,尤其是能够适应更多复杂区域的智能机器人。
另外,随着社会老龄化进程的加快以及由于各种疾病、工伤、交通事故等原因造成下肢损伤的人数的增加,为老年人和残疾人提供性能优越的代步工具已成为整个社会重点关注的问题之一。智能轮椅作为一种服务机器人,具有自主导航、避障、人机对话以及提供特种服务等多种功能,可以为有认知障碍的残疾人(如,痴呆病人等)、有运动障碍的残疾人(如,脑瘫病人、四肢瘫痪病人等)、老年人等提供安全便捷的生活方式,大大提高他们的日常生活和工作质量,使他们重新获得生活自理能力和融入社会成为可能。作为机器人技术的一种应用平台,智能轮椅上融合了机器人研究领域的多种技术,包括机器人导航和定位、机器视觉、模式识别、多传感器信息融合以及人机交互等等。
简述
本发明一方面涉及的是一个智能轮椅系统,所述智能轮椅系统包括存储指令的存储器和与所述存储器通信的处理器。在执行指令时,所述处理器可以通过通信端口与运动模块和云台建立通信;所述处理器可以从运动模块和云台的传感器中获取信息以构建地图;所述处理器还可以基于所述信息规划路径,以及基于所述信息生成控制参数。
本发明另一方面涉及的是一种方法,所述方法可以包括通过通信端口与运动模块和云台建立通信;所述方法可以包括从运动模块和云台的传感器中获取信息以构建地图;所述方法还可以包括基于所述信息规划路径,以及基于所述信息生成控制参数。
本发明的又一方面涉及的是永久性的计算机可读媒介,具体表现为计算机程序产品。所述计算机程序产品包括用于在处理器和运动模块之间,以及处理器和云台之间建立通信的通信端口。所述通信端口可以采用应用程序接口(Application Program Interface(API))建立通信。
附图说明
本方法、系统和/或程序以实施例的形式被进一步描述。这些典型实施例参照附图进行描述。这些实施例为不限制本发明的实施例,这些实施例中的标号代表其它视角下的相同结构的标号。
图1是根据本申请的一些实施例所示的一种智能轮椅系统的示意图;
图2是根据本申请的一些实施例所示的图1中机器人控制系统中的机器人的示意框图;
图3是根据本申请的一些实施例所示的图2中的机器人中处理器的示意框图;
图4是根据本申请的一些实施例所示的图3中的处理器中分析模块的示意框图;
图5是根据本申请的一些实施例所示的处理器中导航模块的示意框图;
图6是根据本申请的一些实施例所示的运动控制示意图;
图7是根据本申请的一些实施例所示的运动控制示意图;
图8是根据本申请的一些实施例所示的图2中的传感器结构示意图;
图9是根据本申请的一些实施例所示的图2中的机身示意图;
图10是根据本申请的一些实施例所示的运动模块示意图;
图11是根据本申请的一些实施例所示的图9中的云台结构示意图;
图12是根据本申请的一些实施例所示的机器人系统;
图13是根据本申请的一些实施例所示的确定控制机器人的控制参数的流程图;
图14是根据本申请的一些实施例所示的构建地图的流程图;
图15是根据本申请的一些实施例所示的确定一个或多个参考帧的流程图;
图16是根据本申请的一些实施例所示的获得深度信息、强度信息和位移信息的流程图;
图17A是根据本申请的一些实施例所示的确定位移初值的流程图;
图17B是根据本申请的一些实施例所示的确定机器人姿势的流程图;
图18是根据本申请的一些实施例所示的陀螺仪和加速度计确定水平面与Z轴夹角的示意框图;
图19是根据本申请的一些实施例所示的确定参考帧对应角度的流程图;
图20是根据本申请的一些实施例所示的调节智能设备中传感器中垂直方向运动的流程图。
具体实施例
在下面的详细描述中,通过示例阐述了本披露的许多具体细节,以便提供对相关披露的透彻理解。然而,对于本领域的普通技术人员来讲,本披露显而易见的可以在没有这些细节的情况下实施。在其他情况下,本披露中的公知的方法、过程、系统、组件和/或电路已经在别处以相对高的级别进行了描素,本披露中对此没有详细地描述,以避免不必要地重复。
应当理解的是,本披露中使用“系统”、“装置”、“单元”和/或“模块”术语,是用于区分在顺序排列中不同级别的不同部件、元件、部分或组件的一种方法。然而,如果其他表达式可以实现相同的目的,这些术语可以被其他表达式替换。
应当理解的是,当设备、单元或模块被称为“在……上”、“连接到”或“耦合到”另一设备、单元或模块时,其可以直接在另一设备、单元或模块上,连接或耦合到或与其他设备、单元或模块通信,或者可以存在中间设备、单元或模块,除非上下文明确提示例外情形。例如,本披露所使用的术语“和/或”包括一个或多个相关所列条目的任何一个和所有组合。
本披露所用术语仅为了描述特定实施例,而非限制本披露范围。如本披露说明书和权利要求书中所示,除非上下文明确提示例外情形,“一”、“一个”、“一种”和/或“该”等词并非特指单数,也可包括复数。一般说来,术语“包括”与“包含”仅提示包括已明确标识的特征、整体、步骤、操作、元素和/或组件,而该类表述并不构成一个排它性的罗列,其他特征、整体、步骤、操作、元素和/或组件也可以包含在内。
参看下面的说明以及附图,本披露的这些或其他特征和特点、操作方法、结构的相关元素的功能、部分的结合以及制造的经济性可以被更好地理解,其中说明和附图形成了说明书的一部分。然而,可以清楚地理解,附图仅用作说明和描述的目的,并不意在限定本披露的保护范围。可以理解的是,附图并非按比例绘制。
此外,本披露仅描述了与确定智能机器人状态相关的系统及方法,可以理解的是,本披露中的描述仅仅是一个实施例。该智能机器人系统或方法也可以应用到除智能机器人以外的其他任何类型的智能设备或汽车中。例如,智能机器人系统或方法可以应用于不同的智能设备系统中,这些智能设备系统包括摆轮、无人地面车辆(UGV)、智能轮椅等中的一种或者任意几种组合。智能机器人系统还可以应用到包括应用管理和/或分发的任何智能系统,例如用于发送和/或接收快递,以及将人员或货物运载到某些位置的系统。
本披露的术语“机器人”、“智能机器人”、“智能设备”可互换地用于指代可移动和自动操作的装置、设备或工具。本披露中的术语“用户设备”可以指可以用于请求服务、订购服务或促进服务的提供的工具。本披露中的术语“移动终端”可以指可由用户使用以控制智能机器人的工具或接口。
本披露中使用的定位技术包括全球定位系统(GPS)技术、全球导航卫星系统(GLONASS)技术、罗盘导航系统(COMPASS)技术、伽利略定位系统(Galileo)技术、准天顶卫星系统(QZSS)技术、无线保真(WiFi)定位技术等中的一种或者任意几种组合。上述定位技术中的一种或多种可以在本披露中可互换地使用。
本披露描述了作为示例性系统的智能轮椅系统100以及为智能轮椅系统100构建地图和规划路线的方法。本披露的方法和系统旨在基于,例如由智能轮椅系统100获得的信息,构建地图。所获得的信息可以由位于智能轮椅系统100中的传感器(组)捕获。传感器(组)可以是光学或磁电型。例如,传感器可以是相机或激光雷达。
根据本申请的一些实施例,图1所示的是智能轮椅系统100的一种示例性示意图。智能轮椅系统100可以包括一个智能机器人110、一个网络120、一个用户设备130和一个数据库140。用户可以通过网络120使用用户设备130来控制智能机器人110。
智能机器人110和用户设备130可以建立通信。智能机器人110和用户设备130之间的通信可以是有线或无线的。例如,智能机器人110可以经由网络120建立与用户设备130或数据库140的通信,并且可以基于来自用户设备130的操作命令 (例如,移动或旋转的命令)无线地控制智能机器人110。再例如,智能机器人110可以通过电缆或光纤直接连接到用户设备130或数据库140。在一些实施例中,智能机器人110可以基于智能机器人110和数据库140之间的通信来更新或下载存储在数据库140中的地图。例如,智能机器人110可以在路线中捕获信息,且可以分析信息以构建地图。在一些实施例中,完整的地图可以存储在数据库140中。在一些实施例中,由智能机器人110构建的地图可以包括与完整地图的一部分相对应的信息。在一些实施例中,可以通过构建的地图更新完整的地图的对应部分。当智能机器人110确定其目的地和当前位置时,存储在数据库140中的完整的地图可以由智能机器人110访问。包含智能机器人110的目的地和当前位置的完整地图的一部分可以由智能机器人110选择用于规划路线。在一些实施例中,智能机器人110可以基于所选择的地图、智能机器人110的目的地和当前位置来规划路线。在一些实施例中,智能机器人110可以采用用户设备130的地图。例如,用户设备130可以从因特网下载地图。用户设备130可以基于从因特网下载的地图来指导智能机器人110的运动。再例如,用户设备130可以从数据库140下载最新的地图。一旦确定了智能机器人110的目的地和当前位置,用户设备130就可以将从数据库140获得的地图发送到智能机器人110。在一些实施例中,用户设备130可以是智能机器人110的一部分。在一些实施例中,如果智能机器人110构建的地图包括其目的地和当前位置,智能机器人110可以基于由其自身构建的地图来规划路线。
网络120可以是单个网络或不同网络的组合。例如,网络120可以是局域网(LAN)、广域网(WAN)、公共网络、专用网络、无线局域网(WLAN)、虚拟网络、城域网(MAN)、公共电话交换网络(PSTN)或其任意组合。例如,智能机器人110可以经由蓝牙与用户设备130和数据库140通信。网络120还可以包括各种网络接入点。例如,诸如基站或因特网交换点的有线或无线接入点可以包括在网络120中。用户可以从用户设备130向智能机器人110发送控制操作并且经由网络120接收结果。智能机器人110可以直接或经由网络120访问存储在数据库140中的信息。
可连接到网络120的用户设备130可以是移动设备130-1、平板电脑130-2、笔记本电脑130-3、内置设备130-4等中的一种或者其任意几种组合。在一些实施例中,移动设备130-1可以包括可穿戴设备、智能移动设备,虚拟现实设备、增强现实设备等中的一种或其任意几种组合。在一些实施例中,用户可以通过可穿戴设备控制智能机器人110,可穿戴设备可以包括智能手环、智能鞋袜、智能眼镜、智能头盔、 智能手表、智能服装、智能背包、智能配件等中的一种或者其任意几种组合。在一些实施例中,智能移动设备可以包括智能电话、个人数字助理(PDA)、游戏设备、导航设备,销售点(POS)设备等中的一种或者其任意几种组合。在一些实施例中,虚拟现实设备和/或增强现实设备可以包括虚拟现实头盔、虚拟现实眼镜、虚拟现实贴片、增强现实头盔、增强现实玻璃、增强现实眼罩等中的一种或者任意几种组合。例如,虚拟现实设备和/或增强现实设备可以包括Google Glass,Oculus Rift,HoloLens,Gear VR等。在一些实施例中,内置设备130-4可以包括车载电脑、车载电视等。在一些实施例中,用户设备130可以是具有为用户和/或与用户相关联的用户设备130的位置定位的定位技术的设备。例如,可以由智能机器人110基于地图、智能机器人110的目的地和当前位置来确定路线。智能机器人110的位置可以通过用户设备130获得。在一些实施例中,用户设备130可以是具有图像捕获能力的设备。例如,可以基于由图像传感器(例如,相机)捕获的信息来更新存储在数据库140中的地图。在一些实施例中,用户设备130可以是智能机器人110的一部分。例如,具有相机、陀螺仪和加速度计的智能手机可以由智能机器人110的云台夹持。用户设备130可以用作传感器以检测信息。再例如,处理器210和存储器220可以是智能手机的一些部分。在一些实施例中,用户设备130还可以为智能机器人110的用户充当通信接口。例如,用户可以触摸用户设备130的屏幕以选择智能机器人110的控制操作。
数据库140可以存储完整的地图。在一些实施例中,可以存在与数据库140无线连接的多个智能机器人。与数据库140连接的每个智能机器人可以基于由其传感器捕获的信息来构建地图。在一些实施例中,由智能机器人构建的地图可以是完整地图的一部分。在更新过程中,构建的地图可以替换完整的地图中的相应区域。当路线需要从智能机器人110的位置规划到目的地时,每个智能机器人可以从数据库140下载地图。在一些实施例中,从数据库140下载的地图可以是完整的地图的一部分,该部分至少包括智能机器人110的位置和目的地。数据库140还可以存储与连接到智能机器人110的用户有关的历史信息。该历史信息可以包括,例如用户的先前操作或与智能机器人110如何操作有关的信息。如图1所示,数据库140可以由智能机器人110和用户设备130访问。
应当注意,上述智能轮椅系统100仅仅为了描述该系统的特定实施例的一个示例,而非限制本披露范围。
根据本申请的一些实施例,图2所示的是图1所示的智能轮椅系统100中的示例性智能机器人110的框图。智能机器人110可以包括处理器210、存储器220、传感器(组)230、通信端口240、输入/输出接口250和机身260。传感器(组)230可以获取信息。在一些实施例中,信息可以包括图像数据、陀螺仪数据、加速度计数据、位置数据和距离数据。处理器210可以处理信息以生成一个或多个结果。在一些实施例中,一个或多个结果可以包括位移信息和深度信息(例如,相邻两个帧间的相机的位移,两个相邻帧中的对象的深度)。在一些实施例中,处理器210可以基于一个或多个结果构建地图。处理器210还可以将地图传送到数据库140以进行更新。在一些实施例中,处理器210可以包括一个或多个处理器(例如,单核处理器或多核处理器)。仅作为示例,处理器210可以包括中央处理单元(CPU)、专用集成电路(ASIC)、专用指令集处理器(ASIP)、图形处理单元(GPU)、物理处理单元(PPU)、数字信号处理器(DSP)、现场可编程门阵列(FPGA)、可编程逻辑器件(PLD)、控制器、微控制器单元、精简指令集计算机、微处理器等中的一种或者其任意几种组合。
存储器220可以存储用于处理器210的指令,并且当执行指令时,处理器210可以执行本披露中描述的一个或多个功能或操作。例如,存储器220可以存储由处理器210执行的指令,这些指令用来处理由传感器(组)230获得的信息。在一些实施例中,处理器220可以自动存储由传感器(组)230获得的信息。存储器220还可以存储由处理器210生成的一个或多个结果(例如,用于构建地图的位移信息和/或深度信息)。例如,处理器210可以生成一个或多个结果并将其存储在存储器220中,并且一个或多个结果可以由处理器210从存储器220中读取以构建地图。在一些实施例中,存储器220可以存储由处理器210构建的地图。在一些实施例中,存储器220可以存储由处理器210从数据库140或用户设备130获得的地图。例如,存储器220可以存储由处理器210构建的地图,然后可以将构建的地图发送到数据库140以更新完整的地图的对应部分。再例如,存储器220可以临时存储由处理器210从数据库140或用户设备130下载的地图。在一些实施例中,存储器220可以包括大容量存储器、可移动存储器、易失性读写存储器、只读存储器(ROM)等中的一种或者其任意几种组合。示例性大容量存储器可以包括磁盘、光盘和固态驱动器等。示例性可移动存储器可以包括闪存驱动器、软盘、光盘、存储卡、压缩盘和磁带等。示例性易失性读写存储器可以包括随机存取存储器(RAM)。示例性RAM可以包括动态RAM(DRAM)、双日期速率同步动态RAM(DDR SDRAM)、静态RAM(SRAM)、晶闸管RAM(T- RAM)和零电容器RAM(Z-RAM)等。示例性ROM可以包括掩模ROM(MROM)、可编程ROM(PROM)、可擦除可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)、光盘ROM(CD-ROM)数字多功能盘ROM。
传感器(组)230可以包括能够获得来自物体或障碍物的图像数据、陀螺仪数据、加速度计数据、位置数据、距离数据以及可由智能机器人110使用以执行本披露中描述的各种功能的任何其它数据。例如,传感器(组)230可以包括用于在低光环境下获得图像数据的一个或多个夜视摄影机。在一些实施例中,由传感器(组)230获得的数据和/或信息可以存储在存储器220中并且可以由处理器210处理。在一些实施例中,一个或多个传感器(组)230可以安装在机身260中。更具体地,例如,一个或多个图像传感器可以安装在机身260的云台中。一个或多个导航传感器、螺旋仪和加速度计可以安装在云台和运动模块中。在一些实施例中,传感器(组)230可以在处理器210的控制下自动探索环境并检测位置。例如,传感器(组)230可以用于动态地感测或检测物体、障碍物等的位置。
通信端口240可以是用于在智能机器人110内通信的端口。也就是说,通信端口240可以在智能机器人110的组件之间交换信息。在一些实施例中,通信端口240可以将处理器210的信号/数据/信号发送到智能机器人110的内部部分,以及从智能机器人110的内部部分接收信号。例如,处理器210可以从安装在机身260的传感器(组)接收信息。再例如,处理器210可以通过通信端口240将控制操作发送到机身260。发送-接收的处理过程可以通过通信端口240来实现。通信端口240可以根据某些无线通信规范接收各种无线信号。在一些实施例中,通信端口240可以被提供为用于诸如Wi-Fi、蓝牙、红外(IR)、超宽带(UWB)、ZigBee等的已知无线局域通信的通信模块,或者作为诸如3G、4G或长期演进(LTE)的移动通信模块,或者作为用于有线通信的已知通信方法。在一些实施例中,在一些实施例中,通信端口240不限于用于从内部设备发送/接收信号的元件,并且可以被用于交互式通信的接口。例如,通信端口240可以通过使用应用程序接口(API)的电路在处理器210和智能机器人110的其他部分之间建立通信。在一些实施例中,用户设备130可以是智能机器人110的一部分。在一些实施例中,处理器210和用户设备130之间的通信可以由通信端口240执行。
输入/输出接口250可以是用于智能机器人110与诸如数据库140的其他外部设备之间的通信的接口。在一些实施例中,输入/输出接口250可以控制与智能机器人 110的数据传输。例如,最新的地图可以从数据库140发送到智能机器人110。再例如,基于传感器(组)230获得的信息构建的地图可以从数据库140发送到智能机器人110。输入/输出接口250还可以包括各种附加元件,诸如用于无线通信的无线通信模块(未示出)或用于调节广播信号的调谐器(未示出),这取决于智能机器人110的设计类型以及用于从外部输入接收信号/数据元件。输入/输出接口250可以被用于已知无线局域通信的通信模块,诸如Wi-Fi、蓝牙、红外(IR)、超宽带(UWB)、ZigBee等,或者作为诸如3G,4G或长期演进(LTE)的移动通信模块,或者作为用于有线通信的已知输入/输出接口。在一些实施例中,输入/输出接口250可以被提供为用于诸如光纤或通用串行总线(USB)的已知有线通信的通信模块。例如,智能机器人110可以经由USB接口与计算机的数据库140交换数据。
机身260可以是用于保存处理器210、存储器220、传感器230、通信端口240和输入/输出接口250的主体。机身260可以执行来自处理器210的指令以移动和旋转传感器(组)230以获得或检测区域的信息。在一些实施例中,机身260可以包括运动模块和云台,参见本披露中其他部分(如图9及其描述)对机身260的描述。在一些实施例中,传感器(组)可以分别安装在运动模块和云台中。
根据本申请的一些实施例,图3所示的一种处理器210的示例性结构图。如图3所示,处理器210可以包括分析模块310、导航模块320和智能机器人控制模块330。
分析模块310可以分析从传感器(组)230获得的信息并生成一个或多个结果。分析模块310可以基于一个或多个结果构建地图。在一些实施例中,构建的地图可以被发送到数据库140。在一些实施例中,分析模块310可以从数据库140中接收最新的地图并将其发送到导航模块320。导航模块320可以规划从智能机器人110的位置到目的地的路线。在一些实施例中,完整的地图可以被保存在数据库140中。分析模块310构建的地图可以对应于完整地图的一部分。更新过程可以用构建的地图替换完整地图的对应部分。在一些实施例中,由分析模块310构建的地图可以是最新的,并且包括智能机器人110的位置和目的地。分析模块310可以不从数据库140接收地图。由分析模块310构建的地图可以被传送到导航模块320以规划路线。智能机器人控制模块330可以基于导航模块320规划的路线生成智能机器人110的控制参数。在一些实施例中,控制参数可以暂时存储在存储器220中。在一些实施例中,控制参数可以被发送到智能机器人机身260以控制智能机器人110的运动,参见本披露中其他部分(如图6、图7及其描述)对控制参数的描述。
根据本申请的一些实施例,图4是图3所示的处理器210中的示例性分析模块310的结构示意图。在一些实施例中,分析模块310可以包括图像处理单元410、位移确定单元420、深度确定单元430、闭环控制单元440和对象检测单元450。
图像处理单元410可以处理图像数据以执行智能机器人110的一个或多个功能。图像数据可以包括例如一个或多个图像(例如,静止图像、视频帧等)、每个帧中的每个像素点的初始深度和位移,和/或与一个或多个图像相关的任何其它数据。在一些实施例中,位移可以包括在拍摄两个相邻帧的时间间隔之间的轮子的位移和相机相对于轮子的位移。图像数据可以由能够提供图像数据的任何设备提供,诸如传感器(组)230(例如,一个或多个图像传感器)。在一些实施例中,图像数据可以包括关于多个图像的数据。图像可以包括视频帧序列(也称为“帧”)。每个帧可以是一个帧、一个字段等。
在一些实施例中,图像处理单元410可以处理图像数据以生成智能机器人110的运动信息。例如,图像处理单元410可以处理两个帧(例如,第一帧和第二帧)以确定两帧之间的差异。图像处理单元410然后可以基于帧之间的差异产生智能机器人110的运动信息。在一些实施例中,第一帧和第二帧可以是相邻帧(例如,当前帧和先前帧、当前帧和后续帧等)。另外,第一帧和第二帧也可以是不相邻帧。更具体地,例如,图像处理单元410可以确定第一帧和第二帧中的一个或多个对应像素点以及包括对应像素点(也称为“重叠区域”)的一个或多个区域。响应于确定的同一对象的第一像素点和第二像素点,图像处理单元410可以将第一帧中的第一像素点确定为第二帧中的第二像素点的对应像素点。第二帧中的第一像素点及其对应像素点(例如,第二像素点)可以对应于相对的一个对象的相同位置。在一些实施例中,图像处理单元410可以识别第一帧中在第二帧中不具有对应像素点的一个或多个像素点。图像处理单元410可进一步识别包括所识别像素点的一个或多个区域(也称为“非重叠区域”)。非重叠区域可以对应于传感器(组)230的运动。在一些实施例中,在进一步处理(例如,通过位移确定单元420和/或深度确定单元430的处理),在第二帧中没有对应像素点的第一帧中的非重叠区域的像素点可以省略。
在一些实施例中,图像处理单元410可以识别第一帧中的像素点和第二帧中的对应像素点的强度。在一些实施例中,可以获得第一帧中的像素点和第二帧中的对应像素点的强度作为确定第一帧和第二帧之间的差的标准。例如,可以选择RGB强度作为确定第一帧和第二帧之间的差异的标准。像素点、对应像素点和RGB强度可 以被发送到位移确定单元420和/或深度确定单元430,用于确定第二帧的位移和深度。在一些实施例中,深度可以表示两个帧中的对象的空间深度。在一些实施例中,位移信息可以是一组帧的位移的集合。在一些实施例中,深度信息可以是一组帧的深度。帧、位移信息和深度信息可以用于构建地图。
位移确定单元420可基于由图像处理单元410提供的数据和/或任何其它数据来确定位移信息。位移信息可以包括可以表示生成图像数据的传感器(组)230(例如,捕获多个帧的图像传感器)的运动信息的一个或多个位移。例如,位移确定单元420可以获得两个帧(例如第一帧和第二帧)中的对应像素点的数据。数据可以包括对应像素点的一个或多个值,诸如像素点的灰度值,强度等。位移确定单元420可基于任何合适的颜色模型(例如,RGB(红色、绿色和蓝色)模型,HSV(色调、饱和度和亮度)模型等)确定像素点的值。在一些实施例中,位移确定单元420可以确定两个帧中的对应像素点对之间的差值。例如,图像处理单元410可以识别第一帧中的第一像素点及其在第二帧中的对应像素点(例如,第二像素点),可以基于第一像素点的坐标的变换来确定第二像素点。第一像素点和第二像素点可以对应于相同的对象。位移确定单元420还可以确定第一像素点的值和第二像素点的值之间的差。在一些实施例中,可以通过最小化第一帧和第二帧中的对应像素点对之间的差的和来确定位移。
在一些实施例中,位移确定单元420可以确定表示位移的原点估计值的初始位移ξji,1。例如,初始位移ξji,1可以基于如下的公式(1)来确定:
$$\xi_{ji,1}=\arg\min_{\xi_{ji}}\sum_{x\in\Omega}\Big(I_i(x)-I_j\big(\omega(x,D_i(x),\xi_{ji})\big)\Big)^2 \qquad (1)$$
其中,x表示所述第一帧中的像素点的坐标;ω(x,Di(x),ξji)表示第二帧中对应像素点的坐标,ω(x,Di(x),ξji)和Ii(x)可以在一个对象的相同相对位置处,且ω(x,Di(x),ξji)是相机移动一定位移ξji之后的x的变换像素点。Ω是一组像素点对,每个像素点对包括第一帧中的像素点和第二帧中的对应像素滴点。Ii(x)是坐标值为x的像素点的RGB强度;Ij(ω(x,Di(x),ξji))是像素点为ω(x,Di(x),ξji)的RGB强度。
ω(x,Di(x),ξji)是相机移动一定位移ξji之后的x的变换坐标。在一些实施例中,位移确定单元420可以基于位移的初值ξji和初始深度Di(x)来计算对应像素点ω(x,Di(x),ξji)。在一些实施例中,初始深度Di(x)可以是零矩阵。位移的初值ξji可以是 变量。为了获得初始位移ξji,1,位移确定单元420可能需要如迭代公式(1)所示的位移的初值ξji。在一些实施例中,位移的初值ξji可以基于车轮的位移ξji′和相机相对于车轮的位移ξji′来确定,参见本披露中其他地方(例如,图17A及其描述)对初值ξji的描述。在一些实施例中,位移的初值可以是ξji′和ξji″的向量和。尝试围绕位移的初值ξji初值和变量,可以获得两个帧的最小差值。
在一些实施例中,深度确定单元430可以确定更新后的深度Di,1(x)。更新后的深度Di,1(x)可以由公式(2)计算:
$$D_{i,1}(x)=\arg\min_{D_i(x)}\sum_{x\in\Omega}\Big(I_i(x)-I_j\big(\omega(x,D_i(x),\xi_{ji,1})\big)\Big)^2 \qquad (2)$$
其中,其中深度Di(x)表示公式(2)中的两个帧的差的变量,当两个帧的差为最小时,确定值Di,1(x)作为更新后的深度。在一些实施例中,初始深度Di(x)可以是零矩阵。
位移确定单元420还可以基于更新后的深度Di,1(x)产生更新后的位移ξji,1u。在一些实施例中,可以通过用更新后的深度Di,1(x)替换初始深度Di(x)基于公式(1)获得更新后的位移ξji,1u
闭环控制单元440可执行闭环检测。闭环控制单元440可以检测智能机器人110是否返回先前访问的位置,并且可以基于该检测更新位移信息。在一些实施例中,响应于确定智能机器人110已经返回到路线中先前访问的位置,闭环控制单元440可以使用g2o闭环检测来调整帧的更新后的位移以减少误差。g2o闭环检测是用于减少非线性误差的一般优化框架。调整的帧的更新后的位移可以被设置为位移信息。在一些实施例中,如果智能机器人110包括诸如激光雷达的深度传感器,则可以直接获得深度,可以基于公式(1)确定位移,然后可以通过闭环控制单元440调整位移以生成调整位移。
首先,当深度传感器检测到深度信息时,位移信息可以是基于公式(1)的一组位移,然后由闭环控制单元440调整。当深度信息是更新后的深度的集合时,位移信息可以是经公式(1)、公式(2)计算后的和闭环控制单元440调整之后的位移的集合。
在一些实施例中,闭环控制单元440可基于帧、位移信息和深度信息产生地图。
分析模块310还可以包括对象检测单元450,对象检测单元450可以检测障碍物、对象和从智能机器人110到障碍物和对象的距离。在一些实施例中,可以基于由传感器(组)230获得的数据来检测障碍物和物体。例如,对象检测单元450可以基于由声纳、红外距离传感器、光流传感器或激光雷达所捕获的距离数据来检测对象。
根据本申请的一些实施例,图5是处理器210中的示例性导航模块320的结构示意图。在一些实施例中,导航模块320可以包括绘图单元510和路线规划单元520。在一些实施例中,绘图单元510可以从数据库140接收地图。在一些实施例中,绘图单元510可以处理地图以进行路线规划。在一些实施例中,地图可以是数据库140中的完整地图的一部分。例如,包含确定的目的地和智能机器人110位置的地图可以适用于规划路线。在一些实施例中,从数据库140获得的地图可以是3D地图。在一些实施例中,绘图单元510可以通过投影技术将3D地图转换成2D地图。也就是说,绘图单元510可以将3D地图中的对象划分为像素点,并将像素点投影到水平地表面以生成2D地图。一旦通过绘图单元510获得了2D地图,路线规划单元520可以基于所传送的2D地图来规划从智能机器人110的位置到目的地的路线。
智能机器人控制模块330可基于导航模块320中的路线规划单元520所规划的路线来确定控制参数。在一些实施例中,智能机器人控制模块330可将路线分成一组段。智能机器人控制模块330可以获得段的一组节点。在一些实施例中,两个段之间的节点可以是前一段的终点和后一段的起点。可以基于起点和终点来确定一段的控制参数。
在一些实施例中,在智能机器人110在段中运动期间,智能机器人110的终点可能与段的预定终点不匹配,路线规划单元520可以基于不匹配的终点(新的智能机器人110的位置)和目的地规划新路线。在一些实施例中,智能机器人控制模块330可以分割新路线并生成一个或多个新段,然后智能机器人控制模块330可以确定每个新段的一组控制参数。
图6和图7是智能机器人110的运动控制的示例。如图6所示,运动模块以角速度ω围绕点ICC移动。运动模块具有两个轮子,包括以速度vl移动的左轮610和以速 度vr移动的右轮620。在一些实施例中,左轮610和右轮620之间的距离是L。左轮610和右轮620到两个轮中心点O的距离都是L/2。中心点O和点ICC的距离是R。
图7是智能机器人110的控制参数确定方法的一种示例性示意图。如图7所示,智能机器人110的运动模块在dt内从点O1运动到点O2。点O1和点ICC的连线到点O2和点ICC的连线的夹角为α。如果dt、L、R和α已知,可以计算出左轮的速度vl和右轮的速度vr
根据本申请的一个实施例,图8是传感器(组)230的一种示例性结构框图。传感器(组)230可以包括一个图像传感器810、一个加速度计820、一个陀螺仪830、一个声呐840、一个红外距离传感器850、一个光流传感器860、一个激光雷达870和一个导航传感器880。
图像传感器810可以捕捉图像数据。在一些实施例中,基于所述图像数据,分析模块310可以构建地图。在一些实施例中,所述图像数据可以包括帧、每一帧上每个像素点的初始深度和位移。在一些实施例中,所述初始深度和位移可以用来确定深度和位移。关于所述深度和位移的获取方法可参见本申请其他部分的说明(详细描述见图4中的公式(1))。在一些实施例中,所述位移可以包括在拍摄两个相邻帧的一个时间间隔间的轮子的位移和相机相对于轮子的位移。
为了保持一个运动模块和一个云台的平衡,加速度计820和陀螺仪830可以共同运行。为了从传感器(组)230获取稳定的信息,所述平衡是必要的。在一些实施例中,为了将俯仰姿态(pitch attitude)控制在一定阈值内,加速度计820和陀螺仪830可以共同运行。在一些实施例中,所述加速度计820和陀螺仪830可分别由运动模块和云台持有。关于平衡保持相关描述可参见本申请其他部分,例如图18、图19及其描述。
声呐840、红外距离传感器850和光流传感器860可以用来对智能机器人110定位。在一些实施例中,可以采用声呐840、红外距离传感器850和光流传感器860中的一种或任意一种组合对智能机器人110定位。
激光雷达870可以检测在一帧中的物体的深度。也就是说,激光雷达870可以获取每一帧的深度,处理器210中的分析模块310不需要计算深度。激光雷达870获得的深度可以直接用来计算图4中公式(1)中所描述的位移。基于公式(1)获得的位移可以通过闭环控制单元440进行调整。
通过检测智能机器人110和一个物体或障碍物间的距离,声呐840、红外距离传感器850和光流传感器860可以对智能机器人110进行定位。导航传感器880可以在一个粗 略的区域或位置范围内对智能机器人进行定位。在一些实施例中,导航传感器880可以对带有任何类型的定位系统的智能机器人110进行定位。所述定位系统可以包括全球定位系统(Global Positioning System,GPS)、北斗导航或定位系统和伽利略定位系统。
根据本申请的一些实施例,图9是图2中所描述的机身260的一种示例性框图。机身260可以包括一个外壳910、一个运动模块920和一个云台930。外壳910可以是机身260的一个壳,所述机身260的壳可以保护智能机器人110中的模块和单元。运动模块920可以是智能机器人110中的运动操作元件。在一些实施例中,运动模块920可以基于处理器210中的智能机器人控制模块330生成的控制参数进行运动。例如,由智能机器人控制模块330确定的路线的一段路线中,控制参数的确定可以是基于所述一段路线的起点和终点。然后,为了让智能机器人110从所述起点运动到所述终点,所述控制参数可以从智能机器人控制模块330传输到运动模块920。在一些实施例中,云台930可以是至少一个图8中所描述的传感器的支撑设备。云台930可以支撑一个图像传感器810,例如,一个照相机,来获取帧。在一些实施例中,云台可以支撑一个图像传感器810,例如,一个照相机,来捕捉帧。在一些实施,云台930可以支撑加速度计820和陀螺仪830,通过保持云台支撑的传感器的平衡来获得稳定的信息。在一些实施例中,为了检测智能机器人110和一个物体或障碍物间的距离,云台930可以支撑声呐840、红外距离传感器850和光流传感器860中的至少一种传感器。在一些实施例中,为了检测深度信息或其它信息,云台930还可以支撑激光雷达870和其它传感器。在一些实施例中,导航传感器880可以安装在云台930上。在一些实施例中,由云台支撑的传感器可以集成在一个智能手机上。
图10是运动模块920的一种示例性示意图。运动模块920可以包括一个运动单元和一个载体1010。所述运动单元可以包括两个轮子,所述两个轮子可以包括一个左轮610和一个右轮620。载体1010可以承载声呐840或光流传感器860来检测物体或障碍物。在一些实施例中,载体1010可以包括加速度计820(未在图10中显示)和陀螺仪830(未在图10中显示)来保持运动模块920的平衡。在一些实施例中,载体1010可以包括其它传感器,例如,红外距离传感器850,来获得其它需要的信息。
如图9所示,云台930可以支撑传感器(组)230以获得信息生成地图、规划路线或生成控制参数。根据本申请的一些实施例,图11是图9中所描述的机身260中云台930的一种示例性示意图。在一些实施例中,云台930可以包括一个用于控制X轴周围旋转的转轴1170、一个用于控制Y轴周围旋转的转轴1150和一个用于控制Z轴周围旋转的转轴1130。所述X轴可以是水平面中的第一个轴,所述Y轴可以是水平面中的第二个轴,所 述Z轴可以是垂直于所述水平面的一个竖直轴。在一些实施例中,云台930可以包括一个用于连接转轴1170和传感器的连接杆1180、一个用于连接转轴1150和转轴1170的连接杆1160和一个用于连接转轴1130和转轴1150的连接杆1140。在一些实施例中,云台930可以包括一个连接件1110、一个连接杆1114和一个动态Z-缓冲杆1120。在一些实施例中,传感器可以集成到一个用户设备130中(例如,智能手机)。用户设备130可以包括传感器如图像传感器810、加速度计820、陀螺仪830和导航传感器880。云台930还可以包括一个连接块1190来支撑用户设备130。在云台930的操作期间,用户设备130中的传感器获取信息。在一些实施例中,通过调整云台930的姿态来控制用户设备130中的传感器以获得合适的信息。在一些实施例中,云台930的姿态可以通过在X轴、Y轴和Z轴周围旋转转轴1170、转轴1150和转轴1130进行调整。
传统的3-轴云台可以用于空中摄影。为了在路线运动过程中保持云台930的稳定性,在云台930中采用动态Z-缓冲连接杆1120。动态Z-缓冲连接杆1120可以保持云台930在Z轴的稳定性。在一些实施例中,动态Z-缓冲连接杆1120可以是一个伸缩杆,可以沿着Z轴进行扩展和收缩。云台930中的动态Z-缓冲连接杆1120操作方法在图20中进行说明。根据智能机器人控制模块330生成的控制参数,对动态Z-缓冲连接杆1120的转轴1130、1150、1170的旋转和竖直运动进行控制。
智能机器人110中可以有多个模块和单元。根据本发明的一些实施例,图12是智能机器人110的一种简单的系统。如图12所示,智能机器人110可以包括处理器210、运动模块920和云台930。在一些实施例中,处理器210可以包括分析模块310、导航模块320和智能机器人控制模块330。运动模块920可以包括一个运动单元1210、第一类型传感器1220和通信端口240。云台930可以包括一个云台控制单元1230、通信端口240和第二类型传感器1240。在一些实施例中,处理器210可以发送控制参数以控制运动模块920中的运动单元1210和云台930中的云台控制单元1230。
在一些实施例中,第一类型传感器1220和第二类型传感器1240可以获取信息。分析模块310可以处理获取的信息和构建地图。在一些实施例中,所述构建的地图可以被发送到数据库140。为了确定一条到达目的地的路线,需要一张地图进行导航,分析模块310可以从数据库140下载一个最新的地图,并且将所述最新的地图发送给导航模块320。导航模块320可以处理所述最新的地图,并且确定一条从智能机器人所处位置到目的地的路线。在一些实施例中,分析310模块可以不用下载完整地图,包括智能机器人所处位置和目的地的完整地图的部分足够用于规划路线。在一些实施例中,分析模块310构建的地 图可以包括智能机器人110的位置和目的地,而且所述地图是数据库中最新的地图。分析模块310构建的地图可以被发送到导航模块320以规划路线。导航模块320可以包括绘图单元510和路线规划单元520。在一些实施例中,根据来自分析模块310的最新地图或构建的地图,绘图单元510可以生成一个2D地图以进行路线规划。路线规划单元520可以规划路线,所述路线可以被发送到智能机器人控制模块330。智能机器人控制模块330可以将所述路线分成一段或多段路线。智能机器人控制模块330可以针对每段线路生成控制参数。每段线路有一个起点和终点,所述一段线路的终点可以是下一段线路的起点。在一些实施例中,智能机器人110在一段线路中的终点位置可能与为这段线路预设的终点不匹配,这可能会影响剩余部分路线的规划。由此,有必要根据不匹配的位置(智能机器人的新位置110)和目的地重新规划路线。在一些实施例中,在一段线路后,如果检测到不匹配的情况,所述重新规划路线过程可以通过导航模块320执行。
在一些实施例中,如果运动模块920中的第一类型传感器1220和云台930中的第二类型传感器1240不稳定,由运动模块920中的第一类型传感器1220和云台930中的第二类型传感器1240捕捉的信息可能不适合用于构建地图。为了稳定第一类型传感器1220和第二类型传感器1240,智能机器人控制模块330可以生成控制参数以稳定运动模块920和云台930。
传感器可以安装在运动模块920和云台930上。在一些实施例中,第一类型传感器1220可以包括加速度计820、陀螺仪830、声呐840、红外距离传感器850、光流传感器860、激光雷达870和导航传感器880中的至少一种。在一些实施例中,第二类传感器1240可以包括图像传感器810、加速度计820、陀螺仪830,声呐840、红外距离传感器850、光流传感器860、激光雷达870和导航传感器880中的至少一种。
如图12所示,处理器210可以通过通信端口240建立运动模块和云台930间的通信。在一些实施例中,通信端口240可以是任何形式的。例如,通信端口240可以是一个有线或无线收发器。在一些实施例中,通信端口240可以以接口的形式存在,用于交互式通信。例如,通信端口240可以通过运行应用程序接口(API)的电路建立处理器210和智能机器人110其它部之间的通信。在一些实施例中,API是一组用于构建软件和应用程序的子例程定义,协议和工具。在一些实施例中,API可以通过提供所有构件让程序的开发更简单,然后,可以组装在一起。在一些实施例中,所述API协议可以用于设计无线通信的电路,例如,所述无线电路可以是Wi-Fi、蓝牙、红外(IR)、超宽带(UWB)和无线域网(ZigBee)等等,也可以是一个移动通信模块如3G、4G和长期演进(LTE)。API可 以分离底部硬件(例如运动模块920或云台)和控制硬件(例如处理模块210)。在一些实施例中,通过调用通信端口240中的API,处理模块210(例如智能手机的一部分)可以控制运动模块920中的轮子的运动和云台930中图像传感器(例如相机)的姿态。在一些实施例中,运动模块920中的第一类型传感器1220可以将信息(例如位置数据)发送到所述智能手机。在一些实施例中,云台930中的第二类型传感器1240可以将信息(例如相机姿态)发送到所述智能手机。
根据本申请的一些实施例,图13是确定控制智能机器人的控制参数的示例性流程图。图13所述的步骤1300可以通过智能机器人110中的处理器210根据存储在存储器220中的指令完成。
在步骤1310,处理器210可以从传感器(组)230中获取信息。如图3和图12所述,处理器210中的分析模块310可以通过API通信端口240从运动模块920中的第一类型传感器和云台930中的第二类型传感器接收信息。在一些实施例中,可以通过信息分析来控制智能机器人110的运动。在另一些实施例中,可以通过信息分析来维持智能机器人110中的运动模块920和云台930的稳定。
在步骤1320,处理器210可以根据所接收到的信息确定智能机器人110的目的地和当前位置。例如,处理器210中的分析模块310可以从传感器(组)230中接收位置数据。所述传感器包括但不限于声呐、红外距离传感器、光流传感器、激光雷达、导航传感器等。在一些实施例中,用户可以通过输入输出(I/O)接口250确定目的地。例如,用户可以为智能机器人110输入目的地。处理器210可以使用用户确定的目的地的信息为智能机器人110提供一个运动的路线。在一些实施例中,处理器210可以根据所接收到的信息确定智能机器人110的当前位置。在一些实施例中,处理器210可以根据从传感器(组)230所获得的信息确定智能机器人110的当前位置。例如,处理器210可以根据定位系统(如,GPS)中的导航传感器880获取的信息确定智能机器人的粗略位置。又例如,处理器210可以根据声呐840、红外距离传感器850和光流传感器860中至少一个传感器所获取的信息确定智能机器人110的精确位置。
在步骤1330,处理器210可以根据智能机器人110的目的地和当前位置得到一个地图,所述地图可以用于规划路线。在一些实施例中,一个包含大量代表城市的标记点的完整地图可以储存在数据库140中。通过步骤1310和步骤1320,智能机器人110的目的地和当前位置确定后,需要一个包含智能机器人110目的地和当前位置的地图来规划从当前位置到目的地的路线。在一些实施例中,包含智能机器人110目的地和当前位置的地图 可以是一个完整地图的一部分。在一些实施例中,处理器210中的分析模块310可以根据智能机器人110的目的地和当前位置从数据库140中获得完整地图的合适部分。在一些实施例中,分析模块310可以根据从传感器(组)230获得的信息构建地图,所构建的地图可以发送给数据库140来更新整个地图。在一些实施例中,所构建的地图可以包含智能机器人110的目的地和当前位置。导航模块320可以使用所构建的地图来规划路线。
在步骤1340,根据步骤1330所得的地图,从智能机器人110的当前位置到目的地的路线可以被规划。所述路线规划可以由导航模块320完成。在一些实施例中,导航模块320可以通过绘图单元510将所得地图转换为二维地图。然后,路线规划单元520可以基于所述二维地图得到一个从智能机器人110当前位置到目的地的路线。
在步骤1350，智能机器人控制模块330可以将所述规划路线分割成一段或者多段。路线分割可以基于一个阈值来判断是否执行，比如，如果所规划的路线小于一个阈值，就不需要进行路线分割。在一些实施例中，路线分割可以由智能机器人控制模块330按照存储器220中的指令完成。
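仅仅作为示例，以下Python代码草图示意了基于长度阈值对规划路线进行分割的一种可能方式：当路线总长不大于阈值时不进行分割，否则按阈值将路线切分为若干段，且前一段的终点即为下一段的起点。函数名与参数均为说明性假设。

```python
import math


def segment_length(p, q):
    """两个路标点之间的欧氏距离。"""
    return math.hypot(q[0] - p[0], q[1] - p[1])


def split_route(waypoints, threshold):
    """将路线（路标点序列）分割成一段或多段，每段长度尽量不超过threshold。"""
    total = sum(segment_length(waypoints[i], waypoints[i + 1])
                for i in range(len(waypoints) - 1))
    if total <= threshold:
        return [waypoints]  # 路线短于阈值时不分割

    segments, current, length = [], [waypoints[0]], 0.0
    for i in range(1, len(waypoints)):
        step = segment_length(waypoints[i - 1], waypoints[i])
        if length + step > threshold and len(current) > 1:
            segments.append(current)
            current, length = [current[-1]], 0.0  # 上一段终点作为下一段起点
        current.append(waypoints[i])
        length += step
    segments.append(current)
    return segments
```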
在步骤1360,智能机器人控制模块330可以根据步骤1350中分割的路段确定控制机器人的控制参数。在一些实施例中,步骤1350中智能机器人控制模块330分割的每一个路段都有一个起点和一个终点。在一些实施例中,智能机器人控制模块330可以基于某个路段的起点和终点确定智能机器人在该路段上的控制参数。关于如何确定两点之间的控制参数可以参考图6和图7中的具体描述。在一些实施例中,控制参数需要根据时间不断调整。例如,当一个智能机器人110穿过一个路段上的一条直线上的两点时,智能机器人110从第一点到第二点的过程中,可以在不同的时间段内采用不同的运动速度。在一些实施例中,控制参数是用于保证智能机器人在沿规划路线的运动中保持稳定。例如,通过维持运动模块920和云台930的稳定,可以使得获取的传感信息相对更加准确。又例如,当路线不平坦时,可以使用控制参数使得云台930沿垂直于地面的方向保持稳定。
在一些实施例中,智能机器人在根据预设控制参数经过一个路段时,智能机器人110可能会停在一个与智能机器人控制模块330为该路段预设的终点不匹配的位置上。导航模块320可以根据智能机器人所在的匹配错误的位置和目的地重新规划一个新的路线。智能机器人控制模块330可以将新规划的路线进一步分割成一段或者多段,智能机器人控制模块330也可以为分割后的一段或者多段路线确定智能机器人的控制参数。在一些实施例中,位置不匹配可以在智能机器人110通过每一路段后根据智能机器人的实际位置和该路段的预设终点的比较结果来估算。
根据本申请的一些实施例,图14是处理器210生成地图的示例性流程图。所示构建地图的步骤可以由分析模块310根据传感器(组)230获得的信息完成。
在步骤1410,分析模块310可以从图像传感器810中获取图像数据。在一些实施例中,图像数据可以包括大量的帧,帧之中每一像素点的初始深度和/或位移。所述位移可以包括轮子的位移和相机相对于轮子的位移。在一些实施例中,初始深度可以设定为一个零矩阵。在一些实施例中,如果传感器(组)230包含激光雷达或者具有深度探测功能的相机,那么深度信息可以由传感器(组)获取。
在步骤1420,分析模块310可以根据图像数据确定一个或者多个参考帧。在一些实施例中,图像数据可以包括大量的帧,帧之中每一像素点的初始深度和/或位移。在一些实施例中,分析模块310可以从这些帧之中选择一个或者多个参考帧。具体描述请参见本申请的其它部分,比如,图15及其对应的说明书部分。在一些实施例中,参考帧可以用于构建地图。
在步骤1430,分析模块310可以根据一个或多个参考帧确定深度信息和位移信息。也就是说,为了获取每一帧的位移信息和深度信息,可以由分析模块310来处理图像数据。关于如何确定位移信息和深度信息,请参见本申请的其他部分,比如,图4及其说明书部分。
在步骤1440,分析模块310可以根据一个或多个参考帧、帧的深度信息和位移信息生成地图。在一些实施例中,三维地图可以是通过将一个或多个参考帧与所对应的位移连接起来获得的。
地图可以通过大量的帧及其对应的位移信息和深度信息确定。在一些实施例中,步骤1420和步骤1430的顺序可以颠倒,或者同步进行。例如,步骤1420在确定一个或多个参考帧的过程中也可以包含步骤1430中确定位移信息和深度信息的过程。也就是说,步骤1430可以是步骤1420确定一个或多个参考帧的过程的子步骤。如图4的说明所述,图像数据可以通过处理获得一个或多个结果。在一些实施例中,所述一个或多个结果可以包括位移信息(比如,相邻两个帧之间的相机位移)和深度信息(比如,两个相邻帧之中一个物体的深度)。在一些实施例中,所述一个或多个结果可以通过g2o闭环检测技术来调整,从而生成调整后的位移信息。在一些实施例中,调整后的位移信息可以作为位移信息来生成地图。分析模块310可以基于一个或多个参考帧及其对应的位移信息和深度信息生成地图。
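仅仅作为示例，以下Python代码草图示意了将一个或多个参考帧与其对应的位移信息连接起来、得到各参考帧位置并组装成地图的一种简化方式。此处把位移简化为三维平移向量，且未包含g2o闭环检测等调整步骤；变量名与数据结构均为说明性假设。

```python
import numpy as np


def assemble_map(reference_frames, displacements):
    """根据参考帧及相邻参考帧之间的位移，计算每个参考帧的位置并组装成地图。

    reference_frames: 参考帧列表（例如图像及其深度信息）。
    displacements: 长度为 len(reference_frames) - 1 的位移向量列表。
    返回 (位置, 参考帧) 的列表，作为一种简化的地图表示。
    """
    positions = [np.zeros(3)]  # 第一个参考帧作为原点
    for d in displacements:
        positions.append(positions[-1] + np.asarray(d, dtype=float))
    return list(zip(positions, reference_frames))


# 使用示例（假设有三个参考帧和两段位移）：
# frames = ["frame_0", "frame_1", "frame_2"]
# disps = [(0.5, 0.0, 0.0), (0.3, 0.1, 0.0)]
# sparse_map = assemble_map(frames, disps)
```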
根据本申请的一些实施例,图15是确定一个或多个参考帧的示例性流程图。该步骤可以由分析模块310、位移判定单元420和深度判定单元430根据图像传感器810获取的图像数据完成。具体地,分析模块310可以根据一个或多个结果(比如,位移信息和深度信息)确定一个或多个参考帧。
在步骤1502，分析模块310可以获取包含许多帧的图像数据，所述许多帧可以包含至少一个第一帧和一个第二帧。在一些实施例中，第一帧可以是一个现有帧，第二帧可以是第一帧的一个续帧。也就是说，图像传感器810可以在一个时间点抓取第一帧，并在下一个时间点抓取第二帧。换言之，所述大量的帧在时间域上可以彼此相邻。
在一些实施例中,基于已经获取的图像数据,分析模块310可以预先处理图像。仅仅作为示例,处理图像可以包括图像分割、图像增强、图像融合、图像压缩等中的一种或任意几种组合。
在一些实施例中，分割图像的方法可以包括小波变换、Gabor变换、形态学图像处理方法、图像频域处理方法、基于直方图的方法（例如，基于颜色直方图的方法、基于强度直方图的方法、基于边缘直方图的方法等）、基于压缩的方法、区域生长方法、基于偏微分方程的方法、变分法、图像分割法、分水岭变换、基于模型的分割法、多尺度分割、三角测量法、共生矩阵法、边缘检测法、阈值法等中的一种或任意几种组合。
在一些实施例中,图像增强可以是对图像的一种或多种性质的增强。图像的性质包括,对比度(局部或全局)、亮度(局部或全局)、饱和度(局部或全局)、锐度(局部或全局)、图像的灰度等中的一种或任意几种组合。
在一些实施例中，分析模块310可以从图像中确定一个或多个历史空间特征。空间特征可以涉及在一幅图像中，整体或局部像素强度（整体或局部亮度）、目标（例如，平面、突起、障碍物、通道等）的位置、长度或大小等。例如，所识别的空间特征可以包括目标的面积、目标的定位、目标的形状、总体或局部亮度、目标的位置、目标的边界、目标的边缘、目标的角、目标的脊、斑点内容等中的一种或任意几种组合。在一些实施例中，分析模块310可以从两幅或两幅以上的图像中确定一个或多个历史时间特征。历史时间特征可以是多个图像或由两个以上图像组成的图像序列中，某些物理量的改变或变化。例如，历史时间特征可以包括时间模式、运动、时间梯度等中的一种或任意几种的组合。例如，基于历史特征的时间序列分析，分析模块310可以确定运动。历史特征的时间序列分析可以包括在一定时间段内对图像的分析。随时间的图像分析可以揭示存在于随时间捕获的多个静态图像中的运动模式。运动可以包括物体的平动、转动等。运动模式可以指示再次发生的季节性或周期性。在一些实施例中，可以使用移动平均或回归分析。此外，分析可以对图像数据采用某种类型的滤波器（例如，形态滤波器、高斯滤波器、非锐化滤波器、频率滤波器、平均滤波器、中值滤波器等），以减小误差。该分析可以在时域或在频域中进行。
在一些实施例中,分析模块310可以使用特定方法处理图像,以将一个或多个特征确定为一个或多个正交输入。在一些实施例中,该特定方法可以包括主成分分析(PCA)、独立成分分析、正交分解、奇异值分解、白化方法或球化方法等。该正交输入可以是线性不相关的。
在步骤1504，分析模块310可以将第一帧作为参考帧，将第二帧作为备选帧。在一些实施例中，分析模块310可以使用一种模型选择参考帧与备选帧。在一些实施例中，模型可以包括方法、算法、过程、公式、规则等中的一种或任意几种组合。仅仅作为示例，模型可以包括图像分割模型、图像增强模型、用户界面模型、工作流程模型等中的一种或任意几种组合。在一些实施例中，模型可以包括大数据和人工智能的一些模型，比如前馈神经网络（FNN）、递归神经网络（RNN）、Kohonen自组织映射、自动编码器、概率神经网络（PNN）、时间延迟神经网络（TDNN）、径向基函数网络（RBF）、学习矢量量化、卷积神经网络（CNN）、自适应线性神经元（ADALINE）模型、关联神经网络（ASNN）、生成式对抗网络（GAN，generative adversarial network）等中的一种或任意几种组合。示例性的递归神经网络（RNN）可以包括Hopfield网络、Boltzmann机、回波状态网络、长期短期存储器网络、双向递归神经网络、分级递归神经网络、随机神经网络等中的一种或任意几种组合。
在步骤1506,分析模块310可以确定对应于备选帧中一个或多个第二像素点的参考帧中的一个或多个第一像素点。在一些实施例中,参考帧和备选帧有重叠区域,此时,所述第一像素点和第二像素点可以指参考帧和备选帧的重叠区域中的一个物体的相同位置。在一些实施例中,一个或多个第一像素点可以是图4中所述的一组像素点Ω。在一些实施例中,参考帧和备选帧没有重叠区域,也就是说,参考帧中的任何区域与备选帧中的任何区域都不对应。此时,参考帧和备选帧中的像素点不能被选作为第一像素点和/或第二像素点。
在一些实施例中，分析模块310可以利用聚类算法确定备选帧中一个或多个第二像素点，和/或，参考帧中的一个或多个第一像素点。聚类方法可以包括分级聚类方法、分区聚类方法、密度聚类方法、模型聚类方法、网格聚类方法和软计算聚类方法。分级聚类方法可以包括聚集分级聚类和分割分级聚类、单链路聚类、完全链路聚类、平均链路聚类等。分区聚类方法可以包括误差最小算法（例如，K均值算法、K中心方法、K原型算法）、图形理论聚类等。密度聚类方法可以包括期望最大化算法、具有噪声的应用的基于密度的空间聚类（DBSCAN）算法、用于识别聚类结构的排序点（OPTICS）算法、自动聚类算法、通过观察的负选择偏差（SNOB）算法、MCLUST算法等。模型聚类方法可以包括决策树聚类、神经网络聚类、自组织映射聚类等。软计算聚类方法可以包括模糊聚类、用于聚类的演化方法、用于聚类的模拟退火算法等。
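仅仅作为示例，以下Python代码草图用一个简化的K均值算法对帧中的像素点（以特征向量表示，例如坐标加颜色）进行聚类，以示意如何利用聚类算法确定第一像素点和/或第二像素点。聚类算法的选择、特征的构成以及参数均为说明性假设。

```python
import numpy as np


def kmeans(points, k, iterations=20, seed=0):
    """简化的K均值聚类：points为形状(N, D)的特征矩阵，返回每个点的簇标签。"""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iterations):
        # 将每个像素点分配给最近的簇中心
        distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # 更新簇中心（空簇保持不变）
        for j in range(k):
            members = points[labels == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
    return labels


# 使用示例：把像素的 (行, 列, R, G, B) 作为特征进行聚类
# features = np.random.rand(1000, 5)
# labels = kmeans(features, k=4)
```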
在步骤1508,分析模块310可以确定参考帧和备选帧的深度信息、强度信息和/或位移信息。在一些实施例中,确定深度信息、强度信息和/或位移信息的方法可以参见图4中的描述。
在步骤1510，分析模块310可以确定备选帧是否是最后一帧。具体地，分析模块310可以探测备选帧在时间域上的下一帧是否存在，如果备选帧不存在下一帧（即备选帧是最后一帧），则该过程进入步骤1512；否则，该过程进入步骤1514。
在步骤1512，如果备选帧被确定是最后一帧，分析模块310可以输出参考帧及参考帧对应的深度和/或位移。
在步骤1514,分析模块310可以确定参考帧和备选帧之间的差值。在一些实施例中,参考帧和备选帧之间的差值可以根据参考帧和备选帧的强度信息确定。在一些实施例中,参考帧的强度可以由一个或多个第一像素点的RGB强度确定,备选帧的强度可以由一个或多个第二像素点的RGB强度确定。在一些实施例中,参考帧和备选帧的强度信息可以通过步骤1504确定。在一些实施例中,参考帧和备选帧的强度信息可以通过步骤1514在确定参考帧和备选帧的差值之前确定。
在步骤1516，分析模块310可以确定参考帧和备选帧之间的差值是否大于一个阈值。如果参考帧和备选帧之间的差值大于该阈值，则该过程进入步骤1518；否则，该过程进入步骤1520。
在步骤1518,如果参考帧和备选帧之间的差值被确定为大于该阈值,分析模块310可以将备选帧作为更新后的参考帧,将备选帧之后的帧作为更新后的备选帧。在一些实施例中,备选帧之后的帧可以是与备选帧紧密相邻的帧。此时,更新后的参考帧和更新后的备选帧被发送到步骤1506,重复该过程1500。
在步骤1520,如果参考帧和备选帧的差值被确定为不大于该阈值,分析模块310可以指定备选帧之后的帧为更新后的备选帧。此时,更新后的参考帧和更新后的备选帧将被发送到步骤1506,重复该过程1500。
在一些实施例中，步骤1518或步骤1520可以输出一个新的参考帧和一个新的备选帧，供分析模块310处理。在一些实施例中，当参考帧与备选帧之间的差值大于一个阈值时，可以通过用备选帧取代参考帧得到新的参考帧。在一些实施例中，可以通过用备选帧的下一帧取代备选帧得到新的备选帧，也就是说，备选帧的取代可以是无条件的，而参考帧的取代是有条件的。
当步骤1512输出参考帧及其对应的深度和/或位移时，该过程1500终止。在一些实施例中，为了及时终止该过程1500，可以指定一些判定终止的条件。例如，该过程1500中可以使用一个计数器，从而使得过程1500的循环次数不大于一个预设的阈值。
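仅仅作为示例，以下Python代码草图示意了过程1500中选择参考帧的循环结构：备选帧的更替是无条件的，而参考帧仅在其与备选帧的差值大于阈值时才被备选帧取代。差值在此简化为两帧对应像素RGB强度的平均绝对差；函数名与差值度量均为说明性假设。

```python
import numpy as np


def frame_difference(reference, candidate):
    """以对应像素RGB强度的平均绝对差作为两帧之间差值的一种简化度量。"""
    return float(np.mean(np.abs(reference.astype(float) - candidate.astype(float))))


def select_reference_frames(frames, threshold):
    """从按时间排列的帧序列中选择参考帧（对应过程1500的简化版本）。"""
    reference = frames[0]          # 第一帧作为初始参考帧（步骤1504）
    reference_frames = [reference]
    for candidate in frames[1:]:   # 备选帧逐帧向后推进（步骤1520）
        if frame_difference(reference, candidate) > threshold:
            reference = candidate  # 差值大于阈值时，备选帧成为新的参考帧（步骤1518）
            reference_frames.append(reference)
    return reference_frames


# 使用示例（假设frames为若干形状相同的RGB图像数组）：
# frames = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(5)]
# refs = select_reference_frames(frames, threshold=10.0)
```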
根据本申请的一些实施例,图16是获取参考帧和/或备选帧的深度信息和位移信息的示例性流程图。在一些实施例中,该过程可以由分析模块310完成。在一些实施例中,该过程类似于图4中所述的获取一个帧的位移和深度的方法。
在步骤1610,分析模块310可以从图像传感器810获得的大量帧之中获取一个第一帧和一个第二帧。在一些实施例中,分析模块310可以从图像传感器抓取的大量帧之中选择第一帧和第二帧。在一些实施例中,第一帧和第二帧可以在时间域上彼此相邻,第一帧可以是一个现有帧,第二帧可以是一个续帧。
在步骤1620,分析模块310可以识别第一帧中的与第二帧中一个或多个第二像素点对应的一个或多个第一像素点。相对于第二帧中的像素点,第一帧中的像素点可以使用图15中所述的步骤1506识别。
在步骤1630,分析模块310可以根据所述的一个或多个第一像素点和一个或多个第二像素点获取初始深度。在一些实施例中,初始深度可以设定为零矩阵。在步骤1640,分析模块310可以根据所述的一个或多个第一像素点、一个或多个第二像素点、和/或初始深度确定初始位移。例如,步骤1640可以通过图4中所述的公式(1)实现。
在步骤1650，分析模块310可以根据所述的一个或多个第一像素点、一个或多个第二像素点以及初始位移确定更新后的深度。在一些实施例中，步骤1650可以通过图4中所述的公式（2）实现。在一些实施例中，分析模块310可以通过最优化算法求解公式（2）获得更新后的深度。最优化算法可以包括，例如，随机搜索、牛顿法、准牛顿法、演化算法、坐标下降法、近端梯度法、梯度下降法、最速下降法、共轭梯度法、双共轭梯度法等。
在步骤1660，分析模块310可以根据所述的一个或多个第一像素点、一个或多个第二像素点、和/或更新后的深度确定更新后的位移。在一些实施例中，步骤1660可以通过图4中所述的公式（1）实现，即用更新后的深度取代初始深度。
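仅仅作为示例，以下Python代码草图示意了步骤1630至1660中"初始深度→初始位移→更新后的深度→更新后的位移"的交替求解顺序。由于公式（1）和公式（2）的具体形式在此处不再重复，两个求解函数以可调用参数的形式传入，其名称与签名均为说明性假设。

```python
import numpy as np


def refine_depth_and_displacement(first_pixels, second_pixels,
                                  solve_displacement, solve_depth,
                                  initial_displacement, iterations=1):
    """按步骤1630-1660的顺序交替求解深度和位移。

    solve_displacement: 实现公式(1)的可调用对象（假设），输入像素点、深度和位移初值。
    solve_depth: 实现公式(2)的可调用对象（假设），输入像素点和当前位移。
    """
    depth = np.zeros(len(first_pixels))  # 初始深度设为零矩阵（步骤1630）
    displacement = solve_displacement(first_pixels, second_pixels,
                                      depth, initial_displacement)       # 步骤1640
    for _ in range(iterations):
        depth = solve_depth(first_pixels, second_pixels, displacement)   # 步骤1650
        displacement = solve_displacement(first_pixels, second_pixels,
                                          depth, initial_displacement)   # 步骤1660
    return depth, displacement
```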
如图4所述,为了通过公式(1)确定位移,可以首先得到一个初始位移。如公式(1)所示,确定初始位移需要提供位移的初值。根据本申请的一些实施例,图17A是确定位移初值的一个示例性流程图。该过程可以由分析模块310根据图像传感器810获得的图像数据完成。
在步骤1710,图像数据可以由分析模块310获得。在一些实施例中,位移的初值可以根据所述图像数据确定。具体地,位移的初值可以根据所述图像数据中的位移确定。在一些实施例中,图像数据中的位移可以包括在获取两个相邻帧的时间间隔内,运动单元(如,两个轮子)的位移和相机相对于运动单元的位移。
在步骤1720,分析模块310可以基于所述图像数据,获得一个与运动单元关联的第一位移。在一些实施例中,所述与运动单元相关的第一位移可以是一段时间内两个轮子中心的位移。在一些实施例中,所述与运动单元相关的第一位移可以是一段时间内一个点的位移,所述点配置有导航传感器。在一些实施例中,所述导航传感器可以分别位于两个轮子的中心位置。在一些实施例中,所述时间段可以是图像传感器810获取两个帧的时间间隔。
在步骤1730,分析模块310可以获取一个相对于运动单元,与图像传感器810关联的第二位移。在一些实施例中,所述第二位移可以是图像传感器810相对于运动单元的相对位移。在一些实施例中,图像传感器810可以是一个相机。
在步骤1740,分析模块310可以根据第一位移和第二位移确定一个与图像传感器810关联的第三位移。在一些实施例中,第三位移可以是第一位移和第二位移的矢量和。在一些实施例中,第三位移可以是用于确定初始位移的位移初值。
在智能机器人110运动过程中，需要控制云台930来获取智能机器人110精确的姿态。在一些实施例中，可以在云台930中通过控制轴的旋转角来控制智能机器人110的姿态。根据本申请的一些实施例，图17B是确定智能机器人110姿态的示例性流程图。该过程可以由分析模块310根据云台930中轴的旋转角完成。
在步骤1715,图像数据可以由分析模块310获取。如图17A所述,图像数据可以包括帧、位移和初始深度。在一些实施例中,图像数据还可以包括旋转信息。
在步骤1725，分析模块310可以获取一个相对于参考轴的第一旋转角。所述第一旋转角与运动单元相关联，并且可以基于所述图像数据获取。在一些实施例中，与运动单元关联的、相对于参考轴的第一旋转角可以根据图像数据中的旋转信息获取。在一些实施例中，第一旋转角可以是一段时间内产生的旋转角度。在一些实施例中，所述时间段是图像传感器810获取两个帧的时间间隔。
在步骤1735，分析模块310可以获取一个在一段时间内相对于运动单元、与图像传感器810关联的第二旋转角。在一些实施例中，第二旋转角可以是图像传感器810相对于运动单元的相对旋转角。在一些实施例中，图像传感器810可以是一个相机。
在步骤1745，分析模块310可以确定一个相对于参考轴、与图像传感器810相关联的第三旋转角。在一些实施例中，第三旋转角可以根据第一旋转角和第二旋转角确定。在一些实施例中，第三旋转角可以是第一旋转角和第二旋转角的矢量和。
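仅仅作为示例，以下Python代码草图示意了图17A和图17B中将与运动单元关联的第一位移（或第一旋转角）和图像传感器相对于运动单元的第二位移（或第二旋转角）进行矢量求和、得到第三位移（或第三旋转角）的计算方式。变量名与示例数值均为说明性假设。

```python
import numpy as np


def compose_displacement(first_displacement, second_displacement):
    """第三位移为第一位移与第二位移的矢量和（图17A，步骤1740）。"""
    return np.asarray(first_displacement, dtype=float) + \
        np.asarray(second_displacement, dtype=float)


def compose_rotation(first_angle, second_angle):
    """第三旋转角为第一旋转角与第二旋转角的矢量和（图17B，步骤1745）。"""
    return np.asarray(first_angle, dtype=float) + np.asarray(second_angle, dtype=float)


# 使用示例：
# wheel_disp = (0.40, 0.00, 0.00)       # 一段时间内两个轮子中心的位移（假设值）
# camera_rel_disp = (0.02, 0.00, 0.05)  # 相机相对于运动单元的位移（假设值）
# camera_disp = compose_displacement(wheel_disp, camera_rel_disp)  # 作为位移初值
```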
在智能机器人110运动过程中，运动模块920和云台930可以配置传感器（组）230来获取信息。在一些实施例中，传感器（组）230可以位于载体1010中，也可以位于云台930支撑的智能手机中。在一些实施例中，运动模块920和云台930要获得精确可靠的信息可能需要全方位的稳定措施。关于如何使运动模块920和云台930相对于水平面保持平衡的方法会在图18的说明中具体介绍。
根据本申请的一些实施例，图18是陀螺仪和加速度计如何确定水平面和Z轴夹角的示例性框图。在一些实施例中，水平面可以是载体1010的搭载面，水平面和Z轴之间的夹角可以根据陀螺仪数据和加速度计数据确定。在一些实施例中，水平面可以是云台930探测其俯仰角时所参考的相对平面。
如图18所示的系统框图，该系统可以包括加法器1810、积分器1820、组分提取器1830和加法器1840。所述加法器1810、积分器1820、组分提取器1830和加法器1840可以构成一个用于确定输出角的反馈回路。积分器1820可以获取图像传感器810获得的每一帧之中的水平面和Z轴之间的夹角。假定图像传感器810在时间t1时获得第一帧，在时间t2时获得第二帧。那么，在时间t1和t2时，陀螺仪830和加速度计820可以得到角速度和夹角信息。在一些实施例中，与时间t1时获得的第一帧关联的反馈输出角θ1、以及时间t2时获得的陀螺仪数据和加速度计数据可以用于确定与时间t2时获得的第二帧关联的输出角θ2。
首先,第一帧的陀螺仪数据和加速度计数据可以在时间t1处理。积分器1820可以生成与第一帧关联的输出角θ1,加速度计820可以生成第一夹角θ1′,加法器1840可以根据输出角θ1和第一夹角θ1′生成第二夹角θ1″。在一些实施例中,第二夹角θ1″可以由输出角θ1和第一夹角θ1′矢量相减得到。组分提取器1830可以基于第二夹角θ1″确定补偿角速度ω1″。在一些实施例中,组分提取器1830可以是差分器。
然后,第二帧的陀螺仪数据和加速度计数据可以在时间t2处理。陀螺仪830可以生成角速度ω2,加法器1810可以根据角速度ω2和补偿角速度ω1″生成修正后的角速度ω2′。在一些实施例中,修正后的角速度ω2′可以由角速度ω2和补偿角速度ω1″矢量相加得到。最后,积分器1820可以基于修正后的角速度ω2′输出在时间t2时与第二帧关联的夹角θ2
在一些实施例中,图18所述的方法可以由处理器210完成。例如,陀螺仪数据和加速度计数据可以通过API接口传输给处理器210(比如,智能手机的一部分)。在获得每一帧时,处理器210都可以确定一个输出角。在一些实施例中,水平面和Z轴的夹角可以在获取每一帧时探测得到。可以根据与每一帧相关联的实时输出角维持系统在水平面上的平衡。
图19是确定与帧关联的角度的流程1900的一个示例性流程图。所述流程1900由处理器210执行。
在步骤1910,处理器210可以获取多个包含第一帧和第二帧的帧。在一些实施例中,在间隔时刻,所述第一帧和第二帧可以被图像传感器810捕捉。例如,在t1时刻,图像传感器810拍摄第一帧,在t2时刻,图像传感器810拍摄第二帧,t1时刻和t2时刻间的时间可以是图像传感器810的采样间隔。
在步骤1920,处理器210可以获取与第一帧和/或第二帧关联的陀螺仪数据和加速度计数据。在一些实施例中,陀螺仪数据和加速度计数据可以包括参数如角速度和角度。
在步骤1930,根据与第一帧关联的加速度计数据,处理器210可以确定第一角度信息。在一些实施例中,所述第一角度信息可以包括第一角度。
在步骤1940，根据所述第一角度信息和与第一帧关联的角度信息，处理器210可以确定补偿角度信息。在一些实施例中，所述与第一帧关联的角度信息可以是与第一帧关联的输出角。在一些实施例中，可以通过从所述第一角度信息中矢量减去与第一帧关联的输出角来进行处理。在一些实施例中，所述补偿角度信息可以是补偿角速度。根据从第一角度信息减去与第一帧关联的输出角的运算，所述补偿角速度可以由组分提取器1830确定。
在步骤1950，根据补偿角度信息和与第二帧关联的陀螺仪数据，处理器210可以确定第二角度信息。在一些实施例中，所述第二角度信息可以是在拍摄第二帧的t2时刻，处理器210检测到的水平面和Z轴之间的夹角。
如图18和19所示，与第二帧关联的输出角可以在处理后续帧时作为与第一帧关联的输出角被反馈使用。采用这种循环形式以及陀螺仪数据和加速度计数据，处理器210可以获取每一帧的输出角。在一些实施例中，当水平面和Z轴的夹角超出一定阈值时，可以生成一个保持平衡的控制信号。
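仅仅作为示例，以下Python代码草图给出了图18、图19所述反馈结构的一种常见实现方式（互补滤波），其中组分提取器1830被近似为一个比例增益。采样间隔、增益等参数均为说明性假设，并非对实际电路或算法的限定。

```python
def update_output_angle(previous_output_angle, gyro_rate, accel_angle,
                        dt, gain=0.98):
    """根据上一帧的输出角、本帧陀螺仪角速度和加速度计夹角，估计本帧的输出角。

    previous_output_angle: 与第一帧关联的输出角θ1（弧度）。
    gyro_rate: 与第二帧关联的陀螺仪角速度ω2（弧度/秒）。
    accel_angle: 由加速度计数据得到的夹角（弧度）。
    dt: 两帧之间的采样间隔（秒）。
    """
    # 陀螺仪积分项：在上一输出角的基础上对角速度积分
    integrated = previous_output_angle + gyro_rate * dt
    # 加速度计项对积分漂移进行补偿，gain控制两者的权重
    return gain * integrated + (1.0 - gain) * accel_angle


# 使用示例：逐帧递推每一帧的输出角（sensor_stream为假设的数据序列）
# angle = 0.0
# for gyro_rate, accel_angle in sensor_stream:
#     angle = update_output_angle(angle, gyro_rate, accel_angle, dt=0.02)
```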
保持运动模块920或云台930的水平平衡的方法如图18和19所示。在智能机器人110的运动过程中，安装在由云台930夹持的智能手机中的传感器可以获取信息。在一些实施例中，所述信息可以包括图像数据、陀螺仪数据、加速度计数据以及从其他传感器获取的数据。为了使智能手机中的第二类型传感器1240稳定地获取信息，需要通过处理器210保持水平平衡。另一方面，道路可能不平坦，使得由云台930支撑的智能手机中的第二类型传感器1240不能稳定地获取信息。在一些实施例中，为了使智能手机中的传感器获取稳定的信息，竖直轴方向的平衡也是必要的。
图20是在智能手机中调整第二类型传感器1240的竖直位移的示例性方法2000的流程图。在一些实施例中,该方法可以由处理器210执行,根据智能机器人控制模块330产生的控制参数,控制如图11所示的动态Z缓冲杆1120。
在步骤2010,处理器210可以获取马达沿着旋转轴的第一位移。在一些实施例中,旋转轴可以为Z轴,第一位移可以为沿着Z轴的向量。
在步骤2020,处理器210可以确定马达沿着Z轴的位移是否大于一个阈值。在一些实施例中,该阈值可以为一个极限值,在该极限值内第二类型传感器1240能够稳定地获取信息。
在步骤2030,当马达的位移大于一个阈值时,处理器210可以产生第一控制信号以使该马达移动到一个初始位置。在一些实施例中,初始位置可以是一个适合获取信息的预设位置。
在步骤2040,处理器210可以输出第一控制信号给马达,使安装在智能手机中的第二类型传感器1240退回到初始位置以获取稳定的信息。
在步骤2050,当马达的位移不大于一个阈值时,处理器210可以获取沿着旋转轴的第一加速度。在一些实施例中,该加速度可以通过安装在智能手机中的加速度计820获得。
在步骤2060，处理器210可以根据第一加速度产生第二加速度。在一些实施例中，第二加速度可以是对第一加速度进行滤波后得到的加速度。
在步骤2070,处理器210可以根据第二加速度确定第二位移。在一些实施例中,第二位移可以根据第二加速度的积分值计算得到。在一些实施例中,第二位移可以是沿着Z轴的向量。
在步骤2080,处理器210可以根据第二位移产生第二控制信号以控制马达的移动。在一些实施例中,第二控制信号可以根据第二位移和阈值确定一个位移的剩余间隙(剩余的可活动范围),然后处理器210可以控制智能手机中的传感器沿着Z轴移动。
在步骤2090，处理器210可以输出第二控制信号给马达。
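仅仅作为示例，以下Python代码草图示意了方法2000的控制逻辑：当马达沿Z轴的第一位移超过阈值时输出回到初始位置的第一控制信号，否则对第一加速度进行滤波、积分得到第二位移，并据此生成第二控制信号。滤波方式、积分方式与控制信号的形式均为说明性假设。

```python
def z_buffer_control_step(motor_z, accel_z, state, dt,
                          limit=0.05, filter_alpha=0.2):
    """对动态Z-缓冲杆执行一步控制（简化示意）。

    motor_z: 马达沿Z轴的第一位移；accel_z: 沿Z轴的第一加速度。
    state: 保存滤波后加速度与速度的字典，例如 {"accel": 0.0, "velocity": 0.0}。
    返回 (控制信号字典, 更新后的state)。
    """
    if abs(motor_z) > limit:
        # 步骤2030/2040：位移超过阈值，输出回到初始位置的第一控制信号
        return {"command": "return_to_initial"}, {"accel": 0.0, "velocity": 0.0}

    # 步骤2060：对第一加速度做低通滤波得到第二加速度（滤波方式为假设）
    filtered = filter_alpha * accel_z + (1.0 - filter_alpha) * state["accel"]
    # 步骤2070：对第二加速度积分得到速度，再积分得到第二位移
    velocity = state["velocity"] + filtered * dt
    second_displacement = velocity * dt
    # 步骤2080：根据第二位移与剩余可活动范围生成第二控制信号
    remaining_gap = limit - abs(motor_z)
    move = max(-remaining_gap, min(remaining_gap, -second_displacement))
    return {"command": "move_z", "amount": move}, {"accel": filtered, "velocity": velocity}
```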
本发明采用了多个实施例进行描述和说明，可以理解的是，对于本领域技术人员来说，可以对其形式和细节等进行多种修改而不脱离本发明的精神和保护范围，正如随附的权利要求及其等同描述中所明确的那样。

Claims (14)

  1. 一种智能轮椅系统,包括:
    运动模块,所述运动模块包括轮子、载体和第一类型传感器;
    云台,所述云台包括第二类型传感器;
    处理器,所述处理器包括分析模块、导航模块和控制模块,
    所述处理器被配置为:
    分别与所述云台和所述运动模块建立通信;
    分别从所述第二类型传感器和所述第一类型传感器中获取信息;
    确定所述智能轮椅系统的目的地和位置;
    根据所述信息构建地图;
    根据所述地图为所述智能轮椅系统规划路径;
    根据所述路径、所述信息和人工智能方法为所述智能轮椅系统确定控制参数；以及
    根据所述控制参数控制所述智能轮椅系统的移动和姿态。
  2. 如权利要求1所述的智能轮椅系统,其中所述处理器分别采用应用程序接口与所述云台和所述运动模块通信。
  3. 如权利要求1所述的智能轮椅系统,其中所述处理器进一步被配置为:
    获取图像数据;
    根据所述图像数据确定至少一个包含像素点的参考帧;
    根据所述图像数据对应的所述参考帧确定深度信息和位移信息;以及
    根据所述至少一个参考帧、所述深度信息和所述位移信息生成地图。
  4. 如权利要求3所述的智能轮椅系统,其中所述处理器进一步被配置为:
    获取至少包括第一帧和第二帧的多个帧;
    确定所述第一帧为第一参考帧,所述第二帧为第一备选帧;
    确定与所述第一备选帧中的至少一个第二像素点对应的所述第一参考帧中的至少一个第一像素点;
    确定所述第一参考帧和所述第一备选帧的深度信息、强度信息和/或位移信息;
    当所述第一备选帧是最后一帧时,输出所述第一参考帧、所述深度信息、所述强度信息和所述位移信息;
    当所述第一备选帧不是最后一帧时,根据所述第一参考帧和所述第一备选帧的所述强度信息,确定所述第一参考帧和所述第一备选帧之间的区别;
    当所述第一参考帧和所述第一备选帧之间的区别大于一个阈值时,确定所述第一备选帧为第二参考帧,确定后一帧为第二备选帧;
    当所述第一参考帧和所述第一备选帧之间的区别不大于一个阈值时,确定后一帧为第二备选帧;以及
    获取所有的参考帧和所有参考帧对应的深度信息和位移信息。
  5. 如权利要求4所述的智能轮椅系统,其中所述处理器进一步被配置为:
    根据所述至少一个第一像素点和/或所述至少一个第二像素点获取初始深度信息;
    根据位移初值和/或所述初始深度信息确定图像传感器的初始位移;
    根据所述至少一个第一像素点、所述至少一个第二像素点和/或所述图像传感器的所述初始位移,确定更新后的深度信息;以及
    根据所述位移初值和/或所述更新后的深度信息确定所述图像传感器的更新后的位移。
  6. 如权利要求5所述的智能轮椅系统,其中所述处理器进一步被配置为:
    根据所述图像数据获取与所述轮子相关的第一位移;
    获取与所述轮子相关的图像传感器的第二位移;
    根据所述第一位移和所述第二位移获取所述图像传感器的第三位移;以及
    将所述第三位移设置为确定初始位移的所述位移初值。
  7. 如权利要求4所述的智能轮椅系统,其中所述处理器进一步被配置为:
    利用聚类算法确定所述备选帧中的至少一个第二像素点,和/或,所述参考帧中的至少一个第一像素点。
  8. 一种用于控制智能轮椅的方法，所述智能轮椅包括至少一个处理器，一个云台，和一个运动模块，所述方法包括：
    在所述处理器和所述云台之间、所述处理器和所述运动模块之间建立通信;
    通过所述处理器分别获取所述云台和所述运动模块中的一个或多个传感器的信息;
    通过所述处理器确定所述智能轮椅的目的地和位置;
    通过所述处理器,根据所述信息获取地图;
    通过所述处理器,根据所述地图规划从所述智能轮椅的所述位置到所述目的地的路径;
    根据所述路径和所述信息确定所述运动模块和所述云台的控制参数;以及
    根据所述控制参数控制所述智能轮椅的移动和姿态。
  9. 如权利要求8所述的方法,其中所述处理器分别采用应用程序接口与所述云台和所述运动模块通信。
  10. 如权利要求8所述的方法,进一步包括:
    获取图像数据;
    根据所述图像数据确定至少一个包含像素点的参考帧;
    根据所述图像数据对应的所述参考帧确定深度信息和位移信息;以及
    根据所述至少一个参考帧、所述深度信息和所述位移信息构建地图。
  11. 如权利要求10所述的方法,进一步包括:
    获取至少包括第一帧和第二帧的多帧;
    确定所述第一帧为第一参考帧,所述第二帧为第一备选帧;
    确定与所述第一备选帧中的至少一个第二像素点对应的所述第一参考帧中的至少一个第一像素点;
    确定所述第一参考帧和所述第一备选帧的深度信息、强度信息和/或位移信息;
    当所述第一备选帧是最后一帧时,输出所述第一参考帧、所述深度信息、所述强度信息和所述位移信息;
    当所述第一备选帧不是最后一帧时,根据所述第一参考帧和所述第一备选帧的所述强度信息,确定所述第一参考帧和所述第一备选帧之间的区别;
    当所述第一参考帧和所述第一备选帧之间的区别大于一个阈值时，确定所述第一备选帧为第二参考帧，确定后一帧为第二备选帧；
    当所述第一参考帧和所述第一备选帧之间的区别不大于一个阈值时,确定后一帧为第二备选帧;以及
    获取所有的参考帧和所有参考帧对应的深度信息和位移信息。
  12. 如权利要求11所述的方法,进一步包括:
    根据所述至少一个第一像素点和/或所述至少一个第二像素点获取初始深度信息；
    根据位移初值和/或所述初始深度信息确定图像传感器的初始位移;
    根据所述至少一个第一像素点、所述至少一个第二像素点和/或所述图像传感器的所述初始位移,确定更新后的深度信息;以及
    根据所述位移初值和/或所述更新后的深度信息确定所述图像传感器的更新后的位移。
  13. 如权利要求12所述的方法,进一步包括:
    根据所述图像数据获取与所述轮子相关的第一位移;
    获取与所述轮子相关的图像传感器的第二位移;
    根据所述第一位移和所述第二位移获取所述图像传感器的第三位移;以及
    将所述第三位移设置为确定初始位移的所述位移初值。
  14. 如权利要求11所述的方法,进一步包括:
    通过所述处理器,利用聚类算法确定所述备选帧中的至少一个第二像素点,和/或,所述参考帧中的至少一个第一像素点。
PCT/CN2017/072101 2017-01-22 2017-01-22 一种基于大数据及人工智能的智能轮椅系统 WO2018133074A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2017/072101 WO2018133074A1 (zh) 2017-01-22 2017-01-22 一种基于大数据及人工智能的智能轮椅系统
CN201780082879.0A CN110177532A (zh) 2017-01-22 2017-01-22 一种基于大数据及人工智能的智能轮椅系统
US16/477,178 US20190369631A1 (en) 2017-01-22 2017-01-22 Intelligent wheelchair system based on big data and artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/072101 WO2018133074A1 (zh) 2017-01-22 2017-01-22 一种基于大数据及人工智能的智能轮椅系统

Publications (1)

Publication Number Publication Date
WO2018133074A1 true WO2018133074A1 (zh) 2018-07-26

Family

ID=62907570

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/072101 WO2018133074A1 (zh) 2017-01-22 2017-01-22 一种基于大数据及人工智能的智能轮椅系统

Country Status (3)

Country Link
US (1) US20190369631A1 (zh)
CN (1) CN110177532A (zh)
WO (1) WO2018133074A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220326020A1 (en) * 2021-04-08 2022-10-13 Haier Us Appliance Solutions, Inc. Household appliances navigation system
CN115861320B (zh) * 2023-02-28 2023-05-12 天津中德应用技术大学 一种汽车零件加工信息智能检测方法
CN117891262B (zh) * 2024-03-18 2024-05-31 山东乐宁医疗科技有限公司 一种具有智能机器人配合转移车使用的联动系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004321722A (ja) * 2003-04-22 2004-11-18 Mizukoshi Keiki Kk 電動車椅子
CN101190158A (zh) * 2006-11-29 2008-06-04 上海电气集团股份有限公司 智能轮椅
CN102188311A (zh) * 2010-12-09 2011-09-21 南昌大学 一种嵌入式智能轮椅视觉导航控制系统及方法
CN102323819A (zh) * 2011-07-26 2012-01-18 重庆邮电大学 一种基于协调控制的智能轮椅室外导航方法
CN102631265A (zh) * 2012-05-11 2012-08-15 重庆大学 一种智能轮椅的嵌入式控制系统
CN105681747A (zh) * 2015-12-10 2016-06-15 北京理工大学 一种远程呈现交互操作轮椅

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103885443B (zh) * 2012-12-20 2017-02-08 联想(北京)有限公司 用于即时定位与地图构建单元的设备、系统和方法
CN103996187B (zh) * 2014-04-29 2017-04-19 南京航空航天大学 对地运动目标光电检测系统及其数据处理方法和图像处理方法
CN104161629A (zh) * 2014-06-27 2014-11-26 西安交通大学苏州研究院 智能轮椅
CN105825520A (zh) * 2015-01-08 2016-08-03 北京雷动云合智能技术有限公司 一种可创建大规模地图的单眼slam方法
KR101583723B1 (ko) * 2015-01-16 2016-01-08 단국대학교 산학협력단 Bim 디지털 모델과 건설 현장의 양방향 동기화 시스템
JP6269546B2 (ja) * 2015-03-23 2018-01-31 トヨタ自動車株式会社 自動運転装置
CN105809687B (zh) * 2016-03-08 2019-09-27 清华大学 一种基于图像中边沿点信息的单目视觉测程方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004321722A (ja) * 2003-04-22 2004-11-18 Mizukoshi Keiki Kk 電動車椅子
CN101190158A (zh) * 2006-11-29 2008-06-04 上海电气集团股份有限公司 智能轮椅
CN102188311A (zh) * 2010-12-09 2011-09-21 南昌大学 一种嵌入式智能轮椅视觉导航控制系统及方法
CN102323819A (zh) * 2011-07-26 2012-01-18 重庆邮电大学 一种基于协调控制的智能轮椅室外导航方法
CN102631265A (zh) * 2012-05-11 2012-08-15 重庆大学 一种智能轮椅的嵌入式控制系统
CN105681747A (zh) * 2015-12-10 2016-06-15 北京理工大学 一种远程呈现交互操作轮椅

Also Published As

Publication number Publication date
CN110177532A (zh) 2019-08-27
US20190369631A1 (en) 2019-12-05

Similar Documents

Publication Publication Date Title
Huang et al. Visual odometry and mapping for autonomous flight using an RGB-D camera
CN110068335B (zh) 一种gps拒止环境下无人机集群实时定位方法及系统
US10437252B1 (en) High-precision multi-layer visual and semantic map for autonomous driving
US10794710B1 (en) High-precision multi-layer visual and semantic map by autonomous units
Zhang et al. Low-drift and real-time lidar odometry and mapping
CN112740268B (zh) 目标检测方法和装置
WO2020103108A1 (zh) 一种语义生成方法、设备、飞行器及存储介质
WO2018133077A1 (zh) 一种智能轮椅的环境信息收集与反馈系统及方法
WO2018133075A1 (zh) 一种具有医疗监测及反应功能的智能轮椅系统
WO2021081774A1 (zh) 一种参数优化方法、装置及控制设备、飞行器
Eynard et al. Real time UAV altitude, attitude and motion estimation from hybrid stereovision
CN108074251A (zh) 基于单目视觉的移动机器人导航控制方法
WO2018133074A1 (zh) 一种基于大数据及人工智能的智能轮椅系统
CN115328153A (zh) 传感器数据处理方法、系统及可读存储介质
Xanthidis et al. Aquavis: A perception-aware autonomous navigation framework for underwater vehicles
CN114077249B (zh) 一种作业方法、作业设备、装置、存储介质
CN113405547B (zh) 一种基于语义vslam的无人机导航方法
WO2018133076A1 (zh) 一种智能轮椅的机械传动控制方法与系统
Liu et al. Semi-dense visual-inertial odometry and mapping for computationally constrained platforms
Rostum et al. A review of using visual odometery methods in autonomous UAV Navigation in GPS-Denied Environment
WO2018133073A1 (en) Systems and methods for controlling intelligent wheelchair
WO2021210492A1 (ja) 情報処理装置、情報処理方法、およびプログラム
JP7536442B2 (ja) 情報処理装置、情報処理方法およびプログラム
Li et al. A homography-based visual inertial fusion method for robust sensing of a Micro Aerial Vehicle
WO2022071315A1 (ja) 自律移動体制御装置、自律移動体制御方法及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17892754

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17892754

Country of ref document: EP

Kind code of ref document: A1