US20210201509A1 - Monitoring method and device for mobile target, monitoring system and mobile robot - Google Patents

Monitoring method and device for mobile target, monitoring system and mobile robot

Info

Publication number
US20210201509A1
Authority
US
United States
Prior art keywords
target
mobile
image
frame images
mobile robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/184,833
Inventor
Yuwei Cui
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ankobot Shanghai Smart Technologies Co Ltd
Ankobot Shenzhen Smart Technologies Co Ltd
Original Assignee
Ankobot Shanghai Smart Technologies Co Ltd
Ankobot Shenzhen Smart Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ankobot Shanghai Smart Technologies Co Ltd, Ankobot Shenzhen Smart Technologies Co Ltd filed Critical Ankobot Shanghai Smart Technologies Co Ltd
Priority to US17/184,833
Assigned to ANKOBOT (SHANGHAI) SMART TECHNOLOGIES CO., LTD., ANKOBOT (SHENZHEN) SMART TECHNOLOGIES CO., LTD. reassignment ANKOBOT (SHANGHAI) SMART TECHNOLOGIES CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CUI, YUWEI
Publication of US20210201509A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • G06K9/4604
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/008Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/285Analysis of motion using a sequence of stereo image pairs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/48Matching video sequences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • the present application relates to the field of intelligent mobile robots, in particular to a monitoring method and device for a mobile target, a monitoring system and a mobile robot.
  • an objective of the present application is to provide a monitoring method and device for a mobile target, a monitoring system and a mobile robot, so as to solve the problem in the prior art that a mobile target cannot be detected effectively and accurately during the movement of a robot.
  • the present application provides a monitoring device for a mobile target.
  • the monitoring device is used in a mobile robot, and the mobile robot comprises a movement device and an image acquisition device. The monitoring device for a mobile target comprises: at least one processing device; at least one storage device, configured to store images captured by the image acquisition device under an operating state of the movement device; and at least one program, wherein the at least one program is stored in the at least one storage device and is invoked by the at least one processing device such that the monitoring device performs a monitoring method for a mobile target. The monitoring method for a mobile target comprises the following steps: acquiring multiple-frame images captured by the image acquisition device under the operating state of the movement device; and outputting monitoring information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; wherein the at least two-frame images are images captured by the image acquisition device within a partially overlapped field of view, and a position of the mobile target in each of the at least two-frame images has an attribute of indefinite change.
  • the step of performing comparison between at least two-frame images selected from the multiple-frame images comprises the following steps: detecting a suspected target based on the comparison between the at least two-frame images; and tracking the suspected target to determine the mobile target.
  • the step of detecting a suspected target according to the comparison between the at least two-frame images comprises the following steps: performing image compensation on the at least two-frame images based on movement information of the movement device within a time period between the at least two-frame images; and performing subtraction processing on the compensated at least two-frame images to form a difference image, and detecting the suspected target from the difference image.
  • the step of performing comparison between at least two-frame images selected from the multiple-frame images comprises the following steps: detecting a suspected target based on a matching operation on corresponding feature information in the at least two-frame images; and tracking the suspected target to determine the mobile target.
  • the step of detecting a suspected target based on a matching operation on corresponding feature information in the at least two-frame images comprises the following steps: extracting feature points in the at least two-frame images respectively, and matching each extracted feature point in the at least two-frame images with a reference three-dimensional coordinate system, wherein the reference three-dimensional coordinate system is formed through performing three-dimensional modeling on a mobile space, and the reference three-dimensional coordinate system is marked with the coordinates of each feature point on all static targets in the mobile space; and detecting a feature point set as the suspected target, where the feature point set is composed of feature points in the at least two-frame images that are not matched with the reference three-dimensional coordinate system. A hedged sketch of this idea follows below.
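As a non-authoritative illustration of the feature-matching variant above: the sketch assumes OpenCV ORB features and a precomputed descriptor matrix (`reference_descriptors`, an assumed name) holding the descriptors of the feature points of the static targets marked in the reference three-dimensional coordinate system; feature points that find no reliable match are collected as the suspected target.

```python
import cv2

def detect_unmatched_features(frame, reference_descriptors, ratio=0.75):
    """Return image coordinates of feature points in `frame` that match no
    descriptor of the static targets registered in the reference 3-D
    coordinate system; the unmatched set is the suspected mobile target."""
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(frame, None)
    if descriptors is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Two nearest reference descriptors per frame descriptor (Lowe's ratio test).
    knn = matcher.knnMatch(descriptors, reference_descriptors, k=2)
    unmatched = []
    for kp, pair in zip(keypoints, knn):
        if len(pair) < 2 or pair[0].distance >= ratio * pair[1].distance:
            unmatched.append(kp.pt)  # no reliable static-target match
    return unmatched
```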
  • the step of tracking the suspected target to determine the mobile target comprises the following steps: obtaining a moving track of a suspected target through tracking the suspected target; and determining the suspected target as the mobile target when the moving track of the suspected target is continuous.
  • the monitoring method further comprises the following steps: performing object recognition on the mobile target in the captured images, wherein the object recognition is performed by an object recognizer, and the object recognizer includes a trained neural network; and outputting the monitoring information according to a result of the object recognition.
  • the monitoring method further comprises the following steps: uploading captured images or videos containing the images to a cloud server to perform object recognition on the mobile target in the images, wherein the cloud server includes an object recognizer which includes trained neural networks; and receiving a result of the object recognition from the cloud server and outputting the monitoring information. A sketch of the recognition step is given below.
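The patent does not fix a network architecture for the object recognizer, so the following sketch stands in a generic pretrained classifier (torchvision's MobileNetV3) for the trained neural network; the crop box of the mobile target is assumed to come from the detection step.

```python
import torch
from torchvision import models
from PIL import Image

# Stand-in recognizer: any pretrained classifier illustrates the step.
weights = models.MobileNet_V3_Small_Weights.DEFAULT
recognizer = models.mobilenet_v3_small(weights=weights).eval()
preprocess = weights.transforms()

def recognize_mobile_target(image: Image.Image, box):
    """Classify the region of `image` containing the detected mobile target.
    `box` is (left, top, right, bottom) in pixels from the detection step."""
    crop = image.crop(box)
    batch = preprocess(crop).unsqueeze(0)
    with torch.no_grad():
        logits = recognizer(batch)
    return weights.meta["categories"][int(logits.argmax(dim=1))]
```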
  • the monitoring information comprises one or more of image information, video information, audio information and text information.
  • the present application provides a mobile robot.
  • the mobile robot comprises a movement device, configured to control movement of the mobile robot according to a received control instruction; an image acquisition device, configured to capture multiple-frame images under an operating state of the movement device; and the monitoring device mentioned above.
  • the present application provides a monitoring system.
  • the monitoring system comprises: a cloud server; and a mobile robot connected with the cloud server. In one case, the mobile robot performs the following steps: acquiring multiple-frame images during movement of the mobile robot; outputting detection information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; uploading the captured images or videos containing the images to the cloud server based on the detection information; and outputting monitoring information based on a result of object recognition received from the cloud server. In the other case, the mobile robot performs the following steps: acquiring multiple-frame images during movement of the mobile robot, and uploading the multiple-frame images to the cloud server; and the cloud server performs the following steps: outputting detection information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; and outputting an object recognition result of the mobile target to the mobile robot according to a result of performing object recognition on the mobile target.
  • the monitoring method and device for a mobile target, the monitoring system and the mobile robot of the present application have the following beneficial effects: multiple-frame images captured by an image acquisition device are acquired while the robot moves in a monitored region; at least two-frame images with an overlapped region are selected from the multiple-frame images; the selected images are compared by an image compensation method or a feature matching method; and monitoring information containing a mobile target which moves relative to a static target is output based on the result of the comparison, wherein the position of the mobile target in each of the at least two-frame images has an attribute of indefinite change. Through this technical solution, the mobile target in the monitored region can be recognized precisely during movement of the mobile robot, and monitoring information about the mobile target can be generated to give a corresponding prompt, thereby effectively ensuring the safety of the monitored region.
  • FIG. 1 shows a flow diagram of a monitoring method for a mobile target of the present application in one embodiment.
  • FIG. 2 shows image diagrams of selected two-frame images in one embodiment of the present application.
  • FIG. 3 shows image diagrams of selected two-frame images in another embodiment of the present application.
  • FIG. 4 shows a flow diagram of performing comparison between at least two-frame images selected from multiple-frame images in one embodiment of the present application.
  • FIG. 5 shows a flow diagram of detecting a suspected target based on comparison between at least two-frame images in one embodiment of the present application.
  • FIG. 6 shows an image diagram of a first frame image selected in one embodiment of the present application.
  • FIG. 7 shows an image diagram of a second frame image selected in one embodiment of the present application.
  • FIG. 8 shows a flow diagram of tracking a suspected target to determine that the suspected target is a mobile target in one embodiment of the present application.
  • FIG. 9 shows a flow diagram of performing comparison between at least two-frame images selected from multiple-frame images in one embodiment of the present application.
  • FIG. 10 shows a flow diagram of detecting a suspected target based on a matching operation on corresponding feature information in at least two-frame images in one embodiment of the present application.
  • FIG. 11 shows a flow diagram of tracking a suspected target to determine a mobile target in one embodiment of the present application.
  • FIG. 12 shows a flow diagram of object recognition in one embodiment of the present application.
  • FIG. 13 shows a flow diagram of object recognition in another embodiment of the present application.
  • FIG. 14 shows a structural schematic diagram of a monitoring device for a mobile target used in a mobile robot of the present application in one embodiment.
  • FIG. 15 shows a structural schematic diagram of a mobile robot of the present application in one embodiment.
  • FIG. 16 shows a composition diagram of a monitoring system of the present application in one embodiment.
  • A, B or C indicates “any of the following: A; B; C; A and B; A and C; B and C; A, B and C”. An exception to this definition exists only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
  • the existing manners for anti-theft mainly include protection by means of persons and protection by means of objects (for example, anti-theft doors, iron barriers, etc.).
  • the following security measures are also used at home: installing infrared anti-theft alarm devices, installing electromagnetic password locks, or installing monitoring cameras.
  • the above anti-theft measures are fixed and obvious, and those who break in illegally may easily evade the monitoring of these anti-theft devices; therefore, reliable and effective security cannot be provided.
  • Mobile robots perform mobile operations based on navigation control technology. When a mobile robot is at an unknown location in an unknown environment, it builds a map and performs navigation operations based on VSLAM (Visual Simultaneous Localization and Mapping) technology. Specifically, the mobile robot constructs the map from visual information provided by visual sensors and movement information provided by position measurement devices, and navigates and moves independently based on the map.
  • the visual sensors include, for example, a camera device.
  • the position measurement devices for example include speed sensor, odometer sensor, distance sensor, cliff sensor, etc.
  • the mobile robot moves on a plane (i.e. the moving plane), acquiring and storing images in which entity objects are projected onto the moving plane.
  • the image acquisition device captures entity objects in the field of view at the location of the mobile robot and projects them to the moving plane, so as to obtain the projection images.
  • entity objects include, for example, a TV set, an air conditioner, a chair, shoes, a leather ball, etc.
  • the mobile robot determines the current position by the position information provided by the position measurement device, and also by identifying image features contained in the images captured by the image acquisition device, through comparing image features captured at the current position with image features stored in the map.
  • the mobile robot is, for example, a security robot. After the security robot is started, a region where security protection is needed can be traversed according to a determined or random route. For the existing security robot, all the captured images are generally uploaded to a monitoring center, and a suspected object in the captured image cannot be prompted based on the situation, therefore, the security robot is not intelligent enough.
  • the present application provides a monitoring method for a mobile target used in a mobile robot.
  • the mobile robot comprises a movement device and an image acquisition device, and the monitoring method can be performed by a processing device contained in the mobile robot.
  • the processing device is an electronic device which is capable of performing numeric calculation, logical calculation and data analysis, and the processing device includes but is not limited to: CPU, GPU and FPGA, and volatile memory configured to temporarily store intermediate data generated during calculation.
  • monitoring information containing a mobile target which moves relative to a static target is output through comparing at least two-frame images selected from multiple-frame images captured by an image acquisition device, wherein the multiple-frame images are captured by the image acquisition device under an operating state of the movement device.
  • the static target for example includes but is not limited to: ball, shoe, wall, flowerpot, cloth and hat, roof, lamp, tree, table, chair, refrigerator, television, sofa, sock, tiled object, and cup.
  • the tiled object includes but is not limited to ground mat or floor tile map paved on the floor, and tapestry and picture hung on a wall.
  • the mobile robot can be for example a specific security robot, and the security robot monitors a monitored region according to the monitoring method of a mobile target in the present application. In some other embodiments, the mobile robot can also be other mobile robot which contains a module configured to perform the monitoring method of a mobile target in the present application.
  • the other mobile robot for example is a cleaning robot, a mobile robot accompanying family members or a robot for cleaning glass.
  • when the mobile robot is a cleaning robot, it can traverse the whole to-be-cleaned region according to a map constructed in advance using the VSLAM technique in combination with the image acquisition device of the cleaning robot.
  • when the cleaning robot starts cleaning operations, a module of the cleaning robot which carries the monitoring method for a mobile target of the present application is started at the same time, thereby monitoring security while cleaning.
  • the movement device for example includes wheels and drivers of the wheels, wherein the driver can be for example a motor.
  • the movement device is used to drive the robot to move back and forth in a reciprocating manner, move in a rotational manner or move in a curvilinear manner according to a planned moving track, or to drive the mobile robot to adjust a pose.
  • the mobile robot at least includes an image acquisition device.
  • the image acquisition device captures images within a field of view at a position where the mobile robot is located.
  • a mobile robot includes an image acquisition device which is arranged on the top, shoulder or back of the mobile robot, and the principal optic axis of the image acquisition device is perpendicular to the moving plane of the mobile robot, or the principal optic axis is consistent with the travelling direction of the mobile robot.
  • the principal optic axis can also be set to form a certain angle (for example, an angle between 50° and 86°) with the moving plane on which the mobile robot is located, to acquire a greater image acquisition range.
  • the principal optic axis of the image acquisition device can also be set in many other ways, for example, the image acquisition device can rotate according to a certain rule or rotate randomly, in this case, an angle between the optic axis of the image acquisition device and a travelling direction of the mobile robot is constantly changed, therefore, installation manners of the image acquisition device and states of the principal optic axis of the image acquisition device are not limited to what are enumerated in the present embodiment.
  • the mobile robot includes two or more image acquisition devices, for example, a binocular image acquisition device or multiple image acquisition devices.
  • the principal optic axis of one image acquisition device is perpendicular to the moving plane of the mobile robot, or the principal optic axis is consistent with the travelling direction of the mobile robot.
  • the principal optic axis can also be set to form a certain angle with the moving plane, so as to acquire a greater image acquisition range.
  • the principal optic axis of the image acquisition device can also be set in many other ways, for example, the image acquisition device can rotate according to a certain rule or rotate randomly, in this case, an angle between the optic axis of the image acquisition device and a travelling direction of the mobile robot is constantly changed, therefore, installation manners of the image acquisition device and states of the principal optic axis of the image acquisition device are not limited to what are enumerated in the present embodiment.
  • the image acquisition device includes but is not limited to: fisheye camera module, wide angle (or non-wide angle) camera module, depth camera module, camera module integrated with an optical system or CCD chip, and camera module integrated with an optical system and CMOS chip.
  • the power supply system of the image acquisition device can be controlled by the power supply system of the mobile robot, and during the period that the mobile robot is powered on and moves, the image acquisition device begins to capture images.
  • FIG. 1 shows a flow diagram of the monitoring method of a mobile target of the present application in one embodiment.
  • the monitoring method of the mobile target includes the following steps:
  • Step S100: multiple-frame images are captured by an image acquisition device under an operating state of the movement device.
  • a movement device of the mobile robot can include a travelling mechanism and a travelling drive mechanism, wherein the travelling mechanism can be arranged at a bottom of the robot body, and the travelling drive mechanism is arranged inside the robot body.
  • the travelling mechanism can for example include a combination of two straight-going walking wheels and at least one auxiliary steering wheel, wherein the two straight-going walking wheels are respectively arranged at two opposite sides at a bottom of the robot body, and the two straight-going walking wheels can be independently driven by two corresponding travelling drive mechanisms respectively, that is, a left straight-going walking wheel is driven by a left travelling drive mechanism, while a right straight-going walking wheel is driven by a right travelling drive mechanism.
  • the universal walking wheel or the straight-going walking wheel can be provided with a bias drop suspension system which is fixed in a movable manner, for example, the bias drop suspension system can be installed on a robot body in a rotatable manner and receives spring bias which is downwards and away from the robot body.
  • the spring bias enables the universal walking wheel or the straight-going walking wheel to maintain contact and traction with the ground with a certain landing force.
  • the two straight-going walking wheels are mainly used for going forward and backward, while when the at least one auxiliary steering wheel participates and matches with the two straight-going walking wheels, movements such as steering and rotating can be realized.
  • the travelling drive mechanism can include a drive motor and a control circuit configured to control the drive motor, and the drive motor can be used to drive the walking wheels in the travelling mechanism to move.
  • the drive motor can be for example a reversible drive motor, and a gear shift mechanism can be further arranged between the drive motor and the axle of a walking wheel.
  • the travelling drive mechanism can be installed on the robot body in a detachable manner, thereby facilitating disassembly and maintenance.
  • the mobile robot which is a cleaning robot captures multiple-frame images while moving; in other words, in step S100, the processing device acquires the multiple-frame images captured by the image acquisition device under an operating state of the movement device.
  • the multiple-frame images can be for example multiple-frame images acquired in a continuous time period, or multiple-frame images acquired within two or more discontinuous time periods.
  • Step S200: monitoring information containing a mobile target which moves relative to a static target is output according to a result of performing comparison between at least two-frame images selected from the multiple-frame images, wherein the at least two-frame images are images captured by the image acquisition device within a partially overlapped field of view.
  • a processing device of the mobile robot outputs monitoring information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images.
  • the static target for example includes but is not limited to: ball, shoe, wall, flowerpot, cloth and hat, roof, lamp, tree, table, chair, refrigerator, television, sofa, sock, tiled object, and cup.
  • the tiled object includes but is not limited to ground mat or floor tile map paved on the floor, and tapestry and picture hung on a wall.
  • the two-frame images selected by the processing device are images captured by the image acquisition device in a partially overlapped field of view, that is, the processing device determines to select a first frame image and a second frame image on the basis that the two-frame images contain an image overlapped region, and the overlapped field of view contains a static target, so as to monitor a mobile target which moves relative to the static target in the overlapped field of view.
  • the proportion of the image overlapped region in the first frame image and in the second frame image can also be set; for example, the proportion of the image overlapped region in the first frame image and in the second frame image is respectively at least 50% (the proportions are not limited to this value, and different proportions can be set for the first frame image and the second frame image depending on the situation).
  • the selection of the first frame image and the second frame image should be continuous to some extent: while ensuring that the first frame image and the second frame image share a certain proportion of the image overlapped region, the continuity of the moving track of the mobile target can then be judged based on the acquired images. A small sketch of the overlap check is given below.
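For the simple case of a pure translation between the two captures, the shared proportion can be estimated directly from the pixel shift; a minimal sketch under that assumption (the 50% threshold mirrors the example above):

```python
def overlap_ratio(shift_px, frame_size):
    """Fraction of the frame shared by two captures related by a pure
    translation of (dx, dy) pixels."""
    dx, dy = shift_px
    w, h = frame_size
    return max(0.0, 1.0 - abs(dx) / w) * max(0.0, 1.0 - abs(dy) / h)

def acceptable_pair(shift_px, frame_size, min_ratio=0.5):
    """Accept a first/second frame pair only if they overlap enough."""
    return overlap_ratio(shift_px, frame_size) >= min_ratio

# e.g. a 640x480 pair shifted by (128, 0) pixels still overlaps by 80%.
```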
  • the processing device respectively selects a first frame image and a second frame image at a first position and a second position, wherein, the image acquisition device has an overlapped field of view at the first position and the second position.
  • the image acquisition device can capture videos. Since videos are composed of image frames, during the movement of the mobile robot the processing device can continuously or discontinuously collect image frames from the acquired videos to obtain multiple-frame images, and select the first frame image and the second frame image according to a preset number of frame intervals such that the two-frame images have a partially overlapped region; the processing device then performs image comparison between the selected two-frame images.
  • the processing device can preset the time interval at which the image acquisition device captures images, and acquire multiple-frame images captured by the image acquisition device at different times.
  • the time interval should be smaller than the time taken by the mobile robot to move across one field of view, to ensure that the two-frame images selected from the multiple-frame images have a partially overlapped part (see the arithmetic sketch below).
  • the mobile robot captures images within the field of view of the image acquisition device at a preset time interval, and then the processing device acquires images, and selects two of the images as a first frame image and a second frame image, wherein the two-frame images have a partially overlapped part.
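To make the interval constraint concrete, a small arithmetic sketch with assumed numbers: for a downward-looking camera, the ground footprint of the field of view and the robot's speed bound the capture interval that still guarantees a partially overlapped part.

```python
import math

def max_capture_interval(height_m, fov_deg, speed_m_s, min_overlap=0.5):
    """Upper bound (seconds) on the interval between two captures so that
    the frames still share `min_overlap` of the field of view, assuming a
    downward-looking camera moving in a straight line."""
    footprint = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    max_shift = (1.0 - min_overlap) * footprint  # allowed ground shift
    return max_shift / speed_m_s

# Assumed numbers: camera 0.3 m above the floor, 120-degree field of view,
# robot speed 0.3 m/s -> frames must be captured less than ~1.7 s apart.
print(max_capture_interval(0.3, 120.0, 0.3))
```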
  • the time period can be represented by a time unit, or the time period can be represented by number of intervals of image frames.
  • the mobile robot is in communication with an intelligent terminal, and the intelligent terminal can modify the time period through a specific APP (application).
  • a modification interface of the time period is displayed on a touch screen of the intelligent terminal, and the time period is modified through a touch operation on the modification interface; or a time period modification instruction is directly sent to the mobile robot to modify the time period.
  • the time period modification instruction can be for example a voice containing modification instructions, the voice can be for example “the period is modified to be three seconds”.
  • the voice can be “the number of image frame intervals can be modified to be five”.
  • in step S200, the position of the mobile target in each of the at least two-frame images has an attribute of indefinite change.
  • the mobile robot moves, through the movement device, based on a map which is constructed in advance, and the image acquisition device captures multiple-frame images in the movement process.
  • the processing device selects two-frame images from the multiple-frame images for comparison, the selected two-frame images are respectively a first frame image and a second frame image according to the order of the image selection.
  • a corresponding position at which the mobile robot acquires the first frame image is the first position, while a corresponding position at which the mobile robot acquires the second frame image is a second position.
  • the two-frame images have an image overlapped region, and a static target exists in the overlapped field of view of the image acquisition device.
  • since the mobile robot is in a moving state, the position of the static target in the second frame image has changed definitely relative to the position of the static target in the first frame image. The definite change amplitude of the positions of the static target in the two-frame images is correlated with the movement information of the mobile robot at the first position and the second position, and the movement information can be, for example, the moving distance and pose change information of the mobile robot from the first position to the second position.
  • the mobile robot contains a position measurement device, which can be used to acquire movement information of the mobile robot, and relative position information between the first position and the second position can be measured according to the movement information.
  • the position measurement device includes but is not limited to a displacement sensor, a ranging sensor, a cliff sensor, an angle sensor, a gyroscope, a binocular image acquisition device and a speed sensor, which are arranged on a mobile robot.
  • the position measurement device constantly detects movement information and provides it to the processing device.
  • the displacement sensor, the gyroscope, the speed sensor and so on can be integrated into one or more chips.
  • the ranging sensor and the cliff sensor can be set at the side of the mobile robot. For example, the ranging sensor in a cleaning robot is set at the edge of the housing; and the cliff sensor in a cleaning robot is arranged at the bottom of the mobile robot.
  • the movement information acquired by the processing device includes but is not limited to displacement information, angle information, information about distance between the robot and an obstacle, velocity information and travelling direction information.
  • the position measurement device is a counting sensor arranged on a motor of the mobile robot: the number of rotations of the motor is counted so as to acquire the relative displacement of the mobile robot from the first position to the second position, and the angle through which the motor operates is used to acquire pose information, etc.
  • a mapping relationship between a unit grid length and actual displacement is determined in advance.
  • the number of grids that the mobile robot moves from a first position to a second position is determined based on the movement information obtained during movement of the mobile robot, and further relative position information between the first position and the second position is acquired.
  • the vector length that the mobile robot moves from the first position to the second position is determined according to movement information obtained during the movement of the mobile robot, and further relative position information of the two positions is obtained.
  • the vector length can be calculated in pixels. A sketch of this computation is given below.
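A sketch of this counting-sensor computation, with assumed encoder resolution, wheel diameter and image scale: motor rotation counts are converted to ground displacement, then to the pixel-vector length by which a static target should shift between the two frames.

```python
import math

TICKS_PER_REVOLUTION = 360   # assumed encoder resolution
WHEEL_DIAMETER_M = 0.07      # assumed wheel diameter
METERS_PER_PIXEL = 0.002     # assumed image scale of the camera

def displacement_from_ticks(ticks):
    """Ground displacement (metres) for a straight-line move."""
    return (ticks / TICKS_PER_REVOLUTION) * math.pi * WHEEL_DIAMETER_M

def expected_pixel_shift(ticks):
    """Pixel-vector length by which a *static* target should appear
    displaced between the first and second frame (the definite change)."""
    return displacement_from_ticks(ticks) / METERS_PER_PIXEL
```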
  • the position of the static target in the second frame image is shifted by a vector length which corresponds to the relative position information. Therefore, the movement of the static target captured in the second frame image relative to the static target captured in the first frame image can be determined based on the relative position information of the mobile robot, and thereby has an attribute of definite change.
  • the movement of the mobile target in the selected two-frame images in the overlapped field of view does not conform to the above attribute of definite change.
  • FIG. 2 shows image diagrams of selected two-frame images in one embodiment of the present application.
  • when the principal optic axis of the image acquisition device is set perpendicular to the moving plane, the plane on which a two-dimensional image captured by the image acquisition device is located is parallel with the moving plane of the mobile robot.
  • the position of an entity object in a projected image captured by the image acquisition device can be used to indicate the position where the entity object is projected onto the moving plane of the mobile robot, and the angle of the position of the entity object in a projected image relative to a travelling direction of the mobile robot is used to indicate the angle of the position at which the entity object is projected onto the moving plane of the mobile robot relative to the travelling direction of the mobile robot.
  • the selected two-frame images in FIG. 2 are respectively a first frame image and a second frame image, the corresponding position at which the mobile robot acquires the first frame image is the first position P1, and the corresponding position at which the mobile robot acquires the second frame image is the second position P2.
  • when the mobile robot moves from the first position P1 to the second position P2, only the distance changes and the pose does not change. Therefore, the relative position information of the mobile robot at the first position P1 and the second position P2 can be acquired merely by measuring the relative displacement between the first position P1 and the second position P2.
  • the two-frame images have an image overlapped region, and a static target O as shown in FIG. 2 exists in the overlapped field of view of the image acquisition device.
  • since the mobile robot is in a moving state, the position of the static target O in the second frame image is changed definitely relative to the position of the static target O in the first frame image, and the definite change amplitude of the position of the static target O in the two-frame images is correlated with the movement information of the mobile robot at the first position P1 and the second position P2; in the present embodiment, the movement information can be, for example, the movement distance when the mobile robot moves from the first position P1 to the second position P2.
  • the mobile robot includes a position measurement device, and movement information of the mobile robot is acquired by the position measurement device of the mobile robot.
  • the position measurement device measures a moving speed of the mobile robot, and calculates a relative displacement from the first position to the second position based on the moving speed and moving time.
  • the position measurement device is a GPS (Global Positioning System) device, and the relative position information between the first position P1 and the second position P2 is acquired according to the localization information of the GPS at the first position and the second position.
  • a projection of the static target O in the first frame image is a static target projection O1, and a projection of the static target O in the second frame image is a static target projection O2; it can be clearly seen from FIG. 2 that the movement of the static target captured in the second frame image relative to the static target captured in the first frame image can be determined based on the relative position information of the mobile robot, and thus has an attribute of definite change, while the movement of the mobile target in the overlapped field of view in the selected two-frame images does not conform to this attribute of definite change.
  • the position measurement device is a device which performs localization based on measured wireless signals, for example, the position measurement device is a bluetooth (or WiFi) localization device.
  • the position measurement device determines the positions of the first position P1 and the second position P2 relative to a preset wireless locating signal transmitting device by measuring the power of the wireless locating signals received at the first position P1 and the second position P2 respectively, whereby the relative position information between the first position P1 and the second position P2 is acquired. One common distance estimate for such signals is sketched below.
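The patent only states that the power of the received wireless locating signals is measured; one common way such a device turns power into distance is the log-distance path-loss model, sketched here with an assumed reference power and path-loss exponent.

```python
def distance_from_rssi(rssi_dbm, power_at_1m_dbm=-40.0, path_loss_exp=2.5):
    """Estimate the distance (metres) to the wireless locating signal
    transmitting device from the received signal power, using the
    log-distance path-loss model (all constants are assumptions)."""
    return 10.0 ** ((power_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exp))

# Measuring at P1 and P2 yields two distances to the transmitter, which
# constrain the relative position between P1 and P2.
```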
  • FIG. 3 shows image diagrams of selected two-frame images in one embodiment of the present application.
  • when the principal optic axis of the image acquisition device is set perpendicular to the moving plane, the plane on which a two-dimensional image captured by the image acquisition device is located is parallel with the moving plane of the mobile robot.
  • the position of an entity object in the projected image captured by the image acquisition device can be used to indicate the position where the entity object is projected onto the moving plane of the mobile robot, and the angle of the position of the entity object in the projected image relative to the travelling direction of the mobile robot is used to indicate the angle of the position at which the entity object is projected onto the moving plane of the mobile robot relative to the travelling direction of the mobile robot.
  • the selected two-frame images in FIG. 3 are respectively a first frame image and a second frame image, the corresponding position at which the mobile robot acquires the first frame image is the first position P1′, and the corresponding position at which the mobile robot acquires the second frame image is the second position P2′.
  • the relative position information of the mobile robot at the first position P1′ and the second position P2′ can be acquired by merely measuring the relative displacement between the first position P1′ and the second position P2′.
  • the two-frame images have an image overlapped region, and a mobile target Q as shown in FIG. 3 exists in the overlapped field of view of the image acquisition device.
  • the position of the mobile target Q in the second frame image is changed indefinitely relative to the position of the target Q in the first frame image; that is, the position change amplitude of the mobile target Q in the two-frame images has no correlation with the movement information of the mobile robot at the first position P1′ and the second position P2′, and the position change of the mobile target Q in the two-frame images cannot be figured out based on that movement information. In the present embodiment, the movement information can be, for example, the movement distance when the mobile robot moves from the first position P1′ to the second position P2′.
  • the mobile robot includes a position measurement device, and movement information of the mobile robot is acquired by the position measurement device of the mobile robot.
  • the position measurement device measures a moving speed of the mobile robot, and calculates a relative displacement from a first position to a second position based on the moving speed and moving time.
  • the position measurement device is a GPS device or a device which performs localization based on measured wireless signals, and the relative position information between the first position P1′ and the second position P2′ is acquired according to the localization information of the position measurement device at the first position and the second position. As shown in FIG. 3, a projection of the mobile target Q in the first frame image is a mobile target projection Q1, and the position of the mobile target Q in the second frame image is a mobile target projection Q2. If the target Q were static, its projection in the second frame image should be a projection Q2′; that is, the mobile target projection Q2′ is the image projection obtained after the mobile target projection Q1 is subjected to the definite change when the mobile robot moves from the first position P1′ to the second position P2′.
  • the position change of the mobile target Q in the two-frame images cannot be figured out according to movement information of the mobile robot at the first position P 1 ′ and the second position P 2 ′, and the mobile target Q has an attribute of indefinite change during the movement of the mobile robot.
  • FIG. 4 shows a flow diagram of performing comparison between at least two-frame images selected from multiple-frame images in one embodiment of the present application.
  • the step of performing comparison between at least two-frame images selected from the multiple-frame images further includes the following step S210 and step S220.
  • in step S210, the processing device detects a suspected target according to a comparison between the at least two-frame images.
  • the suspected target is a target with an attribute of indefinite change in the first frame image and the second frame image, and the suspected target moves relative to a static target within an image overlapped region of the first frame image and the second frame image.
  • FIG. 5 shows a flow diagram of detecting a suspected target according to comparison between at least two-frame images in one embodiment of the present application. That is, the step of detecting a suspected target according to comparison between the at least two-frame images is realized by step S211 and step S212 in FIG. 5.
  • in step S211, the processing device performs image compensation on the at least two-frame images based on movement information of the movement device within the time period between the at least two-frame images.
  • movement information is generated due to movement, herein, the movement information contains relative displacement and relative pose change of the mobile robot from the first position to the second position.
  • Movement information can be measured by the position measurement device. According to a proportional relationship between actual length and unit length in an image captured by the image acquisition device, the definite relative displacement of the position of the projected image of the static target within the image overlapped region of the second frame image and the first frame image is acquired; the relative pose change of the mobile robot is acquired through a pose detection device of the mobile robot; and image compensation is then performed on the first frame image or the second frame image based on the movement information. For example, the image compensation is performed on the first frame image according to the movement information, or the image compensation is performed on the second frame image according to the movement information. A sketch of the simplest case is given below.
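A minimal sketch of the compensation step for the simplest case of FIG. 2 (pure translation on the moving plane, no pose change); the pixel offset is assumed to come from the position measurement device via the image scale.

```python
import cv2
import numpy as np

def compensate(second_frame, shift_px):
    """Shift the second frame back by the definite change (dx, dy), in
    pixels, so that static targets line up with the first frame."""
    dx, dy = shift_px
    m = np.float32([[1, 0, -dx],
                    [0, 1, -dy]])
    h, w = second_frame.shape[:2]
    return cv2.warpAffine(second_frame, m, (w, h))
```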
  • in step S212, the processing device performs subtraction processing on the compensated at least two-frame images to form a difference image; that is, the subtraction processing is performed on the compensated second frame image and the original first frame image to form a difference image, or the subtraction processing is performed on the compensated first frame image and the original second frame image to form a difference image.
  • when no mobile target exists in the overlapped field of view, the result of subtraction between the compensated images should be zero, and the difference image between the image overlapped regions of the first frame image and the second frame image should not contain any feature; that is, the image overlapped regions of the compensated second frame image and the original first frame image are the same, or the image overlapped regions of the compensated first frame image and the original second frame image are the same.
  • when a mobile target exists in the overlapped field of view, the result of subtraction between the compensated images is not zero, and the difference image regarding the image overlapped region of the first frame image and the second frame image contains discriminative features; that is, the image overlapped regions of the compensated second frame image and the original first frame image are not the same and parts which cannot be coincided exist, or the image overlapped regions of the compensated first frame image and the original second frame image are not the same and parts which cannot be coincided exist.
  • if a suspected target is judged to exist merely because a discriminative feature exists in the difference image of the image overlapped regions of the compensated two-frame images, or because the image overlapped regions of the compensated two-frame images cannot be completely coincided with each other, misjudgment may occur.
  • for example, a lamp in the overlapped field of view may produce a discriminative feature in the image overlapped region of the captured first frame image and the second frame image, so that the difference result of the image overlapped regions after the images are compensated is not zero; that is, the image overlapped regions after the images are compensated cannot be completely coincided.
  • in this case the lamp would be misjudged to be a suspected target. Therefore, under the condition that the difference image is not zero, if a first moving track of an object within the time period during which the image acquisition device captures the first frame image and the second frame image is obtained according to the difference image, the object is judged to be the suspected target; that is, a suspected target corresponding to the first moving track exists in the overlapped field of view. A sketch of this detection follows.
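The sketch below combines steps S211 and S212 using OpenCV; the threshold and minimum blob area are illustrative tuning values, not taken from the patent.

```python
import cv2

def detect_suspected_target(first_frame, compensated_second,
                            thresh=25, min_area=100):
    """Subtract the compensated frames; any sufficiently large residual
    blob is reported as a suspected-target bounding box."""
    diff = cv2.absdiff(first_frame, compensated_second)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)  # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```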
  • FIG. 6 shows an image diagram of a first frame image selected in one embodiment of the present application.
  • FIG. 7 shows an image diagram of a second frame image selected in one embodiment of the present application.
  • FIG. 6 shows a first frame image captured by the image acquisition device of the mobile robot at a first position
  • FIG. 7 shows a second frame image which is captured when the mobile robot moves from the first position to the second position, and movement information of the mobile robot from the first position to the second position is measured by the position measurement device of the mobile robot.
  • the first frame image and the second frame image have an image overlapped region shown by a dotted box in FIG. 6 and FIG. 7, and the image overlapped region corresponds to the overlapped field of view of the image acquisition device at the first position and the second position.
  • the overlapped field of view of the image acquisition device contains multiple static targets, for example, chair, window, book shelf, clock, sofa, and bed.
  • a moving butterfly A exists in FIG. 6 and FIG. 7; the butterfly A moves relative to the static targets in the overlapped field of view, and is located in the image overlapped region of the first frame image and the second frame image.
  • one static target in the overlapped field of view is now selected to define the movement of the butterfly A, and the static target can be selected from any one of the chair, window, book shelf, clock, sofa, bed and so on.
  • herein, the clock in FIG. 6 and FIG. 7 is selected to be the static target which shows the movement of the butterfly A.
  • in the first frame image, the butterfly A is at the left side of the clock, while in the second frame image, the butterfly A is at the right side of the clock.
  • the processing device performs subtraction processing on the compensated second frame image and the original first frame image to form a difference image.
  • the difference image has a discriminative feature, i.e. the butterfly A, which exists simultaneously in the first frame image and the second frame image and which cannot be eliminated through subtraction. Based on this, the butterfly A is judged to have moved (from the left side of the clock to the right side of the clock) relative to a static target (for example, the clock) in the overlapped field of view while the mobile robot moves from the first position to the second position, and the positions of the butterfly A in the first frame image and the second frame image have an attribute of indefinite change.
  • a first frame image as shown in FIG. 6 is captured by an image acquisition device at the first position
  • a second frame image as shown in FIG. 7 is captured by the image acquisition device at the second position.
  • the first frame image and the second frame image have an image overlapped region as shown in FIG. 6 and FIG. 7 , and the image overlapped region corresponds to the overlapped field of view of the image acquisition device at the first position and the second position.
  • the overlapped field of view of the image acquisition device includes multiple static targets, for example, chair, window, book shelf, clock, sofa, bed, and so on.
  • a moving butterfly A exists in the present embodiment, and the butterfly A moves relative to static targets in the overlapped field of view of FIG. 6 and FIG. 7 .
  • in the first frame image, the butterfly A is, for example, at the end of the bed and within the image overlapped region, while in the second frame image, the butterfly A is, for example, at the head of the bed and outside the image overlapped region.
  • the processing device performs subtraction processing on the compensated second frame image and the original first frame image to form a difference image.
  • the difference image has a discriminative feature, i.e. the butterfly A, which exists simultaneously in the first frame image and the second frame image and which cannot be eliminated through subtraction.
  • the butterfly A is judged to move (from the end of the bed to the head of the bed) relative to a static target (for example, the bed) in the overlapped field of view when the mobile robot moves from the first position to the second position, and the positions of the butterfly A in the first frame image and the second frame image have an attribute of indefinite change.
  • step S 220 is further performed, that is, the suspected target is tracked to determine that the suspected target is a mobile target.
  • A special condition is that, for example, some hanging decorations or ceiling lamps swing regularly at a certain amplitude because of wind. Generally, such swinging is merely a regular back-and-forth movement within a small range, or an irregular movement at a small amplitude, and cannot form a continuous movement.
  • An object which swings due to wind nevertheless forms a discriminative feature in the difference image, and thus forms a moving track. According to the method shown in FIG. 5, such an object will be judged to be a suspected target, and objects which swing at a certain amplitude due to wind would be misjudged as mobile targets if only the method shown in FIG. 5 were used; this is why the tracking of step S220 is needed.
  • FIG. 8 shows a flow diagram of tracking a suspected target to determine that the suspected target is a mobile target in one embodiment of the present application.
  • the method of tracking the suspected target to determine a mobile target refers to step S 221 and step S 222 shown in FIG. 8 .
  • In step S221, the processing device acquires a moving track of the suspected target through tracking the suspected target; and in step S222, if the moving track of the suspected target is continuous, the suspected target is determined to be a mobile target.
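
As an illustration of steps S221 and S222, the sketch below accumulates suspected-target detections into a track and applies a simple continuity test; the class names, time-gap limit and displacement threshold are illustrative assumptions rather than the patent's definition of continuity.

```python
from dataclasses import dataclass, field

@dataclass
class TrackPoint:
    t: float   # capture time of the frame
    x: float   # target center coordinate
    y: float

@dataclass
class SuspectedTarget:
    points: list = field(default_factory=list)

    def add_detection(self, point: TrackPoint) -> None:
        self.points.append(point)

    def is_mobile_target(self, max_gap_s=1.0, min_step=2.0) -> bool:
        """The track is 'continuous' if detections follow each other closely in
        time and the target keeps making net progress (unlike a lamp swinging
        in place, whose net displacement stays small)."""
        if len(self.points) < 3:
            return False
        for a, b in zip(self.points, self.points[1:]):
            if b.t - a.t > max_gap_s:
                return False  # track broken: not continuous movement
        start, end = self.points[0], self.points[-1]
        return abs(end.x - start.x) + abs(end.y - start.y) >= min_step
```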
  • a third frame image is continuously captured within the field of view as the mobile robot moves from the second position to a third position.
  • the first frame image, the second frame image and the third frame image are images acquired in sequence, the second frame image and the third frame image have an image overlapped region, and a comparison detection is performed on the second frame image and the third frame image according to step S 211 and step S 212 .
  • when the second frame image and the compensated third frame image are subjected to subtraction processing and the difference image between their image overlapped regions is not zero, that is, when a discriminative feature exists in the difference image and exists simultaneously in the second frame image and the third frame image, a second moving track of the suspected target within the time period in which the image acquisition device captures the second frame image and the third frame image is obtained based on the difference image; when the first moving track and the second moving track are continuous, the suspected target is determined to be a mobile target.
  • each comparison detection is performed on the newly acquired image and its adjacent image according to step S211 and step S212, and further moving tracks of the suspected target are obtained, so as to judge whether the suspected target is a mobile target.
  • for example, the comparison detection is performed on the second frame image and the third frame image according to step S211 and step S212 as shown in FIG. 5: the subtraction processing is performed between the second frame image (as shown in FIG. 7) and the compensated third frame image, and the difference image of their image overlapped regions is not zero; meanwhile a discriminative feature, i.e., the butterfly A, exists, and a second moving track of the butterfly A within the time period in which the image acquisition device captures the second frame image and the third frame image can be obtained based on the difference image. Thus a moving track of the suspected target (the butterfly A), which moves from the left side of the clock to the right side of the clock and then to the head of the bed, is obtained; that is, the butterfly A is judged to be a mobile target.
  • the suspected target is tracked according to an image feature of the suspected target.
  • the image feature includes a preset graphic feature corresponding to a suspected target, or an image feature obtained through performing an image processing algorithm on a suspected target.
  • the image processing algorithm includes but is not limited to at least one of the following: grayscale processing, sharpening processing, contour extraction, corner extraction, line extraction, and image processing algorithms obtained by machine learning.
  • Image processing algorithms obtained by machine learning include but are not limited to neural network algorithms and clustering algorithms.
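
The listed image processing algorithms can be sketched with standard OpenCV calls as below; the parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_features(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)           # grayscale processing
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    sharp = cv2.filter2D(gray, -1, kernel)                    # sharpening
    edges = cv2.Canny(sharp, 50, 150)                         # edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # contour extraction
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01,
                                      minDistance=5)          # corner extraction
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=20, maxLineGap=5)   # line extraction
    return contours, corners, lines
```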
  • since the first frame image, the second frame image and the third frame image are acquired in sequence and the second frame image and the third frame image have an image overlapped region, the suspected target is searched for in the third frame image according to the image feature of the suspected target.
  • when a static target exists within the overlapped field of view in which the image acquisition device captures the second frame image and the third frame image, a second moving track of the suspected target within that time period is obtained according to the relative position information of the mobile robot at the second position and the third position, and the position change of the suspected target relative to the same static target in the second frame image and the third frame image.
  • when the first moving track and the second moving track are continuous, the suspected target is determined to be a mobile target.
  • FIG. 9 shows a flow diagram of comparing between at least two-frame images selected from multiple-frame images in one embodiment of the present application.
  • the comparison between at least two-frame images selected from the multiple-frame images includes step S 210 ′ and step S 220 ′ as shown in FIG. 9 .
  • In step S210′, the processing device detects a suspected target on the basis of a matching operation on corresponding feature information in the at least two-frame images.
  • the feature information includes at least one of the following: feature point, feature line, feature color, and so on.
  • FIG. 10 shows a flow diagram of detecting a suspected target according to a matching operation on corresponding feature information in at least two-frame images in one embodiment of the present application.
  • step S 210 ′ is realized through step S 211 ′ and step S 212 ′.
  • In step S211′, the processing device extracts the feature points in the at least two-frame images respectively, and matches each feature point extracted from the at least two-frame images with a reference three-dimensional coordinate system; wherein the reference three-dimensional coordinate system is formed by performing three-dimensional modeling on the mobile space, and is marked with the coordinates of each feature point of all the static targets in the mobile space.
  • the feature points include, for example, corner points, end points, inflection points, etc., corresponding to the entity objects.
  • a set of feature points corresponding to a static target can form an external contour of the static target, that is, a corresponding static target can be recognized through a set of feature points.
  • An image identification can be performed in advance on all the static targets in the mobile space where the mobile robot moves, according to identification conditions, so as to obtain the feature points related to each static target, and the coordinates of each feature point can be marked on the reference three-dimensional coordinate system. Alternatively, the coordinates of the feature points of each static target can be manually uploaded in a certain format and marked on the reference three-dimensional coordinate system.
  • In step S212′, the processing device detects, as a suspected target, a feature point set in the at least two-frame images constituted by feature points which are not matched with the reference three-dimensional coordinate system.
  • a feature point set constituted by feature points which are not matched with corresponding feature points on the reference three-dimensional coordinate system may also indicate a static object which is newly added in the mobile space and whose feature point coordinates have not yet been marked on the reference three-dimensional coordinate system.
  • therefore, whether such a feature point set moves or not is judged according to the matching result between the two-frame images.
  • if the newly added object is static, the feature point sets of the first frame image and the second frame image which are not matched with the feature points in the reference coordinate system are the same as or similar to each other.
  • feature points of a static target such as chair, window, book shelf, clock, sofa, bed and so on can all be extracted in advance and marked in the reference three-dimensional coordinate system
  • butterfly A in FIG. 6 and FIG. 7 is a newly added object, and is not marked in the reference three-dimensional coordinate system
  • the set of unmatched feature points thus appears as the features of the butterfly A, and can, for example, be displayed as the contour features of the butterfly A.
  • in the first frame image as shown in FIG. 6, the butterfly A is located at the left side of the clock, while in the second frame image as shown in FIG. 7, the butterfly A is located at the right side of the clock; that is, a first moving track showing that the butterfly A moves from the left side of the clock to the right side of the clock can be obtained through matching the first frame image and the second frame image with the feature points marked on the reference three-dimensional coordinate system respectively.
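
A minimal sketch of steps S211′ and S212′ is given below, using 2D ORB descriptors as a simplified stand-in for the reference three-dimensional coordinate system: the static targets' feature points are assumed to have been described in advance and stored as a reference descriptor set, and frame keypoints whose descriptors find no distinctive match in that set are collected as the suspected target. The function name and ratio threshold are illustrative assumptions.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def unmatched_feature_set(frame, reference_descriptors):
    """Keypoints whose descriptors match nothing in the reference set are
    detected as the suspected target (e.g. the butterfly A).
    reference_descriptors: uint8 array with at least two rows."""
    keypoints, descriptors = orb.detectAndCompute(frame, None)
    if descriptors is None:
        return []
    suspected = []
    for kp, desc in zip(keypoints, descriptors):
        # Two nearest reference matches; a distinctive best match means the
        # point belongs to a static target already marked in the reference.
        m, n = matcher.knnMatch(desc.reshape(1, -1), reference_descriptors, k=2)[0]
        if m.distance > 0.75 * n.distance:  # no distinctive match: unmatched
            suspected.append(kp.pt)
    return suspected
```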
  • step S 220 ′ is further performed, that is, the suspected target is tracked to determine that the suspected target is a mobile target.
  • A special condition is that, for example, some hanging decorations or ceiling lamps swing regularly at a certain amplitude because of wind. Generally, such swinging is merely a regular back-and-forth movement within a small range, or an irregular movement at a small amplitude, and cannot form a continuous movement.
  • An object which swings due to wind nevertheless forms a moving track (its feature points are not matched with the reference three-dimensional coordinate system). According to the detection method shown in FIG. 10, such an object will be judged to be a suspected target, and would be misjudged as a mobile target without the tracking of step S220′.
  • FIG. 11 shows a flow diagram of tracking a suspected target to determine a mobile target in one embodiment of the present application.
  • for the method of tracking the suspected target to determine a mobile target, please refer to step S221′ and step S222′ as shown in FIG. 11.
  • In step S221′, the processing device acquires a moving track of the suspected target through tracking the suspected target; and in step S222′, if the moving track of the suspected target is continuous, the suspected target is determined to be a mobile target.
  • a third frame image at a third position of the mobile robot is further captured.
  • the first frame image, the second frame image and the third frame image are images acquired in sequence, the second frame image and the third frame image have an image overlapped region, and a comparison detection is performed on the second frame image and the third frame image according to step S 211 and step S 212 .
  • when the second frame image and the compensated third frame image are subjected to subtraction processing and the difference image thereof is not zero, that is, when a discriminative feature exists in the difference image and exists simultaneously in the second frame image and the third frame image, a second moving track of the suspected target within the time period in which the image acquisition device captures the second frame image and the third frame image is obtained based on the difference image; when the first moving track and the second moving track are continuous, the suspected target is determined to be a mobile target.
  • each comparison detection is performed on the newly acquired image and its adjacent image according to step S211 and step S212, and further moving tracks of the suspected target are obtained, so as to judge whether the suspected target is a mobile target.
  • for example, the comparison detection is performed on the second frame image and the third frame image according to step S211 and step S212 as shown in FIG. 5: the subtraction processing is performed between the second frame image (as shown in FIG. 7) and the compensated third frame image, and when the difference image of their image overlapped regions is not zero, a discriminative feature, i.e., the butterfly A, exists; a second moving track of the butterfly A within the time period in which the image acquisition device captures the second frame image and the third frame image can then be obtained based on the difference image. Thus a moving track of the suspected target (the butterfly A), which moves from the left side of the clock to the right side of the clock and then to the head of the bed, is obtained; that is, the butterfly A is judged to be a mobile target.
  • the suspected target is tracked according to an image feature of the suspected target.
  • the image feature includes a preset graphic feature corresponding to a suspected target, or an image feature obtained through performing an image processing algorithm on a suspected target.
  • the image processing algorithm includes but is not limited to at least one of the following: grayscale processing, sharpening processing, contour extraction, corner extraction, line extraction, and image processing algorithms obtained by machine learning.
  • Image processing algorithms obtained by machine learning include but are not limited to neural network algorithms and clustering algorithms.
  • a third frame image at a third position of the mobile robot is further acquired.
  • since the first frame image, the second frame image and the third frame image are acquired in sequence and the second frame image and the third frame image have an image overlapped region, the suspected target is searched for in the third frame image according to the image feature of the suspected target.
  • when a static target exists within the overlapped field of view in which the image acquisition device captures the second frame image and the third frame image, a second moving track of the suspected target within that time period is obtained according to the relative position information of the mobile robot at the second position and the third position, and the position change of the suspected target relative to the same static target in the second frame image and the third frame image.
  • when the first moving track and the second moving track are continuous, the suspected target is determined to be a mobile target.
  • FIG. 12 shows a flow diagram of object recognition in one embodiment of the present application.
  • the monitoring method further includes step S 300 and step S 400 .
  • In step S300, the processing device performs object recognition on a mobile target in a captured image; wherein object recognition means recognition of a target object through a method of feature matching or model recognition.
  • a method of object recognition based on feature matching generally includes the following steps: extracting an image feature of an object, describing the extracted feature, and performing feature matching on the described object.
  • the image feature includes graphic feature corresponding to the mobile target, or image feature obtained through an image processing algorithm.
  • the image processing algorithm includes but is not limited to at least one of the following: grayscale processing, sharpening processing, contour extraction, corner extraction, line extraction, and image processing algorithms obtained through machine learning.
  • the mobile target includes for example a mobile person or a mobile animal.
  • the object recognition is realized by an object recognizer which includes a trained neural network.
  • the neural network model is a convolutional neural network, and the network structure includes an input layer, at least one hidden layer and at least one output layer.
  • the input layer is configured to receive captured images or preprocessed images
  • the hidden layer includes a convolutional layer and an activation function layer, and can further include at least one of a normalization layer, a pooling layer and a fusion layer
  • the output layer is configured to output images marked with object type labels.
  • the connection mode is determined according to the connection relationship of each layer in the neural network model, for example: a connection relationship between a front layer and a rear layer set based on data transmission, a connection relationship with the data of the front layer set based on the size of the convolution kernel in each hidden layer, and a full connection relationship.
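
As a rough illustration of such a network structure (not the patent's actual recognizer), a small convolutional model could look like the following PyTorch sketch; the channel sizes and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ObjectRecognizer(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.BatchNorm2d(16),                          # normalization layer
            nn.ReLU(),                                   # activation function layer
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.output = nn.Linear(32, num_classes)  # output layer: object type labels

    def forward(self, x):              # x: batch of captured/preprocessed images
        h = self.hidden(x).flatten(1)
        return self.output(h)          # logits over object type labels
```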
  • Features and advantages of an artificial neural network mainly include the following three aspects: first, the function of self-learning; second, the function of associative storage; and third, the capability of searching for an optimal solution at high speed.
  • In step S400, the processing device outputs monitoring information according to the results of the object recognition.
  • the monitoring information includes one or more of image information, video information, audio information, and text information.
  • the monitoring information can be an image containing the mobile target, or prompt information which is sent to a preset communication address; the prompt information can be, for example, an APP prompt message, a short message, a mail, a voice broadcast, an alarm, and so on.
  • the prompt information contains key words related to the mobile target. When a key word of the mobile target is “person”, the prompt information can be an APP prompt message, short message, mail, voice broadcast or alarm containing the key word “person”, for example, a text or voice message of “somebody is intruding”.
  • the preset communication address contains at least one of the following: a telephone number bound with the mobile robot, instant messaging accounts (WeChat accounts, QQ accounts or Facebook accounts, etc.), an e-mail address, and a network platform.
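
As one possible illustration of sending prompt information to a preset communication address, the sketch below maps a recognized key word to a prompt text and mails it with Python's standard smtplib; the host, addresses and message table are hypothetical assumptions.

```python
import smtplib
from email.message import EmailMessage

KEYWORD_MESSAGES = {"person": "somebody is intruding"}  # key word -> prompt text

def send_prompt(recognized_label, smtp_host="localhost",
                sender="robot@example.com", recipient="owner@example.com"):
    text = KEYWORD_MESSAGES.get(recognized_label)
    if text is None:
        return  # no prompt configured for this object type
    msg = EmailMessage()
    msg["Subject"] = f"Monitoring alert: {recognized_label}"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(text)
    with smtplib.SMTP(smtp_host) as server:  # preset communication address
        server.send_message(msg)
```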
  • FIG. 13 shows a flow diagram of object recognition in one embodiment of the present application.
  • the method further includes step S 500 and step S 600 ;
  • In step S500, the processing device uploads the captured image, or a video containing the image, to a cloud server for object recognition on a mobile target in the image; the cloud server includes an object recognizer containing a trained neural network;
  • In step S600, the processing device receives the results of the object recognition from the cloud server and outputs monitoring information.
  • the monitoring information includes one or more of image information, video information, audio information, and text information.
  • the monitoring information can be an image containing the mobile target, or prompt information which is sent to a preset communication address; the prompt information can be, for example, an APP prompt message, a short message, a mail, a voice broadcast, an alarm, and so on.
  • the prompt information contains key words related to the mobile target. When a key word of the mobile target is “person”, the prompt information can be an APP prompt message, short message, mail, voice broadcast or alarm containing the key word “person”, for example, a text or voice message of “somebody is intruding”.
  • in one example, the mobile robot captures images through the image acquisition device and selects a first frame image and a second frame image among the captured images; the mobile robot then uploads the first frame image and the second frame image to the cloud server for image comparison, and receives the results of object recognition sent by the cloud server.
  • in another example, the mobile robot directly uploads the captured images to the cloud server after capturing them through the image acquisition device; the cloud server selects two-frame images according to the monitoring method for a mobile target and performs image comparison on the selected two-frame images, and the mobile robot then receives the results of object recognition sent by the cloud server.
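
The upload/receive exchange can be pictured with the following sketch, assuming a hypothetical HTTP endpoint on the cloud server; the URL, field name and response format are illustrative assumptions, not an API defined by the present application.

```python
import requests

CLOUD_URL = "https://cloud.example.com/api/recognize"  # hypothetical endpoint

def recognize_via_cloud(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        response = requests.post(CLOUD_URL, files={"image": f}, timeout=30)
    response.raise_for_status()
    return response.json()  # e.g. {"label": "person", "confidence": 0.97}

# result = recognize_via_cloud("frame_0001.jpg")
# monitoring information is then output according to result["label"] (step S600)
```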
  • for the monitoring method of the mobile target, please refer to FIG. 1 and its related description; the monitoring method will not be described in detail herein.
  • the monitoring method used in a mobile robot for a mobile target has the following beneficial effects: multiple-frame images captured by an image acquisition device while the robot moves in a monitored region are acquired; at least two-frame images with an overlapped region are selected from the multiple-frame images; the selected images are compared by an image compensation method or a feature matching method; and monitoring information containing a mobile target which moves relative to a static target is output based on the result of the comparison, wherein the position of the mobile target in each of the at least two-frame images has an attribute of indefinite change. In this way, the mobile target in the monitored region can be recognized precisely during movement of the mobile robot, and monitoring information about the mobile target can be generated as a corresponding prompt, so that the safety of the monitored region is effectively ensured.
  • FIG. 14 shows a structural schematic diagram of a monitoring device of a mobile target used in a mobile robot of the present application in one embodiment.
  • the mobile robot includes a movement device and an image acquisition device.
  • the image acquisition device is arranged on the mobile robot, and is configured to capture an entity object within a field of view of the image acquisition device at the position where a mobile robot is located, so as to obtain a projected image, wherein the projected image is located on the plane which is parallel to the moving plane.
  • the image acquisition device includes but is not limited to: fisheye camera module, wide angle (or non-wide angle) camera module, depth camera module, camera module integrated with an optical system or CCD chip, and camera module integrated with an optical system and CMOS chip.
  • the mobile robot includes, but is not limited to, a family companion mobile robot, a cleaning robot, a patrol mobile robot, a glass cleaning robot, etc.
  • the power supply system of the image acquisition device can be controlled by the power supply system of the mobile robot, and during the period that the mobile robot is powered on and moves, the image acquisition device begins to capture images.
  • the mobile robot at least includes an image acquisition device.
  • the image acquisition device captures images within a field of view at a position where the mobile robot is located.
  • a mobile robot includes an image acquisition device which is arranged on the top, shoulder or back of the mobile robot, and the principal optic axis of the image acquisition device is perpendicular to the moving plane of the mobile robot, or the principal optic axis is consistent with the travelling direction of the mobile robot.
  • the principal optic axis can also be set to form a certain angle (for example, an angle between 50° and 86°) with the moving plane on which the mobile robot is located, to acquire a greater image acquisition range.
  • the principal optic axis of the image acquisition device can also be set in many other ways; for example, the image acquisition device can rotate according to a certain rule or rotate randomly, in which case the angle between the optic axis of the image acquisition device and the travelling direction of the mobile robot changes constantly. Therefore, the installation manners of the image acquisition device and the states of its principal optic axis are not limited to those enumerated in the present embodiment.
  • the mobile robot includes two or more image acquisition devices, for example, a binocular image acquisition device or multiple image acquisition devices.
  • the principal optic axis of one image acquisition device is perpendicular to the moving plane of the mobile robot, or the principal optic axis is consistent with the travelling direction of the mobile robot.
  • the principal optic axis can also be set to form a certain angle with the moving plane, so as to acquire a greater image acquisition range.
  • the principal optic axis of the image acquisition device can also be set in many other ways; for example, the image acquisition device can rotate according to a certain rule or rotate randomly, in which case the angle between the optic axis of the image acquisition device and the travelling direction of the mobile robot changes constantly. Therefore, the installation manners of the image acquisition device and the states of its principal optic axis are not limited to those enumerated in the present embodiment.
  • a movement device of the mobile robot can include a travelling mechanism and a travelling drive mechanism, wherein the travelling mechanism can be arranged at a bottom of the robot body, and the travelling drive mechanism is arranged inside the robot body.
  • the travelling mechanism can for example include a combination of two straight-going walking wheels and at least one auxiliary steering wheel, wherein the two straight-going walking wheels are respectively arranged at two opposite sides at a bottom of the robot body, and the two straight-going walking wheels can be independently driven by two corresponding travelling drive mechanisms respectively, that is, a left straight-going walking wheel is driven by a left travelling drive mechanism, while a right straight-going walking wheel is driven by a right travelling drive mechanism.
  • the universal walking wheel or the straight-going walking wheel can be provided with a bias drop suspension system which is fixed in a movable manner, for example, the bias drop suspension system can be installed on a robot body in a rotatable manner and receives spring bias which is downwards and away from the robot body.
  • the spring bias enables the universal walking wheel or the straight-going walking wheel to maintain contact and traction with the ground with a certain landing force.
  • the two straight-going walking wheels are mainly used for going forward and backward, while when the at least one auxiliary steering wheel participates and matches with the two straight-going walking wheels, movements such as steering and rotating can be realized.
  • the travelling drive mechanism can include a drive motor and a control circuit configured to control the drive motor, and the drive motor can be used to drive the walking wheels in the travelling mechanism to move.
  • the drive motor can be for example a reversible drive motor, and a gear shift mechanism can be further arranged between the drive motor and the axle of a walking wheel.
  • the travelling drive mechanism can be installed on the robot body in a detachable manner, thereby facilitating disassembly and maintenance.
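
For illustration, the movement information that such a two-wheel travelling mechanism provides can be derived with standard differential-drive odometry, as in the sketch below; this is textbook kinematics under stated assumptions, not text from the present application, and the function name and wheel-base value are illustrative.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Advance the planar pose given the distance each walking wheel travelled."""
    d_center = (d_left + d_right) / 2.0          # forward travel of the body
    d_theta = (d_right - d_left) / wheel_base    # heading change
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# e.g. pose after the left wheel moves 0.10 m and the right wheel 0.12 m:
# x, y, th = update_pose(0.0, 0.0, 0.0, 0.10, 0.12, wheel_base=0.25)
```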
  • the monitoring device 700 for a mobile target includes: at least one processing device 710 and at least one storage device 720 .
  • the processing device is an electronic device which is capable of performing numeric calculation, logical calculation and data analysis, and the processing device includes but is not limited to: CPU, GPU, FPGA, etc.
  • the storage device 720 may include high-speed RAM (random access memory) and may also include NVM (non-volatile memory), such as one or more disk storage devices, flash memory devices or other non-volatile solid-state storage devices.
  • the storage device may also include a storage device away from one or more processors, such as network attached storage device accessed via RF circuits or external ports and communication networks, wherein the communication network can be the Internet, one or more intranets, LAN, WLAN, SAN, etc., or an appropriate combination thereof.
  • the memory controller can control access to memory by other components of the device, such as CPU and peripheral interfaces.
  • the storage device 720 is used to store images captured by the image acquisition device under an operating state of the movement device. At least one program is stored in the at least one storage device 720 , and is invoked by the at least one processing device 710 such that the monitoring device 700 performs a monitoring method for a mobile target.
  • the monitoring method for a mobile target can be seen in FIG. 1 and its description, and is not repeated here.
  • FIG. 15 shows a structural schematic diagram of a mobile robot of the present application in one embodiment.
  • the mobile robot 800 includes the movement device 810, the image acquisition device 820 and the monitoring device 830.
  • the image acquisition device 820 is arranged on the mobile robot 800 , and is configured to capture an entity object within a field of view of the image acquisition device 820 at the position where the mobile robot 800 is located, so as to obtain a projected image, wherein the projected image is located on the plane which is parallel to the moving plane.
  • the image acquisition device 820 includes but is not limited to: fisheye camera module, wide angle (or non-wide angle) camera module, depth camera module, camera module integrated with an optical system or CCD chip, and camera module integrated with an optical system and CMOS chip.
  • the mobile robot 800 includes, but is not limited to, a family companion mobile robot, a cleaning robot, a patrol mobile robot, a glass cleaning robot, etc.
  • the power supply system of the image acquisition device 820 can be controlled by the power supply system of the mobile robot 800 , and during the period that the mobile robot is powered on and moves, the image acquisition device 820 begins to capture images.
  • the mobile robot 800 at least includes an image acquisition device 820 .
  • the image acquisition device 820 captures images within a field of view at a position where the mobile robot 800 is located.
  • a mobile robot 800 includes an image acquisition device 820 which is arranged on the top, shoulder or back of the mobile robot, and the principal optic axis of the image acquisition device 820 is perpendicular to the moving plane of the mobile robot, or the principal optic axis is consistent with the travelling direction of the mobile robot.
  • the principal optic axis can also be set to form a certain angle (for example, an angle between 50° and 86°) with the moving plane on which the mobile robot is located, to acquire a greater image acquisition range.
  • the principal optic axis of the image acquisition device 820 can also be set in many other ways; for example, the image acquisition device 820 can rotate according to a certain rule or rotate randomly, in which case the angle between the optic axis of the image acquisition device 820 and the travelling direction of the mobile robot changes constantly. Therefore, the installation manners of the image acquisition device 820 and the states of its principal optic axis are not limited to those enumerated in the present embodiment.
  • the mobile robot includes two or more image acquisition devices 820, for example, a binocular image acquisition device or multiple image acquisition devices.
  • the principal optic axis of one image acquisition device 820 is perpendicular to the moving plane of the mobile robot, or the principal optic axis is consistent with the travelling direction of the mobile robot.
  • the principal optic axis can also be set to form a certain angle with the moving plane, so as to acquire a greater image acquisition range.
  • the principal optic axis of the image acquisition device 820 can also be set in many other ways; for example, the image acquisition device 820 can rotate according to a certain rule or rotate randomly, in which case the angle between the optic axis of the image acquisition device 820 and the travelling direction of the mobile robot changes constantly. Therefore, the installation manners of the image acquisition device 820 and the states of its principal optic axis are not limited to those enumerated in the present embodiment.
  • the movement device 810 of the mobile robot 800 can include a travelling mechanism and a travelling drive mechanism, wherein the travelling mechanism can be arranged at a bottom of the robot body, and the travelling drive mechanism is arranged inside the robot body.
  • the travelling mechanism can for example include a combination of two straight-going walking wheels and at least one auxiliary steering wheel, wherein the two straight-going walking wheels are respectively arranged at two opposite sides at a bottom of the robot body, and the two straight-going walking wheels can be independently driven by two corresponding travelling drive mechanisms respectively, that is, a left straight-going walking wheel is driven by a left travelling drive mechanism, while a right straight-going walking wheel is driven by a right travelling drive mechanism.
  • the universal walking wheel or the straight-going walking wheel can be provided with a bias drop suspension system which is fixed in a movable manner, for example, the bias drop suspension system can be installed on a robot body in a rotatable manner and receives spring bias which is downwards and away from the robot body.
  • the spring bias enables the universal walking wheel or the straight-going walking wheel to maintain contact and traction with the ground with a certain landing force.
  • the two straight-going walking wheels are mainly used for going forward and backward, while when the at least one auxiliary steering wheel participates and matches with the two straight-going walking wheels, movements such as steering and rotating can be realized.
  • the travelling drive mechanism can include a drive motor and a control circuit configured to control the drive motor, and the drive motor can be used to drive the walking wheels in the travelling mechanism to move.
  • the drive motor can be for example a reversible drive motor, and a gear shift mechanism can be further arranged between the drive motor and the axle of a walking wheel.
  • the travelling drive mechanism can be installed on the robot body in a detachable manner, thereby facilitating disassembly and maintenance.
  • the monitoring device 830 is in communication with the movement device 810 and the image acquisition device 820, and the monitoring device 830 includes an image acquisition unit 831, a mobile target detecting unit 832 and an information output unit 833.
  • the image acquisition unit 831 is in communication with both the movement device 810 and the image acquisition device 820, and acquires multiple-frame images captured by the image acquisition device 820 under the operating state of the movement device 810.
  • the multiple-frame images can be for example multiple-frame images acquired in a continuous time period, or multiple-frame images acquired within two or more discontinuous time periods.
  • the mobile target detecting unit 832 performs comparison between at least two-frame images selected from the multiple-frame images so as to detect a mobile target.
  • the at least two-frame images are images captured by the image acquisition device 820 within a partially overlapped field of view. That is, the mobile target detecting unit 832 selects a first frame image and a second frame image on the basis that the two-frame images contain an image overlapped region and the overlapped field of view contains a static target, so as to monitor a mobile target which moves relative to the static target in the overlapped field of view.
  • the proportion of the image overlapped region in the first frame image and in the second frame image can also be set; for example, the proportion in each of the two images is at least 50% (the proportions are not limited thereto, and different proportions can be set for the first frame image and the second frame image depending on the situation).
  • the selection of the first frame image and the second frame image should be continuous to some extent; while ensuring that the first frame image and the second frame image share a certain proportion of the image overlapped region, the continuity of the moving track of the mobile target can then be judged based on the acquired images.
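
As a simple illustration of enforcing such an overlap proportion, the sketch below estimates the shared field of view of two frames from the robot's heading change (translation is ignored for simplicity) and keeps the first pair above a threshold; the function names and the 50% default are illustrative assumptions.

```python
def overlap_ratio(yaw_a, yaw_b, horizontal_fov):
    """Fraction of the field of view shared by two camera headings (radians),
    assuming pure rotation between the two capture positions."""
    shared = horizontal_fov - abs(yaw_b - yaw_a)
    return max(0.0, shared / horizontal_fov)

def select_frame_pair(frames, horizontal_fov, min_overlap=0.5):
    """frames: list of (image, yaw) in capture order; return the first adjacent
    pair whose estimated overlapped region meets the required proportion."""
    for (img_a, yaw_a), (img_b, yaw_b) in zip(frames, frames[1:]):
        if overlap_ratio(yaw_a, yaw_b, horizontal_fov) >= min_overlap:
            return img_a, img_b
    return None
```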
  • a position of the mobile target in each of the at least two-frame images has an attribute of indefinite change.
  • the information output unit 833 outputs monitoring information containing a mobile target which moves relative to a static target according to the result of the comparison between the at least two-frame images.
  • the static target for example includes but is not limited to: ball, shoe, wall, flowerpot, cloth and hat, roof, lamp, tree, table, chair, refrigerator, television, sofa, sock, tiled object, and cup.
  • the tiled object includes but is not limited to ground mat or floor tile map paved on the floor, and tapestry and picture hung on a wall.
  • the monitoring information includes one or more of image information, video information, audio information, and text information.
  • the monitoring information can be an image containing the mobile target, or prompt information which is sent to a preset communication address; the prompt information can be, for example, an APP prompt message, a short message, a mail, a voice broadcast, an alarm, and so on.
  • the prompt information contains key words related to the mobile target. When a key word of the mobile target is “person”, the prompt information can be an APP prompt message, short message, mail, voice broadcast or alarm containing the key word “person”, for example, a text or voice message of “somebody is intruding”.
  • the preset communication address contains at least one of the following: a telephone number bound with the mobile robot, instant messaging accounts (WeChat accounts, QQ accounts or Facebook accounts, etc.), an e-mail address, and a network platform.
  • the mobile target detecting unit 832 includes a comparing module and a tracking module.
  • the comparing module detects a suspected target based on the comparison between at least two-frame images;
  • the step of the comparing module detecting a suspected target based on the comparison between at least two-frame images includes:
  • the mobile target detecting unit 832 is in communication with the movement device 810, so as to acquire the movement information of the movement device 810 within the time period between the at least two-frame images and perform image compensation on the at least two-frame images.
  • the tracking module tracks the suspected target to determine the mobile target.
  • the step of tracking the suspected target to determine the mobile target includes:
  • the mobile target detecting unit 832 can also identify the mobile target without a communication connection with the movement device 810, as shown in FIG. 8.
  • the mobile target detecting unit 832 includes a matching module and a tracking module.
  • the operation of the matching module detecting a suspected target based on a matching operation on corresponding feature information in the at least two-frame images includes:
  • extracting feature points in the at least two-frame images respectively, and matching each extracted feature point with a reference three-dimensional coordinate system; wherein the reference three-dimensional coordinate system is formed through performing three-dimensional modeling on the mobile space, and is marked with the coordinates of each feature point of all static targets in the mobile space; and
  • detecting a feature point set as a suspected target, wherein the feature point set is composed of the feature points in the at least two-frame images that are not matched with the reference three-dimensional coordinate system.
  • the tracking module tracks the suspected target to determine the mobile target.
  • the step of tracking the suspected target to determine the mobile target includes:
  • the monitoring device 830 further includes an object recognition unit, which performs object recognition in the captured images, so that the information output unit can output the monitoring information based on the result of the object recognition.
  • the object recognition unit includes a trained neural network.
  • the object recognition means recognition of a target object through a method of feature matching or model recognition.
  • a method of object recognition based on feature matching generally includes the following steps: extracting an image feature of an object, describing the extracted feature, and performing feature matching on the described object.
  • the image feature includes graphic feature corresponding to the mobile target, or image feature obtained through an image processing algorithm.
  • the image processing algorithm includes but is not limited to at least one of the following: grayscale processing, sharpening processing, contour extraction, corner extraction, line extraction, and image processing algorithms obtained through machine learning.
  • the mobile target includes for example a mobile person or a mobile animal.
  • the object recognition is realized by an object recognizer which includes a trained neural network.
  • the neural network model is a convolutional neural network, and the network structure includes an input layer, at least one hidden layer and at least one output layer.
  • the input layer is configured to receive captured images or preprocessed images
  • the hidden layer includes a convolutional layer and an activation function layer, and can further include at least one of a normalization layer, a pooling layer and a fusion layer
  • the output layer is configured to output images marked with object type labels.
  • the connection mode is determined according to the connection relationship of each layer in the neural network model, for example: a connection relationship between a front layer and a rear layer set based on data transmission, a connection relationship with the data of the front layer set based on the size of the convolution kernel in each hidden layer, and a full connection relationship.
  • features and advantages of an artificial neural network mainly include the following three aspects: first, the function of self-learning; second, the function of associative storage; and third, the capability of searching for an optimal solution at high speed.
  • the monitoring information includes one or more of image information, video information, audio information, and text information.
  • the monitoring information can be an image containing the mobile target, or prompt information which is sent to a preset communication address; the prompt information can be, for example, an APP prompt message, a short message, a mail, a voice broadcast, an alarm, and so on.
  • the prompt information contains key words related to the mobile target. When a key word of the mobile target is “person”, the prompt information can be an APP prompt message, short message, mail, voice broadcast or alarm containing the key word “person”, for example, a text or voice message of “somebody is intruding”.
  • the preset communication address contains at least one of the following: a telephone number bound with the mobile robot, instant messaging accounts (WeChat accounts, QQ accounts or Facebook accounts, etc.), an e-mail address, and a network platform.
  • the monitoring device 830 includes a receive-send unit; the receive-send unit uploads the captured image, or a video containing the image, to a cloud server for object recognition on a mobile target in the image, and receives the result of the object recognition from the cloud server, so that the information output unit can output the monitoring information.
  • the cloud server includes an object recognizer containing a trained neural network
  • in one example, the mobile robot captures images through the image acquisition device and selects a first frame image and a second frame image among the captured images; the mobile robot then uploads the first frame image and the second frame image to the cloud server for image comparison, and receives the results of object recognition sent by the cloud server.
  • in another example, the mobile robot directly uploads the captured images to the cloud server after capturing them through the image acquisition device; the cloud server selects two-frame images by operating the mobile target detecting unit 832 and performs image comparison on the selected two-frame images, and the mobile robot then receives the results of object recognition sent by the cloud server.
  • the technical solution of a monitoring device 830 of a mobile target in an embodiment in FIG. 15 corresponds to the monitoring method of a mobile target.
  • for the monitoring method of the mobile target, please refer to FIG. 1 and its related description; all the descriptions of the monitoring method of the mobile target can be applied to the related embodiments of the monitoring device 830 of a mobile target, and are not repeated in detail herein.
  • the modules are divided only based on logical function; in practical applications, the modules can be wholly or partially integrated into one physical entity, or remain physically separate.
  • the modules can be implemented in software, hardware, or a combination of both.
  • each module can be an independent processing unit, or the modules can be invoked by one chip integrated in the device.
  • a program code can be stored in the memory of the device, which is invoked by a processing unit of the device and enables the modules to function.
  • the processing unit can be an integrated circuit with the ability to process signals. In practical applications, the above steps and modules can be accomplished by an integrated logic circuit in hardware or by software.
  • the modules can be configured with one or more integrated circuits to implement the above method, such as one or more ASICs, DSPs or FPGAs.
  • the processing unit can also be a general-purpose processing unit, such as a CPU.
  • the modules can also be integrated together and implemented in the form of an SOC (system on chip).
  • the present application further provides a computer storage medium which stores at least one program that executes any monitoring method for a mobile target mentioned above when the program is invoked.
  • the monitoring method for a mobile target can be seen in FIG. 1 and its related description, and is not described here.
  • the computer program code can be source code, object code, executable file or some intermediate form, etc.
  • the technical solutions of the present application, in essence, or the part contributing to the prior art, can be embodied in the form of a software product
  • the computer software product can include one or more machine readable media which store machine executable instructions thereon, when these instructions are executed by one or more machines such as a computer, a computer network or other electronic apparatus, such one or more machines can execute operations based on the embodiments of the present application.
  • the machine readable media include, but are not limited to, any entity or device capable of carrying the computer program code, a recording medium, a U disk, a mobile hard disk, a computer memory, a floppy disk, an optical disk, a CD-ROM (compact disc read-only memory), a magneto-optical disc, a ROM (read-only memory), a RAM (random access memory), an EPROM (erasable programmable read-only memory), an EEPROM (electrically erasable programmable read-only memory), a magnetic card or optical card, a flash memory, an electric carrier signal, a telecommunication signal, and software distribution media, or other types of media/machine readable media which are applicable to storing machine executable instructions.
  • the scope of the machine readable media can differ according to the legislation and patent practice in different jurisdictions; for example, in some jurisdictions, an electric carrier signal and a telecommunication signal are not included in the computer readable media.
  • the storage media can be located in the mobile robot and can also be located in a third-party server, for example, in a server providing a certain application store.
  • Specific application stores are not limited herein, and can be a MIUI application store, a Huawei application store, and an Apple application store, etc.
  • FIG. 16 shows a structural schematic diagram of a monitoring system of the present application in one embodiment.
  • the monitoring system 900 includes a cloud server 910 and a mobile robot 920 , and the mobile robot 920 is in communication with the cloud server 910 .
  • the mobile robot 920 includes an image acquisition device and a movement device.
  • the mobile robot 920 moves in a three-dimensional space shown in FIG. 16 , and a mobile target butterfly A exists in the mobile space shown in FIG. 16 .
  • the mobile robot 920 captures multiple-frame images, selects two-frame images from the multiple-frame images and compares the two-frame images, and outputs a mobile target which moves relative to a static target.
  • the selected two-frame images are for example the first frame image as shown in FIG. 6 and the second frame image as shown in FIG. 7 .
  • the first frame image and the second frame image have an image overlapped region shown by a dotted box in FIG. 6 and FIG. 7 , and the image overlapped region corresponds to an overlapped field of view of the image acquisition device at the first position and the second position.
  • there are multiple static targets in the overlapped field of view of the image acquisition device, for example, a chair, window, book shelf, clock, sofa, bed, and so on.
  • in FIG. 6, the butterfly A is located at the left side of the clock, while in FIG. 7, the butterfly A is located at the right side of the clock; the image comparison is performed on the first frame image and the second frame image to obtain a suspected target which moves relative to the static target.
  • the method for image comparison can, for example, be referred to FIG. 5 and its related description; that is, the processing device of the mobile robot 920 obtains the movement information of the mobile robot 920 while the image acquisition device captures the first frame image and the second frame image, compensates the first frame image or the second frame image according to the movement information, and performs difference subtraction between the compensated image and the other original image to obtain a suspected target (the butterfly A) with a regional moving track (from the left side of the clock to the right side of the clock).
  • alternatively, the manner of feature comparison shown in FIG. 10 can be used, wherein each feature point in the first frame image and the second frame image is extracted, and each feature point extracted from the two-frame images is matched with a reference three-dimensional coordinate system, wherein the reference three-dimensional coordinate system is formed through modeling the mobile space shown in FIG. 16.
  • the processing device of the mobile robot 920 detects, as a suspected target, a feature point set in the two-frame images constituted by corresponding feature points which are not matched with the reference three-dimensional coordinate system.
  • according to multiple-frame images which are acquired subsequently and the method shown in FIG. 8 or FIG. 11, the suspected target is tracked to obtain its moving track, and when the moving track is continuous, the suspected target (the butterfly A) is determined to be a mobile target.
  • a continuous moving track of the butterfly A is obtained, for example, the butterfly A moves from a left side of the clock to a right side of the clock, and then moves to a head of the bed.
  • the mobile robot 920 uploads images or videos containing the mobile target to the cloud server 910 , and the mobile robot 920 outputs monitoring information according to results of object recognition of the mobile target received from the cloud server 910 .
  • the object recognition process includes, for example, recognizing the images and videos containing the mobile target through preset image features, wherein the image feature can be, for example, an image point feature, an image line feature or an image color feature.
  • for example, the mobile target is recognized as the butterfly A through detection of the contour of the butterfly A.
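
A minimal sketch of such contour-based recognition is given below, comparing the detected target's contour with a stored template contour via OpenCV's Hu-moment shape matching; the threshold and helper name are illustrative assumptions.

```python
import cv2

def recognize_by_contour(mask, template_contour, max_distance=0.2):
    """mask: binary image of the mobile target (e.g. from the difference image)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    target = max(contours, key=cv2.contourArea)  # largest blob = suspected target
    # Hu-moment based shape distance: small values mean similar shapes.
    distance = cv2.matchShapes(target, template_contour, cv2.CONTOURS_MATCH_I1, 0.0)
    return distance < max_distance
```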
  • the mobile robot 920 receives the results of object recognition sent by the cloud server 910, and outputs monitoring information to a designated client according to the results of the object recognition.
  • the client can be, for example, an electronic device with intelligent data processing functions, such as a smart phone, a tablet computer, a smart watch, and so on.
  • alternatively, the cloud server 910 performs the object recognition to obtain a result, and directly outputs monitoring information to the designated client according to the result.
  • the cloud server 910 performs object recognition on the received images or videos containing images, and sends the results of object recognition to the mobile robot 920 .
  • the mobile robot 920 captures multiple-frame images under a moving state and uploads them to the cloud server 910 ; and the cloud server 910 selects two-frame images from the multiple-frame images and compares the two-frame images, wherein the selected two-frame images are for example the first frame image as shown in FIG. 6 and the second frame image as shown in FIG. 7 .
  • the first frame image and the second frame image have an image overlapped region shown by a dotted box in FIG. 6 and FIG. 7, and the image overlapped region corresponds to an overlapped field of view of the image acquisition device at the first position and the second position.
  • there are multiple static targets in the overlapped field of view of the image acquisition device, for example, a chair, window, book shelf, clock, sofa, bed, and so on.
  • in FIG. 6, the butterfly A is located at the left side of the clock, while in FIG. 7, the butterfly A is located at the right side of the clock; the image comparison is performed on the first frame image and the second frame image to obtain a suspected target which moves relative to the static target.
  • the method for image comparison can, for example, be referred to FIG. 5 and its related description; that is, the movement information of the mobile robot 920 while the image acquisition device captures the first frame image and the second frame image is obtained, the first frame image or the second frame image is compensated according to the movement information, and difference subtraction is performed between the compensated image and the other original image to obtain a suspected target (the butterfly A) with a regional moving track (from the left side of the clock to the right side of the clock).
  • alternatively, a manner of feature comparison as shown in FIG. 10 can be adopted, wherein each feature point in the first frame image and the second frame image is extracted, and each feature point extracted from the two-frame images is matched with a reference three-dimensional coordinate system, wherein the reference three-dimensional coordinate system is formed through modeling the mobile space shown in FIG. 16. A feature point set in the two-frame images, constituted by corresponding feature points which are not matched with the reference three-dimensional coordinate system, is detected as a suspected target. Further, according to multiple-frame images which are acquired subsequently, the suspected target is tracked to acquire a continuous moving track, and the suspected target (the butterfly A) is determined to be a mobile target.
  • the first frame image and the second frame image are compared according to the method of difference subtraction on a compensated image as shown in FIG. 5 or according to the method of feature comparison as shown in FIG. 10 , to obtain a suspected target (butterfly A) which moves relative to the static target (for example, a clock). Further, according to the method shown in FIG. 8 or FIG. 11 , the suspected target is tracked to obtain a moving track of the suspected target, and when the moving track of the suspected target is continuous, the suspected target is determined to be a mobile target.
  • a continuous moving track of the butterfly A is obtained, for example, the butterfly A moves from a left side of the clock to a right side of the clock, and then moves to the head of the bed.
  • the cloud server 910 sends the recognized results to the mobile robot 920 .
  • the mobile robot 920 can receive results of object recognition sent by the cloud server 910 , and output monitoring information to a designated client according to the results of object recognition.
  • the mobile robot 920 further communicates with the designated client through a mobile network.
  • In summary, the monitoring method and device for a mobile target, the monitoring system and the mobile robot of the present application have the following beneficial effects: multiple-frame images captured by an image acquisition device are acquired while the robot moves in a monitored region, at least two-frame images with an overlapped region are selected from the multiple-frame images and compared by an image compensation method or a feature matching method, and monitoring information containing a mobile target which moves relative to a static target is output based on the result of the comparison, wherein the position of the mobile target in each of the at least two-frame images has an attribute of indefinite change. In this way, the mobile target in the monitored region can be recognized precisely during the movement of the mobile robot, and monitoring information about the mobile target can be generated for a corresponding prompt, thereby effectively ensuring the safety of the monitored region.


Abstract

The present application provides a monitoring method and device for a mobile target, a monitoring system and a mobile robot. Multiple-frame images captured by an image acquisition device are acquired while the robot moves in a monitored region, at least two-frame images with an overlapped region are selected from the multiple-frame images and compared by an image compensation method or a feature matching method, and monitoring information containing a mobile target which moves relative to a static target is output based on the result of the comparison, wherein the position of the mobile target has an attribute of indefinite change. In this way, the mobile target in the monitored region can be recognized precisely during the movement of the mobile robot, and monitoring information about the mobile target can be generated for a corresponding prompt, thereby effectively ensuring the safety of the monitored region.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation application of U.S. patent application Ser. No. 16/522,717, filed Jul. 26, 2019, which is a continuation application of International Patent Application No. PCT/CN2018/119293, filed Dec. 5, 2018, the entire contents of which are hereby incorporated by reference.
  • TECHNICAL FIELD
  • The present application relates to the field of intelligent mobile robots, in particular to a monitoring method and device for a mobile target, a monitoring system and a mobile robot.
  • BACKGROUND
  • Nowadays, the world has entered a critical period of reform, development and social transformation. Unprecedented social changes, such as profound changes in economic systems, rapid transformations of social structures, accelerated adjustment of benefit patterns, rapid changes in views and values, and a great increase in floating populations, have brought tremendous vitality to social development; meanwhile, various security risks have also emerged, and dramatic changes in public security have put enormous pressure on social security.
  • In order to ensure the security of home environments, many people now consider installing an anti-theft system in their homes, and the existing manners of anti-theft protection mainly include protection by means of persons and protection by means of objects (for example, anti-theft doors, iron barriers, etc.). Under some conditions, the following security measures are also used at home: installing an infrared anti-theft alarming device, an electromagnetic password lock or a monitoring camera. The above anti-theft manners are fixed and obvious, these monitoring devices cannot accurately detect moving targets which intrude illegally, and those who break in illegally may easily evade the monitoring of these anti-theft devices; therefore, reliable, effective and flexible security cannot be provided.
  • SUMMARY
  • In view of the above shortcomings in the prior art, an objective of the present application is to provide a monitoring method and device for a mobile target, a monitoring system and a mobile robot, so as to solve the problem in the prior art that a mobile target cannot be detected effectively and accurately during the movement of a robot.
  • In one aspect, the present application provides a monitoring device for a mobile target. The monitoring device is used in a mobile robot which comprises a movement device and an image acquisition device. The monitoring device for a mobile target comprises: at least one processing device; at least one storage device, configured to store images captured by the image acquisition device under an operating state of the movement device; and at least one program, wherein the at least one program is stored in the at least one storage device, and is invoked by the at least one processing device such that the monitoring device performs a monitoring method for a mobile target. The monitoring method for a mobile target comprises the following steps: acquiring multiple-frame images captured by the image acquisition device under the operating state of the movement device; and outputting monitoring information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; wherein the at least two-frame images are images captured by the image acquisition device within a partially overlapped field of view, and a position of the mobile target in each of the at least two-frame images has an attribute of indefinite change.
  • In some embodiments, the step of performing comparison between at least two-frame images selected from the multiple-frame images comprises the following steps: detecting a suspected target based on the comparison between the at least two-frame images; and tracking the suspected target to determine the mobile target.
  • In some embodiments, the step of detecting a suspected target according to the comparison between the at least two-frame images comprises the following steps: performing image compensation on the at least two-frame images based on movement information of the movement device within a time period between the at least two-frame images; and performing subtraction processing on the compensated at least two-frame images to form a difference image, and detecting the suspected target from the difference image.
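  • By way of illustration only, the compensation-and-subtraction step described above can be sketched as follows in Python with OpenCV and NumPy. The function name, the binarization threshold and the assumption that the movement information reduces to a pure pixel translation (dx_px, dy_px) are ours, not part of the claimed method:

    import cv2
    import numpy as np

    def detect_suspected_target(frame1, frame2, dx_px, dy_px, min_area=100):
        # Compensate frame2 for the robot's motion: shift it back by the
        # pixel displacement measured between the two capture positions.
        h, w = frame2.shape[:2]
        M = np.float32([[1, 0, -dx_px], [0, 1, -dy_px]])
        compensated = cv2.warpAffine(frame2, M, (w, h))

        # Difference image between the compensated frame and the original frame.
        g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(compensated, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(g1, g2)

        # A static target cancels out after compensation; residual blobs above
        # the threshold are candidate (suspected) mobile targets.
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]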
  • In some embodiments, the step of performing comparison between at least two-frame images selected from the multiple-frame images comprises the following steps: detecting a suspected target based on a matching operation on corresponding feature information in the at least two-frame images; and tracking the suspected target to determine the mobile target.
  • In some embodiments, the step of detecting a suspected target based on a matching operation on corresponding feature information in the at least two-frame images comprises the following steps: extracting each feature point in the at least two-frame images respectively, and matching each extracted feature point in the at least two-frame images with a reference three-dimensional coordinate system, wherein the reference three-dimensional coordinate system is formed through performing three-dimensional modeling on a mobile space, and the reference three-dimensional coordinate system is marked with the coordinate of each feature point on all static targets in the mobile space; and detecting a feature point set as the suspected target, wherein the feature point set is composed of feature points in the at least two-frame images that are not matched with the reference three-dimensional coordinate system.
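  • A minimal sketch of the matching operation, simplified to 2-D descriptor matching against a stored bank of static-target descriptors standing in for the reference three-dimensional coordinate system; the ORB detector, the brute-force matcher and the distance threshold are illustrative choices of ours:

    import cv2

    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def unmatched_feature_points(frame, reference_descriptors, max_dist=40):
        # Extract feature points from the current frame.
        keypoints, descriptors = orb.detectAndCompute(frame, None)
        if descriptors is None:
            return []
        # Match each descriptor against the stored static-target descriptors
        # (standing in for the marked reference coordinate system).
        matches = matcher.match(descriptors, reference_descriptors)
        matched_idx = {m.queryIdx for m in matches if m.distance < max_dist}
        # Feature points with no acceptable match form the suspected-target set.
        return [kp.pt for i, kp in enumerate(keypoints) if i not in matched_idx]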
  • In some embodiments, the step of tracking the suspected target to determine the mobile target comprises the following steps: obtaining a moving track of a suspected target through tracking the suspected target; and determining the suspected target as the mobile target when the moving track of the suspected target is continuous.
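  • The continuity test on the moving track can be as simple as the following sketch; the track representation and both thresholds are assumptions for illustration:

    def is_continuous(track, max_gap_s=0.5, max_jump_px=80):
        # track: list of (timestamp, x, y) observations of the suspected target.
        # The track is treated as continuous when consecutive observations are
        # close in both time and image position.
        for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
            if t1 - t0 > max_gap_s:
                return False
            if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > max_jump_px:
                return False
        return len(track) >= 2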
  • In some embodiments, the monitoring method further comprises the following steps: performing object recognition on the mobile target in the captured images, wherein the object recognition is performed by an object recognizer, and the object recognizer includes a trained neural network; and outputting the monitoring information according to a result of the object recognition.
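  • The object recognizer can be any trained neural network; as one hypothetical realization (assuming torchvision 0.13 or later), a pretrained classifier applied to the image region containing the detected mobile target:

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # A pretrained classifier stands in for the trained neural network of the
    # object recognizer; any suitably trained model could be used instead.
    model = models.mobilenet_v2(weights="DEFAULT").eval()
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def recognize(image: Image.Image, box):
        # box = (left, top, right, bottom) around the detected mobile target.
        crop = image.crop(box)
        with torch.no_grad():
            logits = model(preprocess(crop).unsqueeze(0))
        return int(logits.argmax(dim=1))  # class index of the recognized object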
  • In some embodiments, the monitoring method further comprises the following steps: uploading captured images or videos containing images to a cloud server to perform object recognition on the mobile target in the images, wherein the cloud server includes an object recognizer which includes a trained neural network; and receiving a result of object recognition from the cloud server and outputting the monitoring information.
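  • A sketch of the upload step, assuming a simple HTTP interface; the endpoint URL and the response format are hypothetical, since the application does not specify the robot-to-cloud protocol:

    import requests

    # Hypothetical cloud endpoint; the real server address and message format
    # are deployment-specific.
    CLOUD_URL = "https://cloud.example.com/recognize"

    def upload_and_recognize(image_path):
        # Upload the captured image and receive the object recognition result.
        with open(image_path, "rb") as f:
            resp = requests.post(CLOUD_URL, files={"image": f}, timeout=10)
        resp.raise_for_status()
        return resp.json()  # e.g. {"label": "butterfly", "confidence": 0.93}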
  • In some embodiments, the monitoring information comprises one or more of image information, video information, audio information and text information.
  • In another aspect, the present application provides a mobile robot. The mobile robot comprises: a movement device, configured to control movement of the mobile robot according to a received control instruction; an image acquisition device, configured to capture multiple-frame images under an operating state of the movement device; and the monitoring device mentioned above.
  • In another aspect, the present application provides a monitoring system. The monitoring system comprises: a cloud server; and a mobile robot, connected with the cloud server; wherein the mobile robot performs the following steps: acquiring multiple-frame images during movement of the mobile robot; outputting detection information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; uploading the captured images or videos containing images to the cloud server based on the detection information; and outputting monitoring information based on a result of object recognition received from the cloud server; or, wherein the mobile robot performs the following steps: acquiring multiple-frame images during movement of the mobile robot, and uploading the multiple-frame images to the cloud server; and the cloud server performs the following steps: outputting detection information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; and outputting an object recognition result of the mobile target to the mobile robot according to a result of performing object recognition on the mobile target in multiple-frame images, such that the mobile robot outputs monitoring information.
  • As mentioned above, the monitoring method and device for a mobile target, the monitoring system and the mobile robot of the present application have the following beneficial effects: multiple-frame images captured by an image acquisition device are acquired while the robot moves in a monitored region, at least two-frame images with an overlapped region are selected from the multiple-frame images and compared by an image compensation method or a feature matching method, and monitoring information containing a mobile target which moves relative to a static target is output based on the result of the comparison, wherein the position of the mobile target in each of the at least two-frame images has an attribute of indefinite change. In this way, the mobile target in the monitored region can be recognized precisely during the movement of the mobile robot, and monitoring information about the mobile target can be generated for a corresponding prompt, thereby effectively ensuring the safety of the monitored region.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a flow diagram of a monitoring method for a mobile target of the present application in one embodiment.
  • FIG. 2 shows image diagrams of selected two-frame images in one embodiment of the present application.
  • FIG. 3 shows image diagrams of selected two-frame images in another embodiment of the present application.
  • FIG. 4 shows a flow diagram of performing comparison between at least two-frame images selected from multiple-frame images in one embodiment of the present application.
  • FIG. 5 shows a flow diagram of detecting a suspected target based on comparison between at least two-frame images in one embodiment of the present application.
  • FIG. 6 shows an image diagram of a first frame image selected in one embodiment of the present application.
  • FIG. 7 shows an image diagram of a second frame image selected in one embodiment of the present application.
  • FIG. 8 shows a flow diagram of tracking a suspected target to determine that the suspected target is a mobile target in one embodiment of the present application.
  • FIG. 9 shows a flow diagram of performing comparison between at least two-frame images selected from multiple-frame images in one embodiment of the present application.
  • FIG. 10 shows a flow diagram of detecting a suspected target based on a matching operation on corresponding feature information in at least two-frame images in one embodiment of the present application.
  • FIG. 11 shows a flow diagram of tracking a suspected target to determine a mobile target in one embodiment of the present application.
  • FIG. 12 shows a flow diagram of object recognition in one embodiment of the present application.
  • FIG. 13 shows a flow diagram of object recognition in another embodiment of the present application.
  • FIG. 14 shows a structural schematic diagram of a monitoring device for a mobile target used in a mobile robot of the present application in one embodiment.
  • FIG. 15 shows a structural schematic diagram of a mobile robot of the present application in one embodiment.
  • FIG. 16 shows a composition diagram of a monitoring system of the present application in one embodiment.
  • DETAILED DESCRIPTION
  • Implementations of the present application will be described below through specific embodiments, and those skilled in the art can easily understand other advantages and effects of the present application from the contents disclosed in the present specification.
  • In the following description, several embodiments of the present application are described with reference to the attached figures. It should be understood that other embodiments may also be used, and changes in mechanical composition, structure, electrical design and operation may be made without departing from the spirit and scope of this application. The following detailed description should not be considered restrictive, and the scope of the implementations of this application is limited only by the published patent claims. The terminology used herein is intended only to describe specific embodiments and is not intended to restrict this application. Spatial terms, such as "up", "down", "left", "right", "lower part", "higher part", "bottom", "below", "above", "upwards", etc., can be used in this application to illustrate the relationship between one element and another or one feature and another shown in the figures.
  • Moreover, as used herein, such single forms as “one”, “a” and “the” aim at also including the plural forms, unless contrarily indicated in the text. It should be further understood that, such terms as “comprise” and “include” indicate the existence of the features, steps, operations, elements, components, items, types and/or groups, but do not exclude the existence, emergence or addition of one or more other features, steps, operations, elements, components, items, types and/or groups. The terms “or” and “and/or” used herein are explained to be inclusive, or indicate any one or any combination. Therefore, “A, B or C” or “A, B and/or C” indicates “any of the following: A; B; C; A and B; A and C; B and C; A, B and C”. Exceptions of the definition only exist when the combinations of elements, functions, steps or operations are mutually exclusive inherently in some ways.
  • In order to ensure the security of family environments, many people now consider installing an anti-theft system at home. The existing manners of anti-theft protection mainly include protection by means of persons and protection by means of objects (for example, anti-theft doors, iron barriers, etc.). Under some conditions, the following security measures are also used at home: installing infrared anti-theft alarming devices, electromagnetic password locks or monitoring cameras. The above anti-theft manners are fixed and obvious, and those who break in illegally may easily evade the monitoring of these anti-theft devices; therefore, reliable and effective security cannot be provided.
  • Mobile robots perform mobile operations based on navigation control technology. When they are at an unknown location in an unknown environment, mobile robots build maps and perform navigation operations based on VSLAM (Visual Simultaneous Localization and Mapping) technology. Specifically, mobile robots construct maps through visual information provided by visual sensors and movement information provided by position measurement devices, and navigate and move independently based on the map. Wherein, the visual sensors include, for example, a camera device, and the position measurement devices include, for example, a speed sensor, an odometer sensor, a distance sensor and a cliff sensor. The mobile robot moves on the plane where it moves (i.e. the moving plane), and acquires and stores images in which entity objects are projected onto the moving plane. The image acquisition device captures entity objects in the field of view at the location of the mobile robot and projects them onto the moving plane, so as to obtain the projection images. The entity objects include, for example, a TV set, an air conditioner, a chair, shoes and a leather ball. In existing practical applications, the mobile robot determines the current position by the position information provided by the position measurement device, and also by identifying image features contained in the images captured by the image acquisition device, through comparing image features captured at the current position with image features stored in the map.
  • The mobile robot is, for example, a security robot. After the security robot is started, it can traverse a region where security protection is needed according to a determined or random route. An existing security robot generally uploads all the captured images to a monitoring center, and cannot give a prompt about a suspected object in the captured images according to the situation; therefore, the security robot is not intelligent enough.
  • In view of this, the present application provides a monitoring method for a mobile target used in a mobile robot. The mobile robot comprises a movement device and an image acquisition device, and the monitoring method can be performed by a processing device contained in the mobile robot, wherein the processing device is an electronic device which is capable of performing numeric calculation, logical calculation and data analysis, and the processing device includes but is not limited to: a CPU, a GPU, an FPGA, and a volatile memory configured to temporarily store intermediate data generated during calculation. In the monitoring method, monitoring information containing a mobile target which moves relative to a static target is output through comparing at least two-frame images selected from multiple-frame images captured by an image acquisition device, wherein the multiple-frame images are captured by the image acquisition device under an operating state of the movement device. The static target for example includes but is not limited to: ball, shoe, wall, flowerpot, cloth and hat, roof, lamp, tree, table, chair, refrigerator, television, sofa, sock, tiled object, and cup. Wherein, the tiled object includes but is not limited to a ground mat or floor tile map paved on the floor, and a tapestry or picture hung on a wall. The mobile robot can be for example a specific security robot, and the security robot monitors a monitored region according to the monitoring method for a mobile target of the present application. In some other embodiments, the mobile robot can also be another mobile robot which contains a module configured to perform the monitoring method for a mobile target of the present application, for example, a cleaning robot, a mobile robot accompanying family members or a robot for cleaning glass. For example, when the mobile robot is a cleaning robot, it can traverse the whole to-be-cleaned region according to a map constructed in advance using the VSLAM technique and in combination with the image acquisition device of the cleaning robot. When the cleaning robot starts cleaning operations, the module of the cleaning robot which carries the monitoring method for a mobile target of the present application is started at the same time, thereby monitoring security while cleaning. The movement device for example includes wheels and drivers of the wheels, wherein the driver can be for example a motor. The movement device is used to drive the robot to move back and forth in a reciprocating manner, move in a rotational manner or move in a curvilinear manner according to a planned moving track, or to drive the mobile robot to adjust a pose.
  • Herein, the mobile robot at least includes an image acquisition device. The image acquisition device captures images within a field of view at a position where the mobile robot is located. For example, a mobile robot includes an image acquisition device which is arranged on the top, shoulder or back of the mobile robot, and the principal optic axis of the image acquisition device is vertical to a moving plane of the mobile robot, or the principal optic axis is consistent with a travelling direction of the mobile robot. In some other embodiments, the principal optic axis can also be set to form a certain angle (for example, an angle between 50° and 86°) with the moving plane on which the mobile robot is located, to acquire a greater image acquisition range. In other embodiments, the principal optic axis of the image acquisition device can also be set in many other ways, for example, the image acquisition device can rotate according to a certain rule or rotate randomly, in this case, an angle between the optic axis of the image acquisition device and a travelling direction of the mobile robot is constantly changed, therefore, installation manners of the image acquisition device and states of the principal optic axis of the image acquisition device are not limited to what are enumerated in the present embodiment. For another example, the mobile robot includes two or more image acquisition devices, for example, a binocular image acquisition device or multiple image acquisition devices. For two or more image acquisition devices, the principal optic axis of one image acquisition device is vertical to the moving plane of the mobile robot, or the principal optic axis is consistent with a travelling direction of the mobile robot. In some other embodiments, the principal optic axis can also be set to form a certain angle with the moving plane, so as to acquire a greater image acquisition range. In other embodiments, the principal optic axis of the image acquisition device can also be set in many other ways, for example, the image acquisition device can rotate according to a certain rule or rotate randomly, in this case, an angle between the optic axis of the image acquisition device and a travelling direction of the mobile robot is constantly changed, therefore, installation manners of the image acquisition device and states of the principal optic axis of the image acquisition device are not limited to what are enumerated in the present embodiment. The image acquisition device includes but is not limited to: fisheye camera module, wide angle (or non-wide angle) camera module, depth camera module, camera module integrated with an optical system or CCD chip, and camera module integrated with an optical system and CMOS chip.
  • The power supply system of the image acquisition device can be controlled by the power supply system of the mobile robot, and during the period that the mobile robot is powered on and moves, the image acquisition device begins to capture images.
  • Please refer to FIG. 1 which shows a flow diagram of the monitoring method of a mobile target of the present application in one embodiment. As shown in FIG. 1, the monitoring method of the mobile target includes the following steps:
  • Step S100: multiple-frame images are captured by an image acquisition device under an operating state of the movement device. In an embodiment where the mobile robot is a cleaning robot, a movement device of the mobile robot can include a travelling mechanism and a travelling drive mechanism, wherein the travelling mechanism can be arranged at a bottom of the robot body, and the travelling drive mechanism is arranged inside the robot body. The travelling mechanism can for example include a combination of two straight-going walking wheels and at least one auxiliary steering wheel, wherein the two straight-going walking wheels are respectively arranged at two opposite sides at a bottom of the robot body, and the two straight-going walking wheels can be independently driven by two corresponding travelling drive mechanisms respectively, that is, a left straight-going walking wheel is driven by a left travelling drive mechanism, while a right straight-going walking wheel is driven by a right travelling drive mechanism. The universal walking wheel or the straight-going walking wheel can be provided with a bias drop suspension system which is fixed in a movable manner, for example, the bias drop suspension system can be installed on a robot body in a rotatable manner and receives spring bias which is downwards and away from the robot body. The spring bias enables the universal walking wheel or the straight-going walking wheel to maintain contact and traction with the ground with a certain landing force. In practical applications, under the condition that the at least one auxiliary steering wheel does not work, the two straight-going walking wheels are mainly used for going forward and backward, while when the at least one auxiliary steering wheel participates and matches with the two straight-going walking wheels, movements such as steering and rotating can be realized. The travelling drive mechanism can include a drive motor and a control circuit configured to control the drive motor, and the drive motor can be used to drive the walking wheels in the travelling mechanism to move. In specific implementations, the drive motor can be for example a reversible drive motor, and a gear shift mechanism can be further arranged between the drive motor and the axle of a walking wheel. The travelling drive mechanism can be installed on the robot body in a detachable manner, thereby facilitating disassembly and maintenance. In the present embodiment, the mobile robot which is a cleaning robot captures multiple-frame images when moving, in other words, in step S100, the processing device acquires multiple-frame images captured by the image acquisition device under an operating state of the movement device. In the embodiment, the multiple-frame images can be for example multiple-frame images acquired in a continuous time period, or multiple-frame images acquired within two or more discontinuous time periods.
  • Step S200: monitoring information containing a mobile target which moves relative to a static target is output according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; wherein the at least two-frame images are images captured by the image acquisition device within partially overlapped field of view. In the present embodiment, a processing device of the mobile robot outputs monitoring information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images.
  • In some embodiments, the static target for example includes but is not limited to: ball, shoe, wall, flowerpot, cloth and hat, roof, lamp, tree, table, chair, refrigerator, television, sofa, sock, tiled object, and cup. Wherein, the tiled object includes but is not limited to ground mat or floor tile map paved on the floor, and tapestry and picture hung on a wall.
  • Wherein, in step S200, the two-frame images selected by the processing device are images captured by the image acquisition device in a partially overlapped field of view, that is, the processing device determines to select a first frame image and a second frame image on the basis that the two-frame images contain an image overlapped region, and the overlapped field of view contains a static target, so as to monitor a mobile target which moves relative to the static target in the overlapped field of view. In order to ensure the effectiveness of the compared results between the selected two-frame images, the proportion of the image overlapped region in the first frame image and in the second frame image can also be set, for example, the proportions of the image overlapped region in the first frame image and in the second frame image are respectively at least 50% (but the proportions are not limited to this value; different proportions of the first frame image and the second frame image can be set depending on the situation). The selection of the first frame image and the second frame image should be continuous to some extent, and while ensuring that the first frame image and the second frame image have a certain proportion of an image overlapped region, the continuity of the moving track of the mobile target can be judged based on the acquired images. Several manners of selecting the images will be enumerated below; the image selection methods described in the examples are merely some specific manners, the manners of selecting the first frame image and the second frame image in practical applications are not limited to the image selection methods herein, and other image selection manners which can ensure that the selected two-frame images are relatively continuous and have an image overlapped region with a set proportion can all be applied to the present application.
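  • For a sense of how such a set proportion might be checked, the following sketch estimates the overlap for a camera whose principal optic axis is vertical to the moving plane and whose movement between the two positions is a pure translation; the pixel-per-meter scale and the closed form are simplifying assumptions of ours:

    def overlap_proportion(image_w, image_h, dx_m, dy_m, px_per_m):
        # For a downward-looking camera, a pure translation of the robot shifts
        # the field of view by the same distance; the overlapped region shrinks
        # accordingly.
        dx_px = abs(dx_m) * px_per_m
        dy_px = abs(dy_m) * px_per_m
        overlap_w = max(0.0, image_w - dx_px)
        overlap_h = max(0.0, image_h - dy_px)
        return (overlap_w * overlap_h) / (image_w * image_h)

    # Keep a frame pair only if the overlapped region is at least 50% of a frame,
    # e.g. overlap_proportion(640, 480, 0.10, 0.0, 800) >= 0.5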
  • In some implementations, the processing device respectively selects a first frame image and a second frame image at a first position and a second position, wherein, the image acquisition device has an overlapped field of view at the first position and the second position.
  • For example, the image acquisition device can capture videos, since videos are composed of image frames, during the movement period of the mobile robot, the processing device can continuously or discontinuously collect image frames in the acquired videos to obtain multiple-frame images, and select the first frame image and the second frame image according to a preset number of frame intervals, wherein the two-frame images have a partially overlapped region, and then the processing device performs image comparison between selected two-frame images.
  • For another example, during the movement period of the mobile robot, the processing device can preset time intervals at which the image acquisition device captures images, and acquire multiple-frame images captured by the image acquisition device at different times. Two-frame images are then selected from the multiple-frame images for comparison, and the time interval should be at least smaller than the time taken by the mobile robot to move through one field of view, to ensure that the two-frame images selected from the multiple-frame images have a partially overlapped part.
  • For another example, the mobile robot captures images within the field of view of the image acquisition device at a preset time interval, and then the processing device acquires the images and selects two of them as a first frame image and a second frame image, wherein the two-frame images have a partially overlapped part. The time period can be represented by a time unit, or by the number of intervals of image frames.
  • For another example, the mobile robot is in communication with an intelligent terminal, and the intelligent terminal can modify the time period through a specific APP (application). For example, after the APP is opened, a modification interface of the time period is displayed on a touch screen of the intelligent terminal, and the time period is modified through a touch operation on the modification interface; or a time period modification instruction is directly sent to the mobile robot to modify the time period. The time period modification instruction can be, for example, a voice containing a modification instruction, such as "the period is modified to be three seconds" or "the number of image frame intervals is modified to be five".
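  • The frame selection described in the above examples can be sketched as follows, with the number of frame intervals as the configurable period (the value that the client APP would modify); the generator form is our choice:

    import cv2

    def frame_pairs(video_source, frame_interval=5):
        # Yield (first frame, second frame) pairs separated by a preset number
        # of frame intervals; video_source may be a file path or a camera index.
        cap = cv2.VideoCapture(video_source)
        prev = None
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % frame_interval == 0:
                if prev is not None:
                    yield prev, frame
                prev = frame
            index += 1
        cap.release()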
  • In step S200, the position of the mobile target in each of at least two-frame images has an attribute of indefinite change. In some embodiments, the mobile robot moves, through the movement device, based on a map which is constructed in advance, and the image acquisition device captures multiple-frame images in the movement process. The processing device selects two-frame images from the multiple-frame images for comparison, the selected two-frame images are respectively a first frame image and a second frame image according to the order of the image selection. A corresponding position at which the mobile robot acquires the first frame image is the first position, while a corresponding position at which the mobile robot acquires the second frame image is a second position. The two-frame images have an image overlapped region, and a static target exists in the overlapped field of view of the image acquisition device. Since the mobile robot is under a moving state, the position of the static target in the second frame image has changed definitely relative to the position of the static target in the first frame image, the definite change amplitude of positions of the static target in the two-frame images has correlation with movement information of the mobile robot at the first position and the second position, and the movement information can be for example moving distance and pose change information of the mobile robot from the first position to the second position. In some embodiments, the mobile robot contains a position measurement device, which can be used to acquire movement information of the mobile robot, and relative position information between the first position and the second position can be measured according to the movement information.
  • The position measurement device includes but is not limited to a displacement sensor, a ranging sensor, a cliff sensor, an angle sensor, a gyroscope, a binocular image acquisition device and a speed sensor, which are arranged on a mobile robot. During the movement of the mobile robot, the position measurement device constantly detects movement information and provides it to the processing device. The displacement sensor, the gyroscope, the speed sensor and so on can be integrated into one or more chips. The ranging sensor and the cliff sensor can be set at the side of the mobile robot. For example, the ranging sensor in a cleaning robot is set at the edge of the housing; and the cliff sensor in a cleaning robot is arranged at the bottom of the mobile robot. According to the type and number of sensors arranged on a mobile robot, the movement information acquired by the processing device includes but is not limited to displacement information, angle information, information about distance between the robot and an obstacle, velocity information and travelling direction information. For example, the position measurement device is a counting sensor arranged on a motor of the mobile robot, the rotation number that the motor operates is counted so as to acquire relative displacement of the mobile robot from the first position to the second position, and an angle at which the motor operates is used to acquire pose information, etc.
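  • For example, counting motor rotations can be turned into the relative displacement and pose change as follows; this is standard differential-drive dead reckoning, and the wheel parameters are illustrative, not taken from the application:

    import math

    TICKS_PER_REV = 360          # illustrative encoder resolution
    WHEEL_RADIUS_M = 0.03        # illustrative wheel radius
    WHEEL_BASE_M = 0.20          # illustrative distance between the two wheels

    def odometry_step(left_ticks, right_ticks, x, y, theta):
        # Convert counted motor rotations into per-wheel travel distance.
        dl = 2 * math.pi * WHEEL_RADIUS_M * left_ticks / TICKS_PER_REV
        dr = 2 * math.pi * WHEEL_RADIUS_M * right_ticks / TICKS_PER_REV
        d = (dl + dr) / 2.0                    # displacement of the robot center
        dtheta = (dr - dl) / WHEEL_BASE_M      # change of pose (heading)
        # Dead-reckoned position of the robot after this step.
        x += d * math.cos(theta + dtheta / 2.0)
        y += d * math.sin(theta + dtheta / 2.0)
        return x, y, theta + dtheta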
  • In some other embodiments, taking a grid map as an example, a mapping relationship between a unit grid length and actual displacement is determined in advance. The number of grids that the mobile robot moves from a first position to a second position is determined based on the movement information obtained during movement of the mobile robot, and further relative position information between the first position and the second position is acquired.
  • Taking a vector map constructed in advance as an example, a mapping relationship between a unit vector length and actual displacement is determined beforehand, the vector length that the mobile robot moves from the first position to the second position is determined according to movement information obtained during the movement of the mobile robot, and further relative position information of the two positions is obtained. The vector length can be calculated in pixels. Moreover, relative to the position of the static target in the first frame image, the position of the static target in the second frame image is shifted by a vector length which corresponds to the relative position information; therefore, the movement of the static target captured in the second frame image relative to the static target captured in the first frame image can be determined based on the relative position information of the mobile robot, thereby having an attribute of definite change. The movement of the mobile target in the selected two-frame images in the overlapped field of view does not conform to the above attribute of definite change.
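  • The attribute of definite change can be stated compactly: a static target's projection must shift by the amount predicted from the robot's relative displacement. A sketch, assuming a pure translation and a known pixel scale (the tolerance is an illustrative value):

    def expected_shift_px(dx_m, dy_m, px_per_m):
        # Pixel shift predicted for any static target from the robot's relative
        # displacement and the map scale.
        return dx_m * px_per_m, dy_m * px_per_m

    def conforms_to_definite_change(p1, p2, dx_m, dy_m, px_per_m, tol_px=10):
        # p1, p2: the target's pixel position in the first and second frame.
        ex, ey = expected_shift_px(dx_m, dy_m, px_per_m)
        return (abs((p2[0] - p1[0]) - ex) <= tol_px and
                abs((p2[1] - p1[1]) - ey) <= tol_px)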
  • Please refer to FIG. 2 which shows image diagrams of selected two-frame images in one embodiment of the present application. In the present embodiment, the principal optic axis of the image acquisition device is set to be vertical to a moving plane, then the plane on which a two-dimensional image captured by an image acquisition device is located is in parallel with the moving plane of the mobile robot. In this setting manner, the position of an entity object in a projected image captured by the image acquisition device can be used to indicate the position where the entity object is projected onto the moving plane of the mobile robot, and the angle of the position of the entity object in a projected image relative to a travelling direction of the mobile robot is used to indicate the angle of the position at which the entity object is projected onto the moving plane of the mobile robot relative to the travelling direction of the mobile robot. The selected two-frame images in FIG. 2 are respectively a first frame image and a second frame image, the corresponding position at which the mobile robot acquires the first frame image is the first position P1, and the corresponding position at which the mobile robot acquires the second frame image is the second position P2. In the present embodiment, when the mobile robot moves from the first position P1 to the second position P2, only the distance is changed, but no pose is changed. Therefore, relative position information of the mobile robot at the first position P1 and the second position P2 can be acquired only by measuring relative displacement between the first position P1 and the second position P2. The two-frame images have an image overlapped region, and a static target O as shown in FIG. 2 exists in the overlapped field of view of the image acquisition device. Since the mobile robot is under a moving state, the position of the static target O in the second frame image is changed definitely relative to the position of the static target O in the first frame image, and the definite change amplitude of the position of the static target O in the two-frame images has correlation with movement information of the mobile robot at the first position P1 and the second position P2, and in the present embodiment, the movement information can be for example a movement distance when the mobile robot moves from the first position P1 to the second position P2. In the present embodiment, the mobile robot includes a position measurement device, and movement information of the mobile robot is acquired by the position measurement device of the mobile robot. For another example, the position measurement device measures a moving speed of the mobile robot, and calculates a relative displacement from the first position to the second position based on the moving speed and moving time. In some other embodiments, the position measurement device is a GPS system (Global Positioning System), and relative position information between the first position P1 and the second position P2 is acquired according to localization information of the GPS in the first position and the second position. As shown in FIG. 2, a projection of the static target O in the first frame image is a static target projection O1, and a projection of the static target O in the second frame image is a static target projection O2, and it can be clearly seen from FIG. 
2 that the position of the static target projection O1 in the first frame image has changed to the position of the static target projection O2 in the second frame image, and the change in distance of the static target projection O2 relative to the static target projection O1 in the images is in a certain proportion to the relative displacement between the first position P1 and the second position P2; this change in distance can be definitely acquired according to a ratio of unit actual distance to a pixel in the image. Therefore, the movement of a static target captured in a second frame image relative to the static target captured in a first frame image can be determined based on relative position information of the mobile robot, which has an attribute of definite change, while the movement of the mobile target in the overlapped field of view in the selected two-frame images does not conform to the above attribute of definite change.
  • In some other embodiments, the position measurement device is a device which performs localization based on measured wireless signals, for example, the position measurement device is a bluetooth (or WiFi) localization device. The position measurement device determines relative position of the first position P1 and the second position P2 relative to a preset wireless locating signal transmitting device respectively by measuring power of received wireless locating signals at the first position P1 and the second position P2, thereby relative position information between the first position P1 and the second position P2 is acquired.
  • During the movement of a mobile robot, when a mobile target exists in a captured first frame image and second frame image, the movement of the mobile target has an attribute of indefinite change. Please refer to FIG. 3 which shows image diagrams of selected two-frame images in one embodiment of the present application. In the present embodiment, the principal optic axis of the image acquisition device is set to be vertical to a moving plane, then the plane on which a two-dimensional image captured by the image acquisition device is located is in parallel with the moving plane of the mobile robot. In this setting manner, the position of an entity object in the projected image captured by the image acquisition device can be used to indicate the position where the entity object is projected onto the moving plane of the mobile robot, and the angle of the position of the entity object in the projected image relative to the travelling direction of the mobile robot is used to indicate the angle of the position at which the entity object is projected onto the moving plane of the mobile robot relative to the travelling direction of the mobile robot. The selected two-frame images in FIG. 3 are respectively a first frame image and a second frame image, the corresponding position at which the mobile robot acquires the first frame image is the first position P1′, and the corresponding position at which the mobile robot acquires the second frame image is the second position P2′. In the present embodiment, when the mobile robot moves from the first position P1′ to the second position P2′, only a distance is changed, while no pose is changed. Therefore, relative position information of the mobile robot at the first position P1′ and the second position P2′ can be acquired by just measuring relative displacement between the first position P1′ and the second position P2′. The two-frame images have an image overlapped region, and a mobile target Q as shown in FIG. 3 exists in the overlapped field of view of the image acquisition device. In the process that the mobile robot moves from the first position to the second position, the mobile target Q moves and becomes the mobile target Q′ at a new position, the position of the mobile target Q in the second frame image is changed indefinitely relative to the position of the target Q in the first frame image, that is, the position change amplitude of the mobile target Q in the two-frame images has no correlation with movement information of the mobile robot in the first position P1′ and the second position P2′, the position change of the mobile target Q in the two-frame images cannot be figured out based on movement information of the mobile robot at the first position P1′ and the second position P2′, and in the present embodiment, the movement information can be for example a movement distance when the mobile robot moves from the first position P1′ to the second position P2′. In the present embodiment, the mobile robot includes a position measurement device, and movement information of the mobile robot is acquired by the position measurement device of the mobile robot. For another example, the position measurement device measures a moving speed of the mobile robot, and calculates a relative displacement from a first position to a second position based on the moving speed and moving time. 
In some other embodiments, the position measurement device is a GPS system or a device which performs localization based on measured wireless signals, and relative position information between the first position P1′ and the second position P2′ is acquired according to localization information of the position measurement device at the first position and the second position. As shown in FIG. 3, a projection of a mobile target Q in a first frame image is a mobile target projection Q1, the position of a mobile target Q′ in a second frame image is a mobile target projection Q2, and when the mobile target Q is a static target, the projection of the mobile target Q in the second frame image should be a projection Q2′, that is, the mobile target projection Q2′ is an image projection after the mobile target projection Q1 is subjected to a definite change when the mobile robot moves from a first position P1′ to a second position P2′, while in the present embodiment, the position change of the mobile target Q in the two-frame images cannot be figured out according to movement information of the mobile robot at the first position P1′ and the second position P2′, and the mobile target Q has an attribute of indefinite change during the movement of the mobile robot.
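  • Conversely, a target whose observed projection shift (Q1 to Q2) deviates from the shift predicted from the robot's movement information (Q1 to the expected Q2′) exhibits the attribute of indefinite change; a self-contained sketch of this test under the same pure-translation assumption:

    def is_candidate_mobile_target(p1, p2, dx_m, dy_m, px_per_m, tol_px=10):
        # Predicted shift of a static projection (Q1 -> Q2') from the robot's
        # movement information; the observed Q2 deviating from Q2' marks an
        # indefinite change, i.e. a candidate mobile target.
        ex, ey = dx_m * px_per_m, dy_m * px_per_m
        dev_x = (p2[0] - p1[0]) - ex
        dev_y = (p2[1] - p1[1]) - ey
        return (dev_x ** 2 + dev_y ** 2) ** 0.5 > tol_px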
  • Please refer to FIG. 4 which shows a flow diagram of performing comparison between at least two-frame images selected from multiple-frame images in one embodiment of the present application. The step of performing comparison between at least two-frame images selected from the multiple-frame images further includes the following step S210 and step S220.
  • In step S210, the processing device detects a suspected target according to a comparison between the at least two-frame images. Wherein the suspected target is a target with an attribute of indefinite change in the first frame image and the second frame image, and the suspected target moves relative to a static target within an image overlapped region of the first frame image and the second frame image. And in some embodiments, please refer to FIG. 5 which shows a flow diagram of detecting a suspected target according to comparison between at least two-frame images in one embodiment of the present application. That is, the step of detecting a suspected target according to comparison between the at least two-frame images is realized by step S211 and step S212 in FIG. 5.
  • In step S211, the processing device performs image compensation on the at least two-frame images based on movement information of the movement device within the time period between the at least two-frame images. In some embodiments, while the mobile robot moves from a first position to a second position, movement information is generated due to the movement, and the movement information contains the relative displacement and relative pose change of the mobile robot from the first position to the second position. The movement information can be measured by the position measurement device; according to a proportional relationship between actual length and unit length in an image captured by the image acquisition device, a definite relative displacement of the position of the projected image of the static target within the image overlapped region of the second frame image and the first frame image is acquired, the relative pose change of the mobile robot is acquired through a pose detection device of the mobile robot, and further image compensation is performed on the first frame image or the second frame image based on the movement information. For example, the image compensation is performed on the first frame image according to the movement information, or the image compensation is performed on the second frame image according to the movement information.
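  • One way to realize this compensation, assuming the pose change is a planar rotation about the image center plus a translation and that the map scale in pixels per meter is known; OpenCV's rotation-matrix helper does the bookkeeping, and the sign conventions here are illustrative:

    import cv2

    def compensation_warp(image, dx_m, dy_m, dtheta_deg, px_per_m):
        # Build an affine transform that undoes the robot's movement between
        # the two frames: rotate about the image center by -dtheta and shift
        # back by the measured displacement converted to pixels.
        h, w = image.shape[:2]
        M = cv2.getRotationMatrix2D((w / 2, h / 2), -dtheta_deg, 1.0)
        M[0, 2] -= dx_m * px_per_m
        M[1, 2] -= dy_m * px_per_m
        return cv2.warpAffine(image, M, (w, h))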
  • In step S212, the processing device performs a subtraction processing on the compensated at least two-frame images to form a difference image, that is, the subtraction processing is performed on the compensated second frame image and the original first frame image, or on the compensated first frame image and the original second frame image. When a mobile target which moves relative to a static target does not exist in the image overlapped region of the first frame image and the second frame image, the result of subtraction between the compensated images should be zero, and the difference image between the image overlapped regions of the first frame image and the second frame image should not contain any feature, that is, the image overlapped regions of the compensated image and the other original image are the same. When a mobile target which moves relative to a static target exists in the image overlapped region of the first frame image and the second frame image, the result of subtraction between the compensated images is not zero, and the difference image regarding the image overlapped region of the first frame image and the second frame image contains discriminative features, that is, the image overlapped regions of the compensated image and the other original image are not the same, and parts which cannot be coincided exist. However, if a suspected target is judged to exist merely because a discriminative feature exists in the difference image of the image overlapped regions of the compensated two-frame images, or because the image overlapped regions of the compensated two-frame images cannot be completely coincided with each other, misjudgment may occur. For example, when the mobile robot is at the first position, a lamp which is turned off exists in the overlapped field of view, but when the mobile robot is at the second position, the lamp in the overlapped field of view is turned on; according to the above step, a discriminative feature exists in the image overlapped region of the captured first frame image and second frame image, and the difference result of the image overlapped regions after the images are compensated is not zero, that is, the image overlapped regions after the images are compensated cannot be completely coincided. Therefore, if a suspected target is judged only in the above manner, the lamp will be misjudged to be a suspected target. Accordingly, under the condition that the difference image is not zero, if a first moving track of an object within the time period during which the image acquisition device captures the first frame image and the second frame image is obtained according to the difference image, the object is judged to be the suspected target, that is, a suspected target corresponding to the first moving track exists in the overlapped field of view.
  • In some embodiments, please refer to FIG. 6, which shows an image diagram of a first frame image selected in one embodiment of the present application, and to FIG. 7, which shows an image diagram of a second frame image selected in one embodiment of the present application. FIG. 6 shows a first frame image captured by the image acquisition device of the mobile robot at a first position, and FIG. 7 shows a second frame image captured after the mobile robot moves from the first position to a second position; the movement information of the mobile robot from the first position to the second position is measured by the position measurement device of the mobile robot. The first frame image and the second frame image have an image overlapped region, shown by a dotted box in FIG. 6 and FIG. 7, which corresponds to the overlapped field of view of the image acquisition device at the first position and the second position. Moreover, the overlapped field of view contains multiple static targets, for example, a chair, a window, a book shelf, a clock, a sofa, and a bed. A moving butterfly A exists in FIG. 6 and FIG. 7; the butterfly A moves relative to the static targets in the overlapped field of view and is located in the image overlapped region of the first frame image and the second frame image. One static target in the overlapped field of view is now selected to describe the movement of the butterfly A, and the static target can be any one of the chair, window, book shelf, clock, sofa, bed and so on. Since the image of the clock in FIG. 6 and FIG. 7 is relatively complete, its shape is regular and easily recognized, and its size shows the movement of the butterfly A well, the clock is selected herein as the static target against which the movement of the butterfly A is shown. It can be seen that the butterfly A is at the left side of the clock in FIG. 6, while it is at the right side of the clock in FIG. 7. According to the movement information measured by the position measurement device, the second frame image is compensated, and the processing device performs subtraction processing on the compensated second frame image and the original first frame image to form a difference image. The difference image has a discriminative feature, i.e. the butterfly A, which exists simultaneously in the first frame image and the second frame image and which cannot be eliminated through subtraction. Based on this, the butterfly A is judged to move (from the left side of the clock to the right side of the clock) relative to a static target (for example, the clock) in the overlapped field of view while the mobile robot moves from the first position to the second position, and the positions of the butterfly A in the first frame image and the second frame image have an attribute of indefinite change.
  • In another specific embodiment, when the mobile robot is at a first position, a first frame image as shown in FIG. 6 is captured by the image acquisition device, and when the mobile robot moves to a second position, a second frame image as shown in FIG. 7 is captured. The first frame image and the second frame image have an image overlapped region as shown in FIG. 6 and FIG. 7, which corresponds to the overlapped field of view of the image acquisition device at the first position and the second position. Moreover, the overlapped field of view includes multiple static targets, for example, a chair, a window, a book shelf, a clock, a sofa, a bed, and so on. In addition, a moving butterfly A exists in the present embodiment, and the butterfly A moves relative to the static targets in the overlapped field of view of FIG. 6 and FIG. 7. In the first frame image, the butterfly A is for example at the end of the bed and within the image overlapped region, while in the second frame image, the butterfly A is for example at the head of the bed and outside the image overlapped region. In this case, according to the movement information measured by the position measurement device, the second frame image is compensated, and the processing device performs subtraction processing on the compensated second frame image and the original first frame image to form a difference image. The difference image has a discriminative feature, i.e. the butterfly A, which appears within the image overlapped region of the first frame image but has no counterpart at the corresponding position of the compensated second frame image, and which therefore cannot be eliminated through subtraction. Based on this, the butterfly A is judged to move (from the end of the bed to the head of the bed) relative to a static target (for example, the bed) in the overlapped field of view while the mobile robot moves from the first position to the second position, and the positions of the butterfly A in the first frame image and the second frame image have an attribute of indefinite change.
  • Movement of a mobile target is generally continuous. In order to prevent misjudgment in some special situations and to improve the accuracy and effectiveness of the system, step S220 is further performed, that is, the suspected target is tracked to determine whether the suspected target is a mobile target. A special situation is, for example, that hanging decorations or ceiling lamps swing regularly at a certain amplitude because of wind. Generally, such swinging is merely regular back-and-forth movement within a small range, or irregular movement at a small amplitude, and it cannot form continuous movement. Nevertheless, an object which swings due to wind forms a discriminative feature in the difference image and forms a moving track, so according to the method shown in FIG. 5 it will be judged to be a suspected target, and it would be misjudged to be a mobile target if only the method shown in FIG. 5 were used.
  • Please refer to FIG. 8, which shows a flow diagram of tracking a suspected target to determine that the suspected target is a mobile target in one embodiment of the present application. In some embodiments, the method of tracking the suspected target to determine a mobile target is described in step S221 and step S222 shown in FIG. 8.
  • In step S221, the processing device acquires a moving track of a suspected target through tracking the suspected target; and in step S222, if the moving track of the suspected target is continuous, the suspected target is determined to be a mobile target. In some embodiments, among the multiple-frame images captured by the image acquisition device, a third frame image is further captured within the field of view while the mobile robot moves from the second position to a third position. The first frame image, the second frame image and the third frame image are acquired in sequence, the second frame image and the third frame image have an image overlapped region, and a comparison detection is performed on the second frame image and the third frame image according to step S211 and step S212. When the subtraction between the second frame image and the compensated third frame image yields a non-zero difference image over their image overlapped regions, that is, when a discriminative feature exists in the difference image and exists simultaneously in the second frame image and the third frame image, a second moving track of the suspected target within the time period when the image acquisition device captures the second frame image and the third frame image is obtained based on the difference image; when the first moving track and the second moving track are continuous, the suspected target is determined to be a mobile target. In order to ensure accuracy in identifying a mobile target, more images within a relatively continuous time period can be acquired in sequence by the image acquisition device, each newly acquired image can be compared with its adjacent image according to step S211 and step S212, and further moving tracks of the suspected target can be obtained, so as to judge whether the suspected target is a mobile target and thus ensure the accuracy of the judged result (see the sketch after this paragraph). For example, as for the butterfly A in FIG. 6 and FIG. 7, suppose the butterfly A has moved to the head of the bed when the third frame image is acquired. The comparison detection is performed on the second frame image shown in FIG. 7 and the third frame image according to step S211 and step S212; the subtraction between the second frame image and the compensated third frame image yields a non-zero difference image over their image overlapped regions, in which a discriminative feature, i.e. the butterfly A, exists, and a second moving track of the butterfly A within the time period when the image acquisition device captures the second frame image and the third frame image is obtained based on the difference image. Thus a moving track of the suspected target (the butterfly A), which moves from the left side of the clock to the right side of the clock and then to the head of the bed, is obtained, and the butterfly A is judged to be a mobile target.
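The continuity test of steps S221/S222 can be reduced to a small helper, assuming each comparison detection yields a short track as a list of (x, y) centroids; is_continuous, is_mobile_target and max_gap_px are illustrative names of this sketch, not terms from the patent.

```python
# A minimal sketch of steps S221/S222 over tracks made of (x, y) points.
def is_continuous(track_a, track_b, max_gap_px=40.0):
    """Judge two successive moving tracks continuous when the end of
    the first lies close to the start of the second."""
    (xa, ya), (xb, yb) = track_a[-1], track_b[0]
    return ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 <= max_gap_px

def is_mobile_target(tracks, max_gap_px=40.0):
    """A suspected target is determined to be a mobile target when all
    of its successive tracks join into one continuous moving track."""
    return all(is_continuous(a, b, max_gap_px)
               for a, b in zip(tracks, tracks[1:]))
```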
  • In some other embodiments, the suspected target is tracked according to an image feature of the suspected target, where the image feature includes a preset graphic feature corresponding to the suspected target, or an image feature obtained by performing an image processing algorithm on the suspected target. The image processing algorithm includes but is not limited to at least one of the following: grayscale processing, sharpening processing, contour extraction, corner extraction, line extraction, and image processing algorithms obtained by machine learning, where the latter include but are not limited to neural network algorithms and clustering algorithms. Among the multiple-frame images captured by the image acquisition device, a third frame image is further acquired within the field of view while the mobile robot moves from the second position to a third position. The first frame image, the second frame image and the third frame image are acquired in sequence, and the second frame image and the third frame image have an image overlapped region; the suspected target is searched for in the third frame image according to the image feature of the suspected target (see the sketch after this paragraph). Since a static target exists within the overlapped field of view in which the image acquisition device captures the second frame image and the third frame image, a second moving track of the suspected target within that time period is obtained according to the relative position information of the mobile robot at the second position and the third position, together with the position change of the suspected target relative to a same static target in the second frame image and the third frame image. When the first moving track and the second moving track are continuous, the suspected target is determined to be a mobile target. In order to ensure accuracy in identifying a mobile target, more images can be acquired, the suspected target can be tracked in each newly acquired image and its adjacent image according to the image feature of the suspected target, and further moving tracks of the suspected target can be obtained, so as to judge whether the suspected target is a mobile target and thus ensure the accuracy of the judged result.
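As one deliberately simple stand-in for the image-feature tracking described above, normalized template matching can search for the suspected target in a newly acquired frame; find_target and min_score are assumptions of the sketch rather than the patent's prescribed algorithm.

```python
# A minimal sketch of feature-based tracking, assuming the suspected
# target was cropped from the previous frame as a template.
import cv2

def find_target(next_frame, template, min_score=0.7):
    """Search the next frame for the suspected target's image feature;
    return the top-left corner of the best match, or None when the
    match score is too low (e.g. the target left the field of view)."""
    result = cv2.matchTemplate(next_frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= min_score else None
```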
  • In some embodiments, please refer to FIG. 9 which shows a flow diagram of comparing between at least two-frame images selected from multiple-frame images in one embodiment of the present application. The comparison between at least two-frame images selected from the multiple-frame images includes step S210′ and step S220′ as shown in FIG. 9.
  • In step S210′, the processing device detects a suspected target on the basis of a matching operation on corresponding feature information in the at least two-frame images. The feature information includes at least one of the following: feature points, feature lines, feature colors, and so on. Please refer to FIG. 10, which shows a flow diagram of detecting a suspected target according to a matching operation on corresponding feature information in at least two-frame images in one embodiment of the present application. Step S210′ is realized through step S211′ and step S212′.
  • In step S211′, the processing device extracts feature points in the at least two-frame images respectively, and matches each feature point extracted from the at least two-frame images with a reference three-dimensional coordinate system, wherein the reference three-dimensional coordinate system is formed by performing three-dimensional modeling on the mobile space, and is marked with the coordinates of each feature point of all the static targets in the mobile space. The feature points, for example, include corner points, end points, inflection points, etc., corresponding to the entity objects. In some embodiments, a set of feature points corresponding to a static target can form the external contour of the static target, that is, the corresponding static target can be recognized through a set of feature points. Image identification can be performed in advance, according to identification conditions, on all the static targets in the mobile space where the mobile robot moves, so as to obtain the feature points related to each static target, and the coordinates of each feature point can be marked on the reference three-dimensional coordinate system. Alternatively, the coordinates of the feature points of each static target can be uploaded manually in a certain format and marked on the reference three-dimensional coordinate system.
  • In step S212′, the processing device detects, in the at least two-frame images, a feature point set constituted by corresponding feature points which are not matched with the reference three-dimensional coordinate system, to be a suspected target. When a feature point set whose feature points are not matched with corresponding feature points on the reference three-dimensional coordinate system is found in a single image, this only indicates that the feature point set does not belong to any static target which has been marked on the reference three-dimensional coordinate system; the feature point set may instead indicate a static object which is newly added to the mobile space and whose feature point coordinates have not been marked in advance on the reference three-dimensional coordinate system. Based on this, it is necessary to determine, according to the matching result between the two-frame images, whether the feature point set constituted by unmatched feature points moves or not. In some embodiments, the unmatched feature point sets of the first frame image and the second frame image are the same as or similar to each other; when the feature point set moves relative to the static targets within the time period during which the image acquisition device captures the first frame image and the second frame image, a first moving track of the feature point set is formed, and thus the feature point set is detected to be a suspected target (see the sketch after this paragraph).
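A sketch of steps S211′/S212′ under a strong simplifying assumption: the reference three-dimensional coordinate system has been reduced, for the current viewpoint, to a matrix of descriptors of the marked static-target feature points (ref_descriptors). ORB features and a Hamming-distance threshold stand in here for whatever feature extractor is actually used; unmatched_points and max_dist are names invented for the example.

```python
# A minimal sketch of detecting the unmatched feature point set.
import cv2

def unmatched_points(frame, ref_descriptors, max_dist=48):
    """Extract feature points from the frame, match each against the
    descriptors of the static targets marked on the reference
    three-dimensional coordinate system, and return the points whose
    best match is too poor -- the candidate suspected-target set."""
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(frame, None)
    if descriptors is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.match(descriptors, ref_descriptors)  # best match per point
    return [keypoints[m.queryIdx].pt
            for m in matches if m.distance > max_dist]
```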
  • For example, in FIG. 6 and FIG. 7, the feature points of static targets such as the chair, window, book shelf, clock, sofa, bed and so on can all be extracted in advance and marked in the reference three-dimensional coordinate system, while the butterfly A in FIG. 6 and FIG. 7 is a newly added object and is not marked in the reference three-dimensional coordinate system. In this case, by matching each feature point extracted from the first frame image and the second frame image with the reference three-dimensional coordinate system, the feature points which are not marked on the reference three-dimensional coordinate system are obtained, and this set of feature points shows the features of the butterfly A; for example, the set of feature points can be displayed as the contour features of the butterfly A. Moreover, in the first frame image as shown in FIG. 6, the butterfly A is located at the left side of the clock, while in the second frame image as shown in FIG. 7, the butterfly A is located at the right side of the clock; that is, a first moving track showing that the butterfly A moves from the left side of the clock to the right side of the clock can be obtained by matching the feature points extracted from the first frame image and the second frame image with those marked on the reference three-dimensional coordinate system.
  • Movement of a mobile target is generally continuous. In order to prevent misjudgment in some special situations and to improve the accuracy and effectiveness of the system, step S220′ is further performed, that is, the suspected target is tracked to determine whether the suspected target is a mobile target. A special situation is, for example, that hanging decorations or ceiling lamps swing regularly at a certain amplitude because of wind. Generally, such swinging is merely regular back-and-forth movement within a small range, or irregular movement at a small amplitude, and it cannot form continuous movement. Nevertheless, an object which swings due to wind still produces a detectable moving feature and forms a moving track, so according to the method shown in FIG. 10 it will be judged to be a suspected target, and it would be misjudged to be a mobile target if only the method shown in FIG. 10 were used. In some embodiments, please refer to FIG. 11, which shows a flow diagram of tracking a suspected target to determine a mobile target in one embodiment of the present application. For the method of tracking the suspected target to determine a mobile target, please refer to step S221′ and step S222′ as shown in FIG. 11.
  • In step S221′, the processing device acquires a moving track of a suspected target through tracking the suspected target; and in step S222′, if the moving track of the suspected target is continuous, the suspected target is determined to be a mobile target. In some embodiments, among the multiple-frame images captured by the image acquisition device, a third frame image is further captured at a third position of the mobile robot. The first frame image, the second frame image and the third frame image are acquired in sequence, the second frame image and the third frame image have an image overlapped region, and a comparison detection is performed on the second frame image and the third frame image according to step S211 and step S212. When the subtraction between the second frame image and the compensated third frame image yields a non-zero difference image, that is, when a discriminative feature exists in the difference image and exists simultaneously in the second frame image and the third frame image, a second moving track of the suspected target within the time period when the image acquisition device captures the second frame image and the third frame image is obtained based on the difference image; when the first moving track and the second moving track are continuous, the suspected target is determined to be a mobile target. In order to ensure accuracy in identifying a mobile target, more images can be acquired in sequence by the image acquisition device, each newly acquired image can be compared with its adjacent image according to step S211 and step S212, and further moving tracks of the suspected target can be obtained, so as to judge whether the suspected target is a mobile target and thus ensure the accuracy of the judged result. For example, as for the butterfly A in FIG. 6 and FIG. 7, suppose the butterfly A has moved to the head of the bed when the third frame image is acquired. The comparison detection is performed on the second frame image shown in FIG. 7 and the third frame image according to step S211 and step S212; the subtraction between the second frame image and the compensated third frame image yields a non-zero difference image over their image overlapped regions, in which a discriminative feature, i.e. the butterfly A, exists, and a second moving track of the butterfly A within the time period when the image acquisition device captures the second frame image and the third frame image is obtained based on the difference image. Thus a moving track of the suspected target (the butterfly A), which moves from the left side of the clock to the right side of the clock and then to the head of the bed, is obtained, and the butterfly A is judged to be a mobile target.
  • In some other embodiments, the suspected target is tracked according to an image feature of the suspected target, where the image feature includes a preset graphic feature corresponding to the suspected target, or an image feature obtained by performing an image processing algorithm on the suspected target. The image processing algorithm includes but is not limited to at least one of the following: grayscale processing, sharpening processing, contour extraction, corner extraction, line extraction, and image processing algorithms obtained by machine learning, where the latter include but are not limited to neural network algorithms and clustering algorithms. Among the multiple-frame images captured by the image acquisition device, a third frame image is further acquired at a third position of the mobile robot. The first frame image, the second frame image and the third frame image are acquired in sequence, and the second frame image and the third frame image have an image overlapped region; the suspected target is searched for in the third frame image according to the image feature of the suspected target. Since a static target exists within the overlapped field of view in which the image acquisition device captures the second frame image and the third frame image, a second moving track of the suspected target within that time period is obtained according to the relative position information of the mobile robot at the second position and the third position, together with the position change of the suspected target relative to a same static target in the second frame image and the third frame image. When the first moving track and the second moving track are continuous, the suspected target is determined to be a mobile target. In order to ensure accuracy in identifying a mobile target, more images can be acquired, the suspected target can be tracked in each newly acquired image and its adjacent image according to the image feature of the suspected target, and further moving tracks of the suspected target can be obtained, so as to judge whether the suspected target is a mobile target and thus ensure the accuracy of the judged result.
  • In some embodiments, please refer to FIG. 12, which shows a flow diagram of object recognition in one embodiment of the present application. The monitoring method further includes step S300 and step S400. In step S300, the processing device performs object recognition on a mobile target in a captured image, wherein object recognition means recognition of a target object through a method of feature matching or model recognition. A method of object recognition based on feature matching generally includes the following steps: extracting an image feature of an object, describing the extracted feature, and performing feature matching on the described object. The image feature includes a graphic feature corresponding to the mobile target, or an image feature obtained through an image processing algorithm, wherein the image processing algorithm includes but is not limited to at least one of the following: grayscale processing, sharpening processing, contour extraction, corner extraction, line extraction, and image processing algorithms obtained through machine learning. The mobile target includes, for example, a mobile person or a mobile animal. Herein, the object recognition is realized by an object recognizer which includes a trained neural network. In some embodiments, the neural network model is a convolutional neural network, and the network structure includes an input layer, at least one hidden layer and at least one output layer, wherein the input layer is configured to receive captured images or preprocessed images; the hidden layer includes a convolutional layer and an activation function layer, and may further include at least one of a normalization layer, a pooling layer and a fusion layer; and the output layer is configured to output images marked with object type labels (a sketch of such a structure follows this paragraph). The connection mode is determined according to the connection relationship of each layer in the neural network model, for example, a connection relationship between a front layer and a rear layer set based on data transmission, a connection relationship with the data of the front layer set based on the size of the convolution kernel in each hidden layer, and a full connection relationship. The features and advantages of an artificial neural network mainly include three aspects: first, the capability of self-learning; second, the capability of associative storage; and third, the capability of searching for an optimal solution at high speed.
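For illustration, the described layer structure (input layer; convolutional, activation, normalization and pooling layers; output layer over object type labels) might look as follows in PyTorch. The layer sizes, input resolution and label set are invented for the sketch and are not taken from the patent.

```python
# A minimal sketch of the described convolutional object recognizer.
import torch
import torch.nn as nn

class ObjectRecognizer(nn.Module):
    def __init__(self, num_labels=3):       # e.g. person / animal / other
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),                                   # activation function layer
            nn.BatchNorm2d(16),                          # normalization layer
            nn.MaxPool2d(2),                             # pooling layer
        )
        self.output = nn.Linear(16 * 112 * 112, num_labels)  # output layer

    def forward(self, x):                    # x: (N, 3, 224, 224) input images
        h = self.hidden(x)
        return self.output(h.flatten(1))     # scores over object type labels
```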
  • In step S400, the processing device outputs monitoring information according to the result of object recognition. The monitoring information includes one or more of image information, video information, audio information, and text information. The monitoring information can be an image picture containing the mobile target, and can also be prompt information which is sent to a preset communication address; the prompt information can for example be a prompt message of an APP, a short message, a mail, a voice broadcast, an alarm, and so on. The prompt information contains key words related to the mobile target. When a key word of the mobile target is “person”, the prompt information can be a prompt message of an APP, a short message, a mail, a voice broadcast or an alarm containing the key word “person”, for example, a message such as “somebody is intruding” in the form of text or voice (see the sketch after this paragraph). The preset communication address contains at least one of the following: a telephone number bound with the mobile robot, instant messaging accounts (Wechat accounts, QQ accounts or Facebook accounts, etc.), an e-mail address and a network platform.
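A toy sketch of step S400's mapping from a recognition result to prompt information; build_prompt and notify are hypothetical helpers, and the real delivery channel (APP push, short message, mail, voice broadcast, alarm) is abstracted away.

```python
# A minimal sketch of step S400; message templates are illustrative.
def build_prompt(label):
    """Return prompt information containing a key word related to the
    recognized mobile target, or None when no prompt is needed."""
    templates = {
        "person": "Alert: somebody is intruding in the monitored region.",
        "animal": "Alert: a mobile animal is detected in the monitored region.",
    }
    return templates.get(label)

def notify(preset_address, message):
    """Placeholder for delivery to a preset communication address
    (telephone number, instant messaging account, e-mail, etc.)."""
    print(f"to {preset_address}: {message}")
```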
  • Training of the object recognizer and object recognition based on the object recognizer both involve a complex calculating process, which implies a huge amount of calculation and a high hardware requirement for the device that contains and runs the object recognizer. Therefore, in some embodiments, please refer to FIG. 13, which shows a flow diagram of object recognition in one embodiment of the present application. The method further includes step S500 and step S600.
  • In step S500, the processing device uploads the captured image, or a video containing the image, to a cloud server for object recognition on a mobile target in the image; the cloud server includes an object recognizer containing a trained neural network.
  • In step S600, the processing device receives the result of object recognition from the cloud server and outputs monitoring information (a sketch of such a client follows this paragraph). The monitoring information includes one or more of image information, video information, audio information, and text information. The monitoring information can be an image picture containing the mobile target, and can also be prompt information which is sent to a preset communication address; the prompt information can for example be a prompt message of an APP, a short message, a mail, a voice broadcast, an alarm, and so on. The prompt information contains key words related to the mobile target. When a key word of the mobile target is “person”, the prompt information can be a prompt message of an APP, a short message, a mail, a voice broadcast or an alarm containing the key word “person”, for example, a message such as “somebody is intruding” in the form of text or voice.
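Steps S500/S600 amount to a small upload-and-wait client. The sketch below assumes a hypothetical HTTP endpoint; the URL, form field name and JSON response format are inventions of the example, not part of the patent.

```python
# A minimal sketch of steps S500/S600 as an HTTP client.
import requests

def recognize_on_cloud(image_path, endpoint="https://example.com/recognize"):
    """Upload a captured image to the cloud server's object recognizer
    and return its recognition result, e.g. {"label": "person"}."""
    with open(image_path, "rb") as f:
        response = requests.post(endpoint, files={"image": f}, timeout=10)
    response.raise_for_status()
    return response.json()

result = recognize_on_cloud("frame_0001.jpg")
if result.get("label") == "person":
    # Reuse the hypothetical step S400 helpers sketched earlier.
    notify("user@example.com", build_prompt("person"))
```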
  • Since the detection and recognition operations for a mobile target are performed on the cloud server, the operating pressure of the local mobile robot can be lowered, its hardware requirements can be reduced, and the execution efficiency of the mobile robot can be improved; moreover, the strong processing capability of a cloud server can be fully utilized, enabling the method to be implemented more rapidly and accurately.
  • In some other embodiments, the mobile robot captures images through an image acquisition device and selects a first frame image and a second frame image among the captured images; the mobile robot then uploads the first frame image and the second frame image to the cloud server for image comparison, and receives the result of object recognition sent by the cloud server. Or, for example, the mobile robot directly uploads the captured images to the cloud server after capturing them through the image acquisition device; the cloud server selects two-frame images according to the monitoring method for a mobile target and performs image comparison on the selected two-frame images, and the mobile robot then receives the result of object recognition sent by the cloud server. For the monitoring method for a mobile target, please refer to FIG. 1 and its related description, which will not be repeated in detail herein. When more data processing programs are executed in the cloud, the hardware requirements of the mobile robot itself are further lowered; and when programs need to be revised and updated, the programs in the cloud can be revised and updated directly and conveniently, thereby improving the efficiency and flexibility of system updating.
  • As mentioned above, the monitoring method used in a mobile robot for a mobile target has the following beneficial effects: multiple-frame images captured by an image acquisition device in a monitored region are acquired while the robot is moving; at least two-frame images with an overlapped region are selected from the multiple-frame images; the selected images are compared by an image compensation method or a feature matching method; and monitoring information containing a mobile target which moves relative to a static target is output based on the result of the comparison, wherein the position of the mobile target in each of the at least two-frame images has an attribute of indefinite change. In this way, a mobile target in the monitored region can be recognized precisely during movement of the mobile robot, and monitoring information about the mobile target can be generated for corresponding prompting, so that the safety of the monitored region can be effectively ensured.
  • Please refer to FIG. 14, which shows a structural schematic diagram of a monitoring device for a mobile target used in a mobile robot of the present application in one embodiment. The mobile robot includes a movement device and an image acquisition device. The image acquisition device is arranged on the mobile robot and is configured to capture an entity object within its field of view at the position where the mobile robot is located, so as to obtain a projected image, wherein the projected image is located on a plane which is parallel to the moving plane. The image acquisition device includes but is not limited to: a fisheye camera module, a wide-angle (or non-wide-angle) camera module, a depth camera module, a camera module integrated with an optical system or a CCD chip, and a camera module integrated with an optical system and a CMOS chip. The mobile robot includes, but is not limited to, a family companion mobile robot, a cleaning robot, a patrol mobile robot, a glass cleaning robot, etc. The power supply system of the image acquisition device can be controlled by the power supply system of the mobile robot, and the image acquisition device begins to capture images during the period that the mobile robot is powered on and moves. The mobile robot at least includes an image acquisition device, which captures images within a field of view at the position where the mobile robot is located. For example, a mobile robot includes an image acquisition device which is arranged on the top, shoulder or back of the mobile robot, and the principal optic axis of the image acquisition device is perpendicular to the moving plane of the mobile robot, or is consistent with the travelling direction of the mobile robot. In some other embodiments, the principal optic axis can also be set to form a certain angle (for example, an angle between 50° and 86°) with the moving plane on which the mobile robot is located, so as to acquire a greater image acquisition range. In other embodiments, the principal optic axis of the image acquisition device can also be set in many other ways; for example, the image acquisition device can rotate according to a certain rule or rotate randomly, in which case the angle between the optic axis of the image acquisition device and the travelling direction of the mobile robot changes constantly; therefore, installation manners of the image acquisition device and states of its principal optic axis are not limited to what are enumerated in the present embodiment. For another example, the mobile robot includes two or more image acquisition devices, for example, a binocular image acquisition device or multiple image acquisition devices. For two or more image acquisition devices, the principal optic axis of one image acquisition device is perpendicular to the moving plane of the mobile robot, or is consistent with the travelling direction of the mobile robot; in some other embodiments, the principal optic axis can also be set to form a certain angle with the moving plane, so as to acquire a greater image acquisition range.
  • In an embodiment where the mobile robot is a cleaning robot, a movement device of the mobile robot can include a travelling mechanism and a travelling drive mechanism, wherein the travelling mechanism can be arranged at the bottom of the robot body, and the travelling drive mechanism is arranged inside the robot body. The travelling mechanism can, for example, include a combination of two straight-going walking wheels and at least one auxiliary steering wheel, wherein the two straight-going walking wheels are respectively arranged at two opposite sides at the bottom of the robot body and can be independently driven by two corresponding travelling drive mechanisms, that is, a left straight-going walking wheel is driven by a left travelling drive mechanism, while a right straight-going walking wheel is driven by a right travelling drive mechanism. The universal walking wheel or the straight-going walking wheel can be provided with a bias drop suspension system which is fixed in a movable manner; for example, the bias drop suspension system can be installed on the robot body in a rotatable manner and receives a spring bias directed downwards and away from the robot body. The spring bias enables the universal walking wheel or the straight-going walking wheel to maintain contact and traction with the ground with a certain landing force. In practical applications, when the at least one auxiliary steering wheel does not work, the two straight-going walking wheels are mainly used for going forward and backward, while when the at least one auxiliary steering wheel participates and cooperates with the two straight-going walking wheels, movements such as steering and rotating can be realized. The travelling drive mechanism can include a drive motor and a control circuit configured to control the drive motor, and the drive motor can be used to drive the walking wheels in the travelling mechanism to move. In specific implementations, the drive motor can, for example, be a reversible drive motor, and a gear shift mechanism can further be arranged between the drive motor and the axle of a walking wheel. The travelling drive mechanism can be installed on the robot body in a detachable manner, thereby facilitating disassembly and maintenance.
  • In this embodiment, the monitoring device 700 for a mobile target includes at least one processing device 710 and at least one storage device 720, wherein the processing device is an electronic device capable of performing numeric calculation, logical calculation and data analysis, and includes but is not limited to: a CPU, a GPU, an FPGA, etc. The storage device 720 may include high-speed RAM (random access memory) and may also include NVM (non-volatile memory), such as one or more disk storage devices, flash memory devices or other non-volatile solid-state storage devices. In some embodiments, the storage device may also include a storage device remote from the one or more processors, such as a network attached storage device accessed via RF circuits or external ports and a communication network, wherein the communication network can be the Internet, one or more intranets, a LAN, a WLAN, a SAN, etc., or an appropriate combination thereof. A memory controller can control access to the memory by other components of the device, such as the CPU and peripheral interfaces.
  • The storage device 720 is used to store images captured by the image acquisition device under an operating state of the movement device. At least one program is stored in the at least one storage device 720 and is invoked by the at least one processing device 710 such that the monitoring device 700 performs the monitoring method for a mobile target, which can be seen in FIG. 1 and its description and is not repeated here.
  • Please refer to FIG. 15, which shows a structural schematic diagram of a mobile robot of the present application in one embodiment. The mobile robot 800 includes the movement device 810, the image acquisition device 820 and the monitoring device 830. The image acquisition device 820 is arranged on the mobile robot 800 and is configured to capture an entity object within its field of view at the position where the mobile robot 800 is located, so as to obtain a projected image, wherein the projected image is located on a plane which is parallel to the moving plane. The image acquisition device 820 includes but is not limited to: a fisheye camera module, a wide-angle (or non-wide-angle) camera module, a depth camera module, a camera module integrated with an optical system or a CCD chip, and a camera module integrated with an optical system and a CMOS chip. The mobile robot 800 includes, but is not limited to, a family companion mobile robot, a cleaning robot, a patrol mobile robot, a glass cleaning robot, etc. The power supply system of the image acquisition device 820 can be controlled by the power supply system of the mobile robot 800, and the image acquisition device 820 begins to capture images during the period that the mobile robot is powered on and moves. The mobile robot 800 at least includes an image acquisition device 820, which captures images within a field of view at the position where the mobile robot 800 is located. For example, a mobile robot 800 includes an image acquisition device 820 which is arranged on the top, shoulder or back of the mobile robot, and the principal optic axis of the image acquisition device 820 is perpendicular to the moving plane of the mobile robot, or is consistent with the travelling direction of the mobile robot. In some other embodiments, the principal optic axis can also be set to form a certain angle (for example, an angle between 50° and 86°) with the moving plane on which the mobile robot is located, so as to acquire a greater image acquisition range. In other embodiments, the principal optic axis of the image acquisition device 820 can also be set in many other ways; for example, the image acquisition device 820 can rotate according to a certain rule or rotate randomly, in which case the angle between the optic axis of the image acquisition device 820 and the travelling direction of the mobile robot changes constantly; therefore, installation manners of the image acquisition device 820 and states of its principal optic axis are not limited to what are enumerated in the present embodiment. For another example, the mobile robot includes two or more image acquisition devices 820, for example, a binocular image acquisition device 820 or multiple image acquisition devices 820. For two or more image acquisition devices 820, the principal optic axis of one image acquisition device 820 is perpendicular to the moving plane of the mobile robot, or is consistent with the travelling direction of the mobile robot; in some other embodiments, the principal optic axis can also be set to form a certain angle with the moving plane, so as to acquire a greater image acquisition range.
  • In an embodiment where the mobile robot 800 is a cleaning robot, the movement device 810 of the mobile robot 800 can include a travelling mechanism and a travelling drive mechanism, wherein the travelling mechanism can be arranged at the bottom of the robot body, and the travelling drive mechanism is arranged inside the robot body. The travelling mechanism can, for example, include a combination of two straight-going walking wheels and at least one auxiliary steering wheel, wherein the two straight-going walking wheels are respectively arranged at two opposite sides at the bottom of the robot body and can be independently driven by two corresponding travelling drive mechanisms, that is, a left straight-going walking wheel is driven by a left travelling drive mechanism, while a right straight-going walking wheel is driven by a right travelling drive mechanism. The universal walking wheel or the straight-going walking wheel can be provided with a bias drop suspension system which is fixed in a movable manner; for example, the bias drop suspension system can be installed on the robot body in a rotatable manner and receives a spring bias directed downwards and away from the robot body. The spring bias enables the universal walking wheel or the straight-going walking wheel to maintain contact and traction with the ground with a certain landing force. In practical applications, when the at least one auxiliary steering wheel does not work, the two straight-going walking wheels are mainly used for going forward and backward, while when the at least one auxiliary steering wheel participates and cooperates with the two straight-going walking wheels, movements such as steering and rotating can be realized. The travelling drive mechanism can include a drive motor and a control circuit configured to control the drive motor, and the drive motor can be used to drive the walking wheels in the travelling mechanism to move. In specific implementations, the drive motor can, for example, be a reversible drive motor, and a gear shift mechanism can further be arranged between the drive motor and the axle of a walking wheel. The travelling drive mechanism can be installed on the robot body in a detachable manner, thereby facilitating disassembly and maintenance.
  • The monitoring device 830 is communicatively connected with the movement device 810 and the image acquisition device 820, and includes an image acquisition unit 831, a mobile target detecting unit 832 and an information output unit 833.
  • The image acquisition unit 831 is communicatively connected with both the movement device 810 and the image acquisition device 820, and acquires multiple-frame images captured by the image acquisition device 820 under the operating state of the movement device 810. In some embodiments, the multiple-frame images can, for example, be multiple-frame images acquired in a continuous time period, or multiple-frame images acquired within two or more discontinuous time periods.
  • The mobile target detecting unit 832 performs comparison between at least two-frame images selected from the multiple-frame images so as to detect a mobile target. The at least two-frame images are images captured by the image acquisition device 820 within partially overlapped fields of view. That is, the mobile target detecting unit 832 selects a first frame image and a second frame image on the basis that the two-frame images contain an image overlapped region and that the overlapped field of view contains a static target, so as to monitor a mobile target which moves relative to the static target in the overlapped field of view. In order to ensure the effectiveness of the comparison result between the selected two-frame images, the proportion of the image overlapped region in the first frame image and in the second frame image can also be set; for example, the proportion of the image overlapped region in each of the first frame image and the second frame image is at least 50% (but the proportions are not limited to this value, and different proportions can be set for the first frame image and the second frame image depending on the situation; see the sketch after this paragraph). The selection of the first frame image and the second frame image should be continuous to some extent, so that while the first frame image and the second frame image are ensured to have a certain proportion of image overlapped region, the continuity of the moving track of the mobile target can be judged based on the acquired images. A position of the mobile target in each of the at least two-frame images has an attribute of indefinite change.
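The 50% example can be turned into a simple frame-selection rule under a purely translational model; overlap_ratio, view_width_m and the threshold value are assumptions of this sketch, not figures from the patent.

```python
# A minimal sketch of the frame-selection rule under a translational model.
def overlap_ratio(displacement_m, view_width_m):
    """Estimate what proportion of two frames' fields of view overlaps
    when the robot translates sideways by displacement_m (meters)."""
    return max(0.0, 1.0 - abs(displacement_m) / view_width_m)

def frames_comparable(displacement_m, view_width_m, min_overlap=0.5):
    """Select a frame pair for comparison only when the estimated
    image overlapped region is large enough."""
    return overlap_ratio(displacement_m, view_width_m) >= min_overlap
```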
  • The information output unit 833 outputs monitoring information containing a mobile target which moves relative to a static target according to the result of the comparison between the at least two-frame images. In some embodiments, the static target includes but is not limited to: a ball, a shoe, a wall, a flowerpot, clothes and hats, a roof, a lamp, a tree, a table, a chair, a refrigerator, a television, a sofa, a sock, a tiled object, and a cup, wherein the tiled object includes but is not limited to ground mats or floor tiles paved on the floor, and tapestries and pictures hung on a wall. In some embodiments, the monitoring information includes one or more of image information, video information, audio information, and text information. The monitoring information can be an image picture containing the mobile target, and can also be prompt information which is sent to a preset communication address; the prompt information can for example be a prompt message of an APP, a short message, a mail, a voice broadcast, an alarm, and so on. The prompt information contains key words related to the mobile target. When a key word of the mobile target is “person”, the prompt information can be a prompt message of an APP, a short message, a mail, a voice broadcast or an alarm containing the key word “person”, for example, a message such as “somebody is intruding” in the form of text or voice. The preset communication address contains at least one of the following: a telephone number bound with the mobile robot, instant messaging accounts (Wechat accounts, QQ accounts or Facebook accounts, etc.), an e-mail address and a network platform.
  • In some embodiments, the mobile target detecting unit 832 includes a comparing module and a tracking module.
  • The comparing module detects a suspected target based on the comparison between the at least two-frame images. In some embodiments, the step of the comparing module detecting a suspected target based on the comparison between the at least two-frame images includes:
  • performing image compensation on the at least two-frame images based on movement information of the movement device 810 within the time period between the at least two-frame images; that is, in this embodiment, as shown in FIG. 8, the mobile target detecting unit 832 is communicatively connected with the movement device 810, so as to acquire the movement information of the movement device 810 within the time period between the at least two-frame images and perform image compensation on the at least two-frame images; and
  • performing subtraction processing on the compensated at least two-frame images to form a difference image, and detecting the suspected target from the difference image.
  • The tracking module tracks the suspected target to determine the mobile target. In some embodiments, the step of tracking the suspected target to determine the mobile target includes:
  • obtaining a moving track of a suspected target through tracking the suspected target detected by the comparing module; and
  • determining the suspected target as the mobile target when the moving track of the suspected target is continuous.
  • In still other embodiments, the mobile target detecting unit 832 can identify the mobile target without being communicatively connected with the movement device 810 as shown in FIG. 8. In this embodiment, the mobile target detecting unit 832 includes a matching module and a tracking module.
  • The matching module detects a suspected target based on a matching operation on corresponding feature information in the at least two-frame images. In some embodiments, the step of the matching module detecting a suspected target based on a matching operation on corresponding feature information in the at least two-frame images includes:
  • extracting feature points in the at least two-frame images respectively, and matching each extracted feature point in the at least two-frame images with a reference three-dimensional coordinate system, wherein the reference three-dimensional coordinate system is formed through performing three-dimensional modeling on the mobile space, and is marked with the coordinates of each feature point of all static targets in the mobile space; and
  • detecting a feature point set as the suspected target, wherein the feature point set is composed of feature points in the at least two-frame images that are not matched with the reference three-dimensional coordinate system.
  • The tracking module tracks the suspected target to determine the mobile target. In some embodiments, the step of tracking the suspected target to determine the mobile target includes:
  • obtaining a moving track of a suspected target through tracking the suspected target detected by the matching module; and
  • determining the suspected target as the mobile target when the moving track of the suspected target is continuous.
  • In some embodiments, the monitoring device 830 further includes an object recognition unit, which performs object recognition in the captured images so that the information output unit can output the monitoring information based on the result of object recognition. The object recognition unit includes a trained neural network. Object recognition means recognition of a target object through a method of feature matching or model recognition. A method of object recognition based on feature matching generally includes the following steps: extracting an image feature of an object, describing the extracted feature, and performing feature matching on the described object. The image feature includes a graphic feature corresponding to the mobile target, or an image feature obtained through an image processing algorithm, wherein the image processing algorithm includes but is not limited to at least one of the following: grayscale processing, sharpening processing, contour extraction, corner extraction, line extraction, and image processing algorithms obtained through machine learning. The mobile target includes, for example, a mobile person or a mobile animal. Herein, the object recognition is realized by an object recognizer which includes a trained neural network. In some embodiments, the neural network model is a convolutional neural network, and the network structure includes an input layer, at least one hidden layer and at least one output layer, wherein the input layer is configured to receive captured images or preprocessed images; the hidden layer includes a convolutional layer and an activation function layer, and may further include at least one of a normalization layer, a pooling layer and a fusion layer; and the output layer is configured to output images marked with object type labels. The connection mode is determined according to the connection relationship of each layer in the neural network model, for example, a connection relationship between a front layer and a rear layer set based on data transmission, a connection relationship with the data of the front layer set based on the size of the convolution kernel in each hidden layer, and a full connection relationship. The features and advantages of an artificial neural network mainly include three aspects: first, the capability of self-learning; second, the capability of associative storage; and third, the capability of searching for an optimal solution at high speed.
  • The monitoring information includes one or more of image information, video information, audio information, and text information. The monitoring information can be an image picture containing the mobile target, and can also be prompt information which is sent to a preset communication address; the prompt information can for example be a prompt message of an APP, a short message, a mail, a voice broadcast, an alarm, and so on. The prompt information contains key words related to the mobile target. When a key word of the mobile target is “person”, the prompt information can be a prompt message of an APP, a short message, a mail, a voice broadcast or an alarm containing the key word “person”, for example, a message such as “somebody is intruding” in the form of text or voice. The preset communication address contains at least one of the following: a telephone number bound with the mobile robot, instant messaging accounts (Wechat accounts, QQ accounts or Facebook accounts, etc.), an e-mail address and a network platform.
  • Training of the object recognition unit and object recognition based on the object recognition unit both involve a complex calculating process, which implies a huge amount of calculation and a high hardware requirement for the device that contains and runs the object recognizer. Therefore, in some embodiments, the monitoring device 830 includes a receive-send unit, which uploads the captured image, or a video containing the image, to a cloud server for object recognition on a mobile target in the image, and receives the result of object recognition from the cloud server so that the information output unit can output the monitoring information. The cloud server includes an object recognizer containing a trained neural network.
  • Because the detection and recognition operations on a mobile target are performed on the cloud server, the operating pressure of the local mobile robot can be lowered, the requirements on the hardware of the mobile robot can be reduced, and the execution efficiency of the mobile robot can be improved; moreover, the strong processing capability of a cloud server can be fully used, thereby making the implementation of the methods more rapid and accurate.
  • In some other embodiments, the mobile robot captures images through an image acquisition device and selects a first frame image and a second frame image among the captured images, then uploads the first frame image and the second frame image to the cloud server for image comparison, and receives results of object recognition sent by the cloud server. Alternatively, the mobile robot directly uploads the captured images to the cloud server after capturing them through the image acquisition device; the cloud server selects two-frame images by operating the mobile target detecting unit 832 and performs image comparison on the selected two-frame images, and then the mobile robot receives results of object recognition sent by the cloud server. When more data processing programs are executed in the cloud, the requirements on the hardware of the mobile robot itself are further lowered. When programs need to be revised and updated, the programs in the cloud can be revised and updated directly and conveniently, thereby improving the efficiency and flexibility of system updating. A sketch of the upload path is given below.
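A minimal sketch of this upload path, assuming an HTTP interface, is shown below; the endpoint URL, the payload layout and the use of the requests library are illustrative assumptions, as the embodiments only state that frames are uploaded and recognition results are returned.

    # Minimal sketch of uploading two selected frames to the cloud server and
    # receiving the object recognition result. Endpoint and JSON layout assumed.
    import requests

    CLOUD_ENDPOINT = "https://cloud.example.com/recognize"  # hypothetical

    def recognize_on_cloud(first_frame_jpeg: bytes, second_frame_jpeg: bytes) -> dict:
        files = {
            "first_frame": ("first.jpg", first_frame_jpeg, "image/jpeg"),
            "second_frame": ("second.jpg", second_frame_jpeg, "image/jpeg"),
        }
        # The server compares the frames, runs its object recognizer, and
        # returns, e.g., {"targets": [{"label": "person", "track": [...]}]}.
        response = requests.post(CLOUD_ENDPOINT, files=files, timeout=30)
        response.raise_for_status()
        return response.json()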
  • The technical solution of the monitoring device 830 of a mobile target in the embodiment of FIG. 15 corresponds to the monitoring method of a mobile target. As to the monitoring method of the mobile target, please refer to FIG. 1 and its related description; all the descriptions about the monitoring method of the mobile target can be applied to the related embodiments of the monitoring device 830 of a mobile target and are not repeated in detail herein.
  • It should be noted that, in the device shown in FIG. 15, the modules are divided only based on logical function, while in practical application the modules may or may not be wholly or partially integrated into one physical entity. The modules can be invoked by software, by hardware, or by both. For example, each module can be an independent processing unit, or the modules can be invoked by one chip integrated in the device. For another example, a program code can be stored in the memory of the device, and a processing unit of the device invokes the program code to enable the modules to function. The processing unit can be an integrated circuit with the ability to process signals. In practical application, the above steps and modules can be accomplished by integrated logic circuits in hardware or by software.
  • For example, the modules can be configured with one or more integrated circuits to implement the above method, such as one or more ASICs, DSPs or FPGAs. For another example, when a module functions by a processing unit invoking a program code, the processing unit can be a general-purpose processing unit such as a CPU, or another processor capable of invoking the program code. For another example, the modules can be integrated together and implemented in the form of a system on chip (SoC).
  • In addition, it should also be noted that, through the description of the above implementations, those skilled in the art can clearly understand that part or all of the present application can be realized by means of software in combination with a necessary general-purpose hardware platform. Based on this, the present application further provides a computer storage medium which stores at least one program that, when invoked, executes any monitoring method for a mobile target mentioned above. The monitoring method for a mobile target is described with reference to FIG. 1 and its related description, and is not repeated here. It should be noted that the computer program code can be in source code form, object code form, an executable file, some intermediate form, etc.
  • Based on this understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can include one or more machine-readable media which store machine-executable instructions thereon; when these instructions are executed by one or more machines such as a computer, a computer network or another electronic apparatus, such one or more machines can execute operations based on the embodiments of the present application. The machine-readable media include, but are not limited to, any entity or device capable of carrying the computer program code: a recording medium, a USB flash disk, a mobile hard disk, a computer memory, a floppy disk, an optical disk, a CD-ROM (compact disc read-only memory), a magneto-optical disc, a ROM (read-only memory), a RAM (random access memory), an EPROM (erasable programmable read-only memory), an EEPROM (electrically erasable programmable read-only memory), a magnetic card or optical card, a flash memory, an electric carrier signal, a telecommunication signal, a software distribution medium, or other types of media applicable to storing machine-executable instructions. It should be noted that what counts as machine-readable media can differ according to the legislation and patent practice of different jurisdictions; for instance, in some jurisdictions, an electric carrier signal or a telecommunication signal is not included in the computer-readable media. The storage media can be located in the mobile robot and can also be located in a third-party server, for example, in a server providing a certain application store. Specific application stores are not limited herein, and can be a MIUI application store, a Huawei application store, an Apple application store, etc.
  • Please refer to FIG. 16, which shows a structural schematic diagram of a monitoring system of the present application in one embodiment. The monitoring system 900 includes a cloud server 910 and a mobile robot 920, and the mobile robot 920 is in communication with the cloud server 910. The mobile robot 920 includes an image acquisition device and a movement device. The mobile robot 920 moves in the three-dimensional space shown in FIG. 16, and a mobile target, butterfly A, exists in that mobile space. While moving by means of the movement device, the mobile robot 920 captures multiple-frame images, selects two-frame images from the multiple-frame images and compares them, and outputs a mobile target which moves relative to a static target. The selected two-frame images are, for example, the first frame image shown in FIG. 6 and the second frame image shown in FIG. 7. The first frame image and the second frame image have an image overlapped region shown by a dotted box in FIG. 6 and FIG. 7, and the image overlapped region corresponds to the overlapped field of view of the image acquisition device at the first position and the second position. Moreover, there are multiple static targets in the overlapped field of view of the image acquisition device, for example, a chair, a window, a book shelf, a clock, a sofa and a bed. In FIG. 6, butterfly A is located at the left side of the clock, while in FIG. 7, butterfly A is located at the right side of the clock, and image comparison is performed on the first frame image and the second frame image to obtain a suspected target which moves relative to the static targets. For the method of image comparison, reference can be made, for example, to FIG. 5 and its related description: the processing device of the mobile robot 920 obtains the movement information of the mobile robot 920 while the image acquisition device captures the first frame image and the second frame image, compensates the first frame image or the second frame image according to the movement information, and performs difference subtraction between the compensated image and the other original image to obtain a suspected target (butterfly A) with a regional moving track (from the left side of the clock to the right side of the clock). For another example, reference can be made to the manner of feature comparison shown in FIG. 10: each feature point in the first frame image and the second frame image is extracted, and each feature point extracted from the two-frame images is matched on a reference three-dimensional coordinate system, wherein the reference three-dimensional coordinate system is formed through modeling the mobile space shown in FIG. 16. The processing device of the mobile robot 920 detects the feature point set, constituted by corresponding feature points in the two-frame images which are not matched on the reference three-dimensional coordinate system, to be a suspected target. Further, according to multiple-frame images acquired subsequently and the method shown in FIG. 8 or FIG. 11, the suspected target is tracked to obtain its moving track, and when the moving track of the suspected target is continuous, the suspected target is determined to be a mobile target. In the present embodiment, through tracking butterfly A, which serves as a suspected target, a continuous moving track of butterfly A is obtained: for example, butterfly A moves from the left side of the clock to the right side of the clock, and then moves to the head of the bed. A sketch of the motion-compensated comparison is given after this paragraph.
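A minimal sketch of the motion-compensated comparison, assuming OpenCV and that the robot's movement information has been expressed as a 3x3 homography between the two viewpoints; the threshold value is an illustrative assumption.

    # Minimal sketch: compensate the first frame with the robot's movement
    # information, then difference-subtract against the second frame.
    import cv2
    import numpy as np

    def detect_suspected_target(first: np.ndarray, second: np.ndarray,
                                homography: np.ndarray) -> np.ndarray:
        # Warp the first frame into the second frame's viewpoint.
        h, w = second.shape[:2]
        compensated = cv2.warpPerspective(first, homography, (w, h))
        # Difference subtraction between the compensated image and the other
        # original image; residual regions correspond to the suspected target.
        diff = cv2.absdiff(cv2.cvtColor(compensated, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(second, cv2.COLOR_BGR2GRAY))
        _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
        return mask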
  • Training of the object recognizer and object recognition based on the object recognizer both involve a complex calculating process, which means a huge amount of calculation and high requirements on the hardware of the device which contains and runs the object recognizer. In some embodiments, the mobile robot 920 therefore uploads images or videos containing the mobile target to the cloud server 910, and outputs monitoring information according to the results of object recognition of the mobile target received from the cloud server 910. The object recognition process includes, for example, recognizing images and videos containing the mobile target through preset image features, wherein the image feature can be, for example, an image point feature, an image line feature or an image color feature. In the present embodiment, the mobile target is recognized as butterfly A through detection of the contour of butterfly A. The mobile robot 920 receives the results of object recognition sent by the cloud server 910, and outputs monitoring information to a designated client according to the results of object recognition. The client can be, for example, an electronic device with an intelligent data processing function such as a smart phone, a tablet computer or a smart watch. Alternatively, the cloud server 910 performs the object recognition to obtain a result of object recognition, and outputs monitoring information to the designated client according to the result directly. In the present embodiment, the cloud server 910 performs object recognition on the received images or videos containing images, and sends the results of object recognition to the mobile robot 920. A sketch of the contour-based recognition step follows below.
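A minimal sketch of the contour-based recognition step, assuming OpenCV; matching against a stored template contour via Hu-moment shape distance is an illustrative choice, since the embodiment only states that the butterfly is recognized by its contour.

    # Minimal sketch: extract contours from the detection mask and compare them
    # with a stored template contour (e.g. a butterfly outline).
    import cv2

    def matches_template(mask, template_contour, max_distance: float = 0.2) -> bool:
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            # Hu-moment shape distance: smaller values mean more similar shapes.
            if cv2.matchShapes(contour, template_contour,
                               cv2.CONTOURS_MATCH_I1, 0.0) < max_distance:
                return True
        return False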
  • Because the detection and recognition operations on a mobile target are performed on the cloud server, the operating pressure of the local mobile robot can be lowered, the requirements on the hardware of the mobile robot can be reduced, and the execution efficiency of the mobile robot can be improved; moreover, the strong processing capability of a cloud server can be fully used, thereby making the implementation of the methods more rapid and accurate.
  • When more data processing programs are executed in the cloud, the requirements on the hardware of the mobile robot itself are further lowered. When running programs need to be revised and updated, the running programs in the cloud can be revised and updated directly and conveniently, thereby improving the efficiency and flexibility of system updating. Therefore, in some other embodiments, the mobile robot 920 captures multiple-frame images in a moving state and uploads them to the cloud server 910; the cloud server 910 selects two-frame images from the multiple-frame images and compares them. The selected two-frame images are, for example, the first frame image shown in FIG. 6 and the second frame image shown in FIG. 7. The first frame image and the second frame image have an image overlapped region shown by a dotted box in FIG. 6 and FIG. 7, and the image overlapped region corresponds to the overlapped field of view of the image acquisition device at the first position and the second position. Moreover, there are multiple static targets in the overlapped field of view of the image acquisition device, for example, a chair, a window, a book shelf, a clock, a sofa and a bed. In FIG. 6, butterfly A is located at the left side of the clock, while in FIG. 7, butterfly A is located at the right side of the clock, and image comparison is performed on the first frame image and the second frame image to obtain a suspected target which moves relative to the static target (for example, the clock). The comparison can follow the method of difference subtraction on a compensated image shown in FIG. 5: the movement information of the mobile robot 920 during capture of the first frame image and the second frame image is obtained, the first frame image or the second frame image is compensated according to the movement information, and difference subtraction is performed between the compensated image and the other original image to obtain a suspected target (butterfly A) with a regional moving track (from the left side of the clock to the right side of the clock). Alternatively, the comparison can follow the manner of feature comparison shown in FIG. 10: each feature point in the first frame image and the second frame image is extracted and matched on a reference three-dimensional coordinate system formed through modeling the mobile space shown in FIG. 16, and the feature point set constituted by corresponding feature points in the two-frame images which are not matched on the reference three-dimensional coordinate system is detected to be a suspected target. Further, according to multiple-frame images acquired subsequently and the method shown in FIG. 8 or FIG. 11, the suspected target is tracked to obtain its moving track, and when the moving track of the suspected target is continuous, the suspected target is determined to be a mobile target. In the present embodiment, through tracking butterfly A, which serves as a suspected target, a continuous moving track of butterfly A is obtained: for example, butterfly A moves from the left side of the clock to the right side of the clock, and then moves to the head of the bed. Moreover, after performing object recognition on the mobile target, the cloud server 910 sends the recognized results to the mobile robot 920. A sketch of the track-continuity test used in this determination is given below.
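A minimal sketch of the track-continuity test; the centroid representation and the gap threshold are illustrative assumptions, since the embodiments only require that a suspected target with a continuous moving track be confirmed as a mobile target.

    # Minimal sketch: a track is a list of per-frame centroids (x, y), with None
    # where the suspected target was not observed. The track is continuous when
    # the target appears in every frame and no inter-frame jump is too large.
    import numpy as np

    def is_continuous(track: list, max_gap: float = 50.0) -> bool:
        if len(track) < 2 or any(point is None for point in track):
            return False
        steps = np.diff(np.asarray(track, dtype=float), axis=0)
        return bool(np.all(np.linalg.norm(steps, axis=1) <= max_gap))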
  • The mobile robot 920 can receive the results of object recognition sent by the cloud server 910, and output monitoring information to a designated client according to the results of object recognition. The client can be, for example, an electronic device with an intelligent data processing function such as a smart phone, a tablet computer or a smart watch. Alternatively, the cloud server 910 performs the object recognition to obtain a result of object recognition, and outputs monitoring information to the designated client according to the result directly.
  • In some other embodiments, the mobile robot 920 further communicates with the designated client through a mobile network, and the client can be, for example, an electronic device with an intelligent data processing function such as a smart phone, a tablet computer or a smart watch.
  • The monitoring method and device for a mobile target, the monitoring system and the mobile robot of the present application have the following beneficial effects: through the technical solution of acquiring multiple-frame images captured by an image acquisition device while the robot moves in a monitored region, selecting at least two-frame images with an overlapped region from the multiple-frame images, performing comparison between the selected images by an image compensation method or a feature matching method, and outputting monitoring information containing a mobile target which moves relative to a static target based on the result of the comparison, wherein the position of the mobile target in each of the at least two-frame images has an attribute of indefinite change, the mobile target in the monitored region can be recognized precisely during movement of the mobile robot, and monitoring information about the mobile target can be generated for a corresponding prompt, thereby effectively ensuring the safety of the monitored region.
  • While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. It is not intended that the invention be limited by the specific examples provided within the specification. While the invention has been described with reference to the aforementioned specification, the descriptions and illustrations of the embodiments herein are not meant to be construed in a limiting sense. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. Furthermore, it shall be understood that all aspects of the invention are not limited to the specific depictions, configurations or relative proportions set forth herein which depend upon a variety of conditions and variables. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is therefore contemplated that the invention shall also cover any such alternatives, modifications, variations or equivalents. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.

Claims (12)

What is claimed is:
1. A monitoring device for a mobile target, used in a mobile robot, the mobile robot comprises a movement device and an image acquisition device, wherein, the monitoring device for a mobile target comprises:
at least one processing device;
at least one storage device, configured to store images captured by the image acquisition device under an operating state of the movement device;
at least one program, wherein the at least one program is stored in the at least one storage device, and is invoked by the at least one processing device such that the monitoring device performs a monitoring method for a mobile target;
the monitoring method for a mobile target comprises the following steps:
acquiring multiple-frame images captured by the image acquisition device under the operating state of the movement device; and
outputting monitoring information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; wherein the at least two-frame images are images captured by the image acquisition device within a partially overlapped field of view, and a position of the mobile target in each of the at least two-frame images has an attribute of indefinite change, the mobile target and the static target both being contained in each of the two-frame images.
2. The monitoring device for a mobile target of claim 1, wherein the step of performing comparison between at least two-frame images selected from the multiple-frame images comprises the following steps:
detecting a suspected target based on the comparison between the at least two-frame images; and
tracking the suspected target to determine the mobile target.
3. The monitoring device for a mobile target of claim 2, wherein the step of detecting a suspected target according to the comparison between the at least two-frame images comprises the following steps:
performing image compensation on the at least two-frame images based on movement information of the movement device within a time period between the at least two-frame images; and
performing subtraction processing on the compensated at least two-frame images to form a difference image, and detecting the suspected target from the difference image.
4. The monitoring device for a mobile target of claim 1, wherein the step of performing comparison between at least two-frame images selected from the multiple-frame images comprises the following steps:
detecting a suspected target based on a matching operation on corresponding feature information in the at least two-frame images; and
tracking the suspected target to determine the mobile target.
5. The monitoring device for a mobile target of claim 4, wherein the step of detecting a suspected target based on a matching operation on corresponding feature information in the at least two-frame images comprises the following steps:
extracting each feature point in the at least two-frame images respectively, and matching extracted each feature point in the at least two-frame images with a reference three-dimensional coordinate system; wherein the reference three-dimensional coordinate system is formed through performing three-dimensional modeling on a mobile space, and the reference three-dimensional coordinate system is marked with coordinate of each feature point on all static targets in the mobile space; and
detecting a feature point set as the suspected target, the feature point set is composed of feature points in the at least two-frame images that are not matched with the reference three-dimensional coordinate system.
6. The monitoring device for a mobile target of claim 2, wherein the step of tracking the suspected target to determine the mobile target comprises the following steps:
obtaining a moving track of a suspected target through tracking the suspected target; and
determining the suspected target as the mobile target when the moving track of the suspected target is continuous.
7. The monitoring device for a mobile target of claim 4, wherein the step of tracking the suspected target to determine the mobile target comprises the following steps:
obtaining a moving track of a suspected target through tracking the suspected target; and
determining the suspected target as the mobile target when the moving track of the suspected target is continuous.
8. The monitoring device for a mobile target of claim 1, further comprising the following steps:
performing object recognition on the mobile target in the captured images, wherein the object recognition is performed by an object recognizer, the object recognizer includes a trained neural network; and
outputting the monitoring information according to result of the object recognition.
9. The monitoring device for a mobile target of claim 1, further comprising the following steps:
uploading captured images or videos containing images to a cloud server to perform object recognition on the mobile target in the image; wherein the cloud server includes an object recognizer which includes trained neural networks; and
receiving a result of object recognition from the cloud server and outputting the monitoring information.
10. The monitoring device for a mobile target of claim 1, wherein the monitoring information comprises one or more of image information, video information, audio information and text information.
11. A mobile robot, comprising:
a movement device, configured to control movement of the mobile robot according to received control instruction;
an image acquisition device, configured to capture multiple-frame images under an operating state of the movement device; and
a monitoring device, the monitoring device comprises:
at least one processing device;
at least one storage device, configured to store images captured by the image acquisition device under an operating state of the movement device;
at least one program, wherein the at least one program is stored in the at least one storage device, and is invoked by the at least one processing device such that the monitoring device performs a monitoring method for a mobile target;
the monitoring method for a mobile target comprises the following steps:
acquiring multiple-frame images captured by the image acquisition device under the operating state of the movement device; and
outputting monitoring information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; wherein the at least two-frame images are images captured by the image acquisition device within a partially overlapped field of view, and a position of the mobile target in each of the at least two-frame images has an attribute of indefinite change, the mobile target and the static target both being contained in each of the two-frame images.
12. A monitoring system, comprising:
a cloud server; and
a mobile robot, connected with the cloud server;
wherein the mobile robot performs the following steps: acquiring multiple-frame images during movement of the mobile robot; outputting detection information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; uploading the captured images or videos containing images to the cloud server based on the detection information; and outputting monitoring information based on a result of object recognition received from the cloud server;
or, wherein the mobile robot performs the following steps: acquiring multiple-frame images during movement of the mobile robot, and uploading the multiple-frame images to the cloud server; and the cloud server performs the following steps: outputting detection information containing a mobile target which moves relative to a static target according to a result of performing comparison between at least two-frame images selected from the multiple-frame images; and outputting an object recognition result of the mobile target to the mobile robot according to a result of performing object recognition on the mobile target in multiple-frame images, such that the mobile robot outputs monitoring information.
US17/184,833 2018-12-05 2021-02-25 Monitoring method and device for mobile target, monitoring system and mobile robot Abandoned US20210201509A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/184,833 US20210201509A1 (en) 2018-12-05 2021-02-25 Monitoring method and device for mobile target, monitoring system and mobile robot

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
PCT/CN2018/119293 WO2020113452A1 (en) 2018-12-05 2018-12-05 Monitoring method and device for moving target, monitoring system, and mobile robot
US16/522,717 US10970859B2 (en) 2018-12-05 2019-07-26 Monitoring method and device for mobile target, monitoring system and mobile robot
US17/184,833 US20210201509A1 (en) 2018-12-05 2021-02-25 Monitoring method and device for mobile target, monitoring system and mobile robot

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/522,717 Continuation US10970859B2 (en) 2018-12-05 2019-07-26 Monitoring method and device for mobile target, monitoring system and mobile robot

Publications (1)

Publication Number Publication Date
US20210201509A1 true US20210201509A1 (en) 2021-07-01

Family

ID=66191860

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/522,717 Active 2039-02-01 US10970859B2 (en) 2018-12-05 2019-07-26 Monitoring method and device for mobile target, monitoring system and mobile robot
US17/184,833 Abandoned US20210201509A1 (en) 2018-12-05 2021-02-25 Monitoring method and device for mobile target, monitoring system and mobile robot

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/522,717 Active 2039-02-01 US10970859B2 (en) 2018-12-05 2019-07-26 Monitoring method and device for mobile target, monitoring system and mobile robot

Country Status (3)

Country Link
US (2) US10970859B2 (en)
CN (2) CN115086606A (en)
WO (1) WO2020113452A1 (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686981B (en) * 2019-10-17 2024-04-12 华为终端有限公司 Picture rendering method and device, electronic equipment and storage medium
CN110930455B (en) * 2019-11-29 2023-12-29 深圳市优必选科技股份有限公司 Positioning method, positioning device, terminal equipment and storage medium
US11023730B1 (en) * 2020-01-02 2021-06-01 International Business Machines Corporation Fine-grained visual recognition in mobile augmented reality
CN111152266B (en) * 2020-01-09 2021-07-30 安徽宇润道路保洁服务有限公司 Control method and system of cleaning robot
CN111152226B (en) * 2020-01-19 2021-09-07 吉利汽车研究院(宁波)有限公司 Robot working track planning method and system
CN113763416A (en) * 2020-06-02 2021-12-07 璞洛泰珂(上海)智能科技有限公司 Automatic labeling and tracking method, device, equipment and medium based on target detection
CN111738134A (en) * 2020-06-18 2020-10-02 北京市商汤科技开发有限公司 Method, device, equipment and medium for acquiring passenger flow data
CN111797728B (en) * 2020-06-19 2024-06-14 浙江大华技术股份有限公司 Method and device for detecting moving object, computing equipment and storage medium
CN111783892B (en) * 2020-07-06 2021-10-01 广东工业大学 Robot instruction identification method and device, electronic equipment and storage medium
CN111862154B (en) * 2020-07-13 2024-03-01 中移(杭州)信息技术有限公司 Robot vision tracking method and device, robot and storage medium
CN112001296B (en) * 2020-08-20 2024-03-29 广东电网有限责任公司清远供电局 Three-dimensional safety monitoring method and device for transformer substation, server and storage medium
US11277658B1 (en) 2020-08-21 2022-03-15 Beam, Inc. Integrating overlaid digital content into displayed data via graphics processing circuitry
CN112215871B (en) * 2020-09-29 2023-04-21 武汉联影智融医疗科技有限公司 Moving target tracking method and device based on robot vision
CN112287794B (en) * 2020-10-22 2022-09-16 中国电子科技集团公司第三十八研究所 Method for managing number consistency of video image automatic identification target
CN112738204B (en) * 2020-12-25 2022-09-16 国网湖南省电力有限公司 Automatic arrangement system and method for security measures of secondary screen door of transformer substation
CN112819770B (en) * 2021-01-26 2022-11-22 中国人民解放军陆军军医大学第一附属医院 Iodine contrast agent allergy monitoring method and system
CN113066050B (en) * 2021-03-10 2022-10-21 天津理工大学 Method for resolving course attitude of airdrop cargo bed based on vision
CN113098948B (en) * 2021-03-26 2023-04-28 华南理工大学广州学院 Disinfection control method and system for face mask detection
US11481933B1 (en) * 2021-04-08 2022-10-25 Mobeus Industries, Inc. Determining a change in position of displayed digital content in subsequent frames via graphics processing circuitry
US11601276B2 (en) 2021-04-30 2023-03-07 Mobeus Industries, Inc. Integrating and detecting visual data security token in displayed data via graphics processing circuitry using a frame buffer
US11475610B1 (en) 2021-04-30 2022-10-18 Mobeus Industries, Inc. Controlling interactivity of digital content overlaid onto displayed data via graphics processing circuitry using a frame buffer
US11483156B1 (en) 2021-04-30 2022-10-25 Mobeus Industries, Inc. Integrating digital content into displayed data on an application layer via processing circuitry of a server
US11477020B1 (en) 2021-04-30 2022-10-18 Mobeus Industries, Inc. Generating a secure random number by determining a change in parameters of digital content in subsequent frames via graphics processing circuitry
US11586835B2 (en) 2021-04-30 2023-02-21 Mobeus Industries, Inc. Integrating overlaid textual digital content into displayed data via graphics processing circuitry using a frame buffer
US11682101B2 (en) 2021-04-30 2023-06-20 Mobeus Industries, Inc. Overlaying displayed digital content transmitted over a communication network via graphics processing circuitry using a frame buffer
US11562153B1 (en) 2021-07-16 2023-01-24 Mobeus Industries, Inc. Systems and methods for recognizability of objects in a multi-layer display
CN114513608A (en) * 2022-02-21 2022-05-17 深圳市美科星通信技术有限公司 Movement detection method and device and electronic equipment
CN115100595A (en) * 2022-06-27 2022-09-23 深圳市神州云海智能科技有限公司 Potential safety hazard detection method and system, computer equipment and storage medium
CN115633321B (en) * 2022-12-05 2023-05-05 北京数字众智科技有限公司 Wireless communication network monitoring method and system
CN116703975B (en) * 2023-06-13 2023-12-15 武汉天进科技有限公司 Intelligent target image tracking method for unmanned aerial vehicle

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10737395B2 (en) * 2017-12-29 2020-08-11 Irobot Corporation Mobile robot docking systems and methods

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101023207B1 (en) * 2007-09-05 2011-03-18 한국전자통신연구원 Video object abstraction apparatus and its method
CN101303732B (en) * 2008-04-11 2011-06-22 西安交通大学 Method for apperceiving and alarming movable target based on vehicle-mounted monocular camera
CN101930072B (en) * 2010-07-28 2013-01-02 重庆大学 Multi-feature fusion based infrared small dim moving target track starting method
CN102074022B (en) * 2011-01-10 2012-12-12 南京理工大学 Infrared image-based weak and small moving target detecting method
CN103149939B (en) * 2013-02-26 2015-10-21 北京航空航天大学 A kind of unmanned plane dynamic target tracking of view-based access control model and localization method
CN103336947B (en) * 2013-06-21 2016-05-04 上海交通大学 Based on conspicuousness and structural infrared moving small target recognition methods
US9996976B2 (en) * 2014-05-05 2018-06-12 Avigilon Fortress Corporation System and method for real-time overlay of map features onto a video feed
CN103984315A (en) * 2014-05-15 2014-08-13 成都百威讯科技有限责任公司 Domestic multifunctional intelligent robot
US20150329217A1 (en) * 2014-05-19 2015-11-19 Honeywell International Inc. Aircraft strike zone display
CN106534614A (en) * 2015-09-10 2017-03-22 南京理工大学 Rapid movement compensation method of moving target detection under mobile camera
CN105374031A (en) * 2015-10-14 2016-03-02 江苏美的清洁电器股份有限公司 Household security protection data processing method and system based on robot
CN105447888B (en) * 2015-11-16 2018-06-29 中国航天时代电子公司 A kind of UAV Maneuver object detection method judged based on effective target
TWI543611B (en) * 2015-11-20 2016-07-21 晶睿通訊股份有限公司 Image stitching method and camera system with an image stitching function
FR3047103B1 (en) * 2016-01-26 2019-05-24 Thales METHOD FOR DETECTING TARGETS ON THE GROUND AND MOVING IN A VIDEO STREAM ACQUIRED BY AN AIRBORNE CAMERA
WO2017169365A1 (en) * 2016-03-29 2017-10-05 Kyb株式会社 Road surface displacement detection device and suspension control method
CN106056625B (en) * 2016-05-25 2018-11-27 中国民航大学 A kind of Airborne IR moving target detecting method based on geographical same place registration
US10482737B2 (en) * 2016-08-12 2019-11-19 Amazon Technologies, Inc. Parcel theft deterrence for A/V recording and communication devices
US20180150718A1 (en) * 2016-11-30 2018-05-31 Gopro, Inc. Vision-based navigation system
CN106682619B (en) * 2016-12-28 2020-08-11 上海木木聚枞机器人科技有限公司 Object tracking method and device
CN106846367B (en) * 2017-02-15 2019-10-01 北京大学深圳研究生院 A kind of Mobile object detection method of the complicated dynamic scene based on kinematic constraint optical flow method
CN107092926A (en) * 2017-03-30 2017-08-25 哈尔滨工程大学 Service robot object recognition algorithm based on deep learning
CN107133969B (en) * 2017-05-02 2018-03-06 中国人民解放军火箭军工程大学 A kind of mobile platform moving target detecting method based on background back projection
CN107256560B (en) * 2017-05-16 2020-02-14 北京环境特性研究所 Infrared weak and small target detection method and system thereof
CN107352032B (en) * 2017-07-14 2024-02-27 广东工业大学 Method for monitoring people flow data and unmanned aerial vehicle
US10788584B2 (en) * 2017-08-22 2020-09-29 Michael Leon Scott Apparatus and method for determining defects in dielectric materials and detecting subsurface objects
US10796142B2 (en) * 2017-08-28 2020-10-06 Nutech Ventures Systems for tracking individual animals in a group-housed environment
US10509413B2 (en) * 2017-09-07 2019-12-17 GM Global Technology Operations LLC Ground reference determination for autonomous vehicle operations
US10657833B2 (en) * 2017-11-30 2020-05-19 Intel Corporation Vision-based cooperative collision avoidance
CN108806142A (en) * 2018-06-29 2018-11-13 炬大科技有限公司 A kind of unmanned security system, method and sweeping robot
WO2020041734A1 (en) * 2018-08-24 2020-02-27 Bossa Nova Robotics Ip, Inc. Shelf-viewing camera with multiple focus depths

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10737395B2 (en) * 2017-12-29 2020-08-11 Irobot Corporation Mobile robot docking systems and methods

Also Published As

Publication number Publication date
CN109691090A (en) 2019-04-26
WO2020113452A1 (en) 2020-06-11
US10970859B2 (en) 2021-04-06
US20200184658A1 (en) 2020-06-11
CN115086606A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
US10970859B2 (en) Monitoring method and device for mobile target, monitoring system and mobile robot
US11842500B2 (en) Fault-tolerance to provide robust tracking for autonomous and non-autonomous positional awareness
US11501527B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous tracking
US11544867B2 (en) Mapping optimization in autonomous and non-autonomous platforms
US10390003B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous device
US11398096B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous mapping
US10366508B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous device
US10410328B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous device
US9286678B2 (en) Camera calibration using feature identification
WO2017167282A1 (en) Target tracking method, electronic device, and computer storage medium
KR20180044279A (en) System and method for depth map sampling
WO2018015754A1 (en) Vehicle localisation using the ground surface with an event camera
TW201539385A (en) Intrusion detection with directional sensing
US20210025717A1 (en) Navigation system
CN105760846A (en) Object detection and location method and system based on depth data
CN108544494A (en) A kind of positioning device, method and robot based on inertia and visual signature
Zou et al. Active pedestrian detection for excavator robots based on multi-sensor fusion
CN116259001A (en) Multi-view fusion three-dimensional pedestrian posture estimation and tracking method
Wang et al. A system of automated training sample generation for visual-based car detection
Pirker et al. Histogram of Oriented Cameras-A New Descriptor for Visual SLAM in Dynamic Environments.
CN117635660A (en) Dynamic environment SLAM method based on vision
JP2021099682A (en) Position estimation device, moving body, position estimation method and program
Tibaldi Image processing techniques for the perception of automotive environments with applications to pedestrian detection
Yu et al. Detecting and identifying people in mobile videos

Legal Events

Date Code Title Description
AS Assignment

Owner name: ANKOBOT (SHANGHAI) SMART TECHNOLOGIES CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CUI, YUWEI;REEL/FRAME:055408/0052

Effective date: 20190724

Owner name: ANKOBOT (SHENZHEN) SMART TECHNOLOGIES CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CUI, YUWEI;REEL/FRAME:055408/0052

Effective date: 20190724

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION