WO2020113452A1 - Monitoring method and device for a moving target, monitoring system, and mobile robot - Google Patents

Monitoring method and device for a moving target, monitoring system, and mobile robot

Info

Publication number
WO2020113452A1
WO2020113452A1 (PCT/CN2018/119293)
Authority
WO
WIPO (PCT)
Prior art keywords
target
images
frames
image
monitoring
Prior art date
Application number
PCT/CN2018/119293
Other languages
English (en)
French (fr)
Inventor
崔彧玮
Original Assignee
珊口(深圳)智能科技有限公司
珊口(上海)智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 珊口(深圳)智能科技有限公司 and 珊口(上海)智能科技有限公司
Priority to PCT/CN2018/119293 (WO2020113452A1)
Priority to CN202210665232.3A (CN115086606A)
Priority to CN201880002424.8A (CN109691090A)
Priority to US16/522,717 (US10970859B2)
Publication of WO2020113452A1
Priority to US17/184,833 (US20210201509A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/008Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/285Analysis of motion using a sequence of stereo image pairs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/48Matching video sequences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • the present application relates to the field of intelligent mobile robots, in particular to a method, device, monitoring system and mobile robot for monitoring a moving target.
  • the purpose of the present application is to provide a method, device, monitoring system and mobile robot for monitoring a moving target, which are used to solve the problem that the prior art cannot effectively and accurately detect moving targets while the robot is moving.
  • a first aspect of the present application provides a method for monitoring a moving target applied to a mobile robot.
  • the mobile robot includes a mobile device and a camera device.
  • the method for monitoring a moving target includes the following steps: obtaining multi-frame images captured by the camera device while the mobile device is in a working state; and, according to a comparison of at least two frames of images selected from the multi-frame images, outputting monitoring information of a moving target that exhibits movement behavior relative to a static target; wherein the at least two frames of images are captured by the camera device with partially overlapping fields of view, and the position of the moving target presented in the at least two frames of images has an indefinitely changing attribute.
  • the comparison of at least two frames of images selected from the multi-frame images includes the following steps: detecting a suspected target based on the comparison of the at least two frames of images; and tracking the suspected target to confirm the moving target.
  • detecting the suspected target based on the comparison of the at least two frames of images includes the following steps: performing image compensation on the at least two frames of images based on the movement information of the mobile device during the time between the two frames; and subtracting the image-compensated frames to form a differential image, from which a suspected target is detected.
  • the comparison of at least two frames of images selected from the multi-frame images includes the following steps: detecting a suspected target according to a matching operation on corresponding feature information in the at least two frames of images; and tracking the suspected target to confirm the moving target.
  • detecting the suspected target based on the matching operation on corresponding feature information in the at least two frames of images includes the following steps: extracting feature points from each of the at least two frames of images separately, and matching each extracted feature point against a reference three-dimensional coordinate system; the reference three-dimensional coordinate system is formed by three-dimensional modeling of the moving space, and the coordinates of the feature points of all static targets in the moving space are identified on it; a feature point set composed of corresponding feature points in the at least two frames of images that cannot be matched on the reference three-dimensional coordinate system is detected as a suspected target.
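A minimal Python sketch of this feature-matching variant follows. The use of OpenCV ORB descriptors, a brute-force Hamming matcher, and a `map_descriptors` array holding the descriptors of the static targets registered on the reference three-dimensional coordinate system are all illustrative assumptions, not part of the present application; a full implementation would also verify the matched feature points' coordinates on that coordinate system.

```python
import cv2
import numpy as np

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

def suspect_points(frame, map_descriptors, max_map_dist=64):
    """Return keypoints in `frame` whose descriptors match no static
    target registered on the reference coordinate system (hypothetical
    `map_descriptors`); these form the suspected-target point set."""
    kps, desc = orb.detectAndCompute(frame, None)
    if desc is None:
        return []
    matches = bf.knnMatch(desc, map_descriptors, k=1)
    suspects = []
    for kp, m in zip(kps, matches):
        # No sufficiently close mapped descriptor -> unmatched feature point.
        if not m or m[0].distance > max_map_dist:
            suspects.append(kp.pt)
    return suspects
```

Running this on both selected frames and keeping the corresponding unmatched points approximates the "feature points not matched on the reference three-dimensional coordinate system" described above.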
  • the tracking of the suspected target to confirm the moving target includes the following steps: obtaining the movement trajectory of the suspected target by tracking it; when the movement trajectory of the suspected target is continuous, confirming the suspected target as a moving target.
  • the method further includes the following steps: performing object recognition on the moving target in the captured image, the object recognition being performed by an object recognizer trained with a neural network; and outputting the monitoring information according to the result of the object recognition.
  • the method further includes the steps of: uploading the captured image, or the video containing the image, to a cloud server so that object recognition can be performed on the moving target in the image, the cloud server including an object recognizer trained with a neural network; and receiving the object recognition result from the cloud server and outputting the monitoring information.
  • the monitoring information includes: one or more of image information, video information, audio information, and text information.
  • a second aspect of the present application provides a monitoring device for a moving target applied to a mobile robot.
  • the mobile robot includes a mobile device and a camera device.
  • the monitoring device for a moving target includes: at least one processor; at least one memory for storing images captured by the camera device while the mobile device is in a working state; and at least one program, wherein the at least one program is stored in the at least one memory and configured to be executed by the at least one processor, and its execution causes the monitoring device to implement the method for monitoring a moving target as described in any one of the above.
  • a third aspect of the present application provides a monitoring device for a moving target of a mobile robot.
  • the mobile robot includes a mobile device and a camera device.
  • the monitoring device includes: an image acquisition unit, used to acquire the multi-frame images captured by the camera device while the mobile device is in a working state;
  • a moving target detection unit, used to compare at least two frames of images selected from the multi-frame images to detect the moving target; wherein the at least two frames of images are captured by the camera device with partially overlapping fields of view, and the position of the moving target in the at least two frames of images has an indefinitely changing attribute;
  • an information output unit, used to output, according to the comparison of the at least two frames of images, the monitoring information of the moving target that exhibits movement behavior relative to the static target.
  • the moving target detection unit includes: a comparison module, for detecting a suspected target based on the comparison of the at least two frames of images; and a tracking module, for tracking the suspected target to confirm the moving target.
  • the comparison module detecting the suspected target according to the comparison of the at least two frames of images includes: performing image compensation on the at least two frames of images based on the movement information of the mobile device during the time between the two frames; and subtracting the image-compensated frames to form a differential image, from which a suspected target is detected.
  • the tracking module tracking the suspected target to confirm the moving target includes: obtaining the movement trajectory of the suspected target by tracking it; when the movement trajectory of the suspected target is continuous, confirming the suspected target as a moving target.
  • the moving target detection unit includes: a matching module, for detecting a suspected target based on a matching operation on corresponding feature information in the at least two frames of images; and a tracking module, for tracking the suspected target to confirm the moving target.
  • the matching module detecting the suspected target according to the matching operation on corresponding feature information in the at least two frames of images includes: extracting feature points from each of the at least two frames of images separately, and matching each extracted feature point against a reference three-dimensional coordinate system; the reference three-dimensional coordinate system is formed by three-dimensional modeling of the moving space, and the coordinates of the feature points of all static targets in the moving space are identified on it; a feature point set composed of corresponding feature points in the at least two frames of images that cannot be matched on the reference three-dimensional coordinate system is detected as a suspected target.
  • the tracking module tracking the suspected target to confirm the moving target includes: obtaining the movement trajectory of the suspected target by tracking it; when the movement trajectory of the suspected target is continuous, confirming the suspected target as a moving target.
  • it further includes: an object recognition unit, configured to perform object recognition on the moving target in the captured image, so that the information output unit can output the monitoring information according to the result of the object recognition; the object recognition unit is trained with a neural network.
  • it further includes: a transceiving unit, for uploading the captured image, or the video containing the image, to a cloud server so that object recognition can be performed on the moving target in the image, and for receiving the object recognition result from the cloud server so that the information output unit can output the monitoring information; the cloud server includes an object recognizer trained with a neural network.
  • the monitoring information includes: one or more of image information, video information, audio information, and text information.
  • a fourth aspect of the present application provides a mobile robot, including: a mobile device, for controlling the mobile robot to move according to received control instructions; a camera device, for capturing multi-frame images while the mobile device is in a working state; and the monitoring device as described in any one of the above.
  • a fifth aspect of the present application provides a monitoring system, including: a cloud server; and a mobile robot connected to the cloud server; wherein the mobile robot performs the following steps: acquiring multi-frame images while in a moving state; outputting detection information of a moving target that exhibits movement behavior relative to a static target according to the comparison of at least two frames of images selected from the multi-frame images; uploading the captured image, or the video containing the image, to the cloud server according to the detection information; and outputting monitoring information according to the object recognition result received from the cloud server.
  • a sixth aspect of the present application provides a monitoring system, including: a cloud server; and a mobile robot connected to the cloud server; wherein the mobile robot performs the following steps: acquiring multi-frame images while in a moving state and uploading the multi-frame images to the cloud server; and the cloud server performs the following steps: outputting, according to the comparison of at least two frames of images selected from the multi-frame images, detection information of a moving target in the multi-frame images that exhibits movement behavior relative to a static target; and outputting the object recognition result of the moving target to the mobile robot, according to the recognition of the moving target in the multi-frame images, for the mobile robot to output monitoring information.
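A hedged sketch of the robot-side half of this exchange, in Python. The endpoint URL and the JSON shape of the recognition result are invented for illustration; the application only specifies that frames are uploaded and a recognition result comes back.

```python
import requests

CLOUD_URL = "https://cloud.example.com/recognize"  # hypothetical endpoint

def upload_frame(jpeg_bytes: bytes) -> dict:
    """Upload one captured frame and return the cloud server's
    object-recognition result for use in the monitoring information."""
    resp = requests.post(
        CLOUD_URL,
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"label": "person", "confidence": 0.93}
```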
  • in the monitoring method, device, monitoring system and mobile robot of the present application, multi-frame images are acquired by the camera device while the mobile robot moves in the monitoring area; at least two frames whose images overlap are selected from the multi-frame images, the selected images are compared by the image-compensation method or the feature-matching method, and monitoring information of the moving target that exhibits movement behavior relative to a static target is output according to the comparison result.
  • the position of the moving target presented in the at least two frames of images has an indefinitely changing attribute.
  • in this way, the present application can accurately identify a moving target in the monitoring area while the mobile robot is moving, generate monitoring information about the moving target so that corresponding reminders can be issued, and effectively guarantee the security of the monitoring area.
  • FIG. 1 shows a schematic flowchart of a method for monitoring a mobile target of the present application in a specific embodiment.
  • FIG. 2 shows a schematic image diagram of two frames of images selected in a specific embodiment of the present application.
  • FIG. 3 shows a schematic image diagram of two frames of images selected in a specific embodiment of the present application.
  • FIG. 4 shows a schematic flowchart of selecting at least two frames of images from multiple frames for comparison in a specific embodiment of the present application.
  • FIG. 5 shows a schematic flowchart of detecting a suspected target based on the comparison of at least two frames of images in a specific embodiment of the present application.
  • FIG. 6 shows a schematic image diagram of the first frame image selected in an embodiment of the present application.
  • FIG. 7 shows a schematic image diagram of a second frame image selected in an embodiment of the present application.
  • FIG. 8 shows a schematic flowchart of tracking a suspected target to confirm that the suspected target is a moving target in a specific embodiment of the present application.
  • FIG. 9 shows a schematic flow chart of comparison based on at least two frames of images selected from multiple frames of images in a specific embodiment of the present application.
  • FIG. 10 shows a schematic flowchart of detecting a suspect target according to a matching operation of corresponding feature information in at least two frames of images in a specific embodiment of the present application.
  • FIG. 11 shows a schematic flowchart of tracking a suspected target to confirm a moving target in a specific embodiment of the present application.
  • FIG. 12 shows a schematic flowchart of object recognition in a specific embodiment of the present application.
  • FIG. 13 shows a schematic flowchart of object recognition in a specific embodiment of the present application.
  • FIG. 14 shows a schematic diagram of the composition of a monitoring device applied to a mobile target of a mobile robot in a specific embodiment of the present application.
  • FIG. 15 shows a schematic diagram of the composition of the mobile robot of the present application in a specific embodiment.
  • FIG. 16 shows a schematic diagram of the composition of the monitoring system of the present application in a specific embodiment.
  • "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C".
  • an exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
  • Mobile robots perform mobile operations based on navigation control technology.
  • taking VSLAM (Visual Simultaneous Localization and Mapping) technology as an example, the mobile robot constructs a map from the visual information provided by the visual sensor and the movement information provided by the position measuring device, and gains navigation capability from the constructed map, so that the mobile robot can move autonomously.
  • the vision sensor includes a camera device.
  • examples of the position measuring device include speed sensors, odometer sensors, distance sensors, cliff sensors, and the like.
  • the mobile robot moves on the traveling plane, and acquires and stores a projection image about the traveling plane in advance.
  • the camera device takes a solid object within the field of view at the location of the mobile robot and projects it onto the traveling plane of the mobile robot to obtain a projected image.
  • the physical objects include, for example, televisions, air conditioners, chairs, shoes, and leather balls.
  • the mobile robot determines its current position in combination with the position information provided by the position measuring device, and refines the localization by identifying image features contained in the image taken by the camera device: the image features captured at the current position are matched against the stored image features on the map, thereby achieving rapid positioning.
  • the mobile robot is, for example, a security robot. After the security robot is started, it can traverse an area requiring security along a fixed or random route. Existing security robots usually upload all acquired images to the monitoring center; they cannot issue targeted reminders about suspicious objects in the captured images, so their intelligence is poor.
  • the present application provides a method for monitoring a moving target applied to a mobile robot.
  • the mobile robot includes a mobile device and a camera device.
  • the monitoring method may be performed by a processing device included in the mobile robot.
  • the processing device is an electronic device capable of performing numerical operations, logical operations and data analysis, including but not limited to a CPU, GPU or FPGA, together with volatile memory used for temporarily storing the intermediate data generated during operation.
  • the monitoring method compares at least two frames of the multi-frame images captured by the camera device and outputs monitoring information of a moving target that exhibits movement behavior relative to a static target, wherein the multi-frame images are captured by the camera device while the mobile device is in a working state.
  • examples of the static target include, but are not limited to: balls, shoes, walls, flower pots, coats and hats, roofs, lights, trees, tables, chairs, refrigerators, televisions, sofas, socks, tiled objects, cups, etc.
  • the tiled objects include, but are not limited to, floor mats, floor tile maps laid on the floor, and tapestries and paintings hanging on the wall.
  • the mobile robot is, for example, a specific security robot, and the security robot implements monitoring of the monitoring area according to the method for monitoring a moving target of the present application.
  • the mobile robot may also be another mobile robot equipped with a module that applies the monitoring method of the present application; for example, the other mobile robot may be a sweeping robot, a home companion mobile robot, or a glass-cleaning robot.
  • taking the sweeping robot as an example, it may traverse the entire area to be cleaned according to a map constructed in advance by VSLAM technology and the camera device mounted on it.
  • when the sweeping robot is started to begin its cleaning work, the module applying the monitoring method of the moving target is activated at the same time, realizing security monitoring while sweeping the floor.
  • Examples of the mobile device include a roller and a driver of the roller, where the driver is, for example, a motor.
  • the mobile device is used to drive the robot to perform back-and-forth reciprocating motion, rotary motion or curvilinear motion according to the planned moving trajectory, or drive the mobile robot to adjust the posture.
  • the mobile robot includes at least one camera device.
  • the camera device captures images within the field of view at the location of the mobile robot.
  • in one example, the mobile robot includes a single camera device disposed on the top, shoulder or back of the mobile robot, with its main optical axis perpendicular to the traveling plane of the mobile robot or aligned with the traveling direction of the mobile robot. The main optical axis may also be set at an angle (for example, between 50° and 86°) to the traveling plane of the mobile robot to obtain a larger imaging range.
  • the main optical axis of the camera device can also be set in many other ways; for example, the camera device can rotate in a regular or random manner, in which case the angle between the optical axis of the camera device and the traveling direction of the mobile robot changes from moment to moment. The installation method of the imaging device and the state of its main optical axis are therefore not limited to the enumeration in this embodiment.
  • in another example, the mobile robot includes two or more camera devices, for example a binocular camera device or a multi-camera device with more than two cameras.
  • the main optical axis of one camera device is perpendicular to the traveling plane of the mobile robot, or the main optical axis is consistent with the traveling direction of the mobile robot.
  • the main optical axis may also be set at an angle with the traveling direction of the mobile robot in a direction perpendicular to the traveling plane to obtain a larger imaging range.
  • the main optical axis of the camera device can also be set in many other ways; for example, the camera device can rotate in a regular or random manner, in which case the angle between the optical axis of the camera device and the traveling direction of the mobile robot changes from moment to moment. The installation method of the imaging device and the state of its main optical axis are therefore not limited to the enumeration in this embodiment.
  • the camera device includes but is not limited to: a fisheye camera module, a wide-angle (or non-wide-angle) camera module, a depth camera module, a camera module integrated with an optical system or a CCD chip, a camera module integrated with an optical system and a CMOS chip, and the like.
  • the power supply system of the camera device can be controlled by the power supply system of the mobile robot, so that when the mobile robot is powered on, the camera device starts to capture images.
  • FIG. 1 shows a schematic flowchart of a method for monitoring a moving target of the present application in a specific embodiment. As shown in FIG. 1, the method for monitoring a moving target includes the following steps:
  • Step S100: acquiring the multi-frame images captured by the camera device while the mobile device is in a working state. In an embodiment where the mobile robot is a sweeping robot, the mobile device of the mobile robot may include a walking mechanism and a walking driving mechanism, wherein the walking mechanism may be provided at the bottom of the robot body, and the walking driving mechanism is built into the robot body.
  • the walking mechanism may include, for example, a combination of two straight traveling wheels and at least one auxiliary steering wheel; the two straight traveling wheels are respectively disposed on opposite sides of the bottom of the robot body and may be independently driven by two corresponding walking driving mechanisms, that is, the left straight traveling wheel is driven by the left walking driving mechanism and the right straight traveling wheel is driven by the right walking driving mechanism.
  • the universal walking wheel or the straight traveling wheel may have an offset-drop suspension system, movably fastened (for example rotatably mounted) on the robot body and receiving a spring bias directed downward and away from the robot body. The spring bias allows the universal walking wheel or the straight traveling wheel to maintain contact and traction with the ground with a certain landing force.
  • the two straight traveling wheels are mainly used for moving forward and backward, while steering and rotation are achieved when the at least one auxiliary steering wheel participates and cooperates with the two straight traveling wheels.
  • the walking driving mechanism may include a driving motor and a control circuit that controls the driving motor, and the driving motor may be used to drive a walking wheel in the walking mechanism to achieve movement.
  • the drive motor may be, for example, a reversible drive motor, and a speed change mechanism may be provided between the drive motor and the axle of the walking wheel.
  • the walking driving mechanism can be detachably installed on the robot body, which is convenient for disassembly and maintenance.
  • the mobile robot, here a cleaning robot, captures multiple frames of images while walking; in other words, in step S100 the processing device acquires the multi-frame images captured by the camera device while the mobile device is in operation.
  • the multi-frame image is, for example, a multi-frame image acquired in a continuous time period, or a multi-frame image acquired in two or more intermittent time periods.
  • Step S200: according to the comparison of at least two frames of images selected from the multi-frame images, outputting the monitoring information of the moving target that exhibits movement behavior relative to a static target; the at least two frames of images are captured by the camera device with partially overlapping fields of view.
  • in step S200, the processing device of the mobile robot outputs the monitoring information of the moving target that exhibits movement behavior relative to a static target, according to the comparison of the at least two frames of images selected from the multi-frame images.
  • examples of the static targets include, but are not limited to: balls, shoes, walls, flower pots, coats, roofs, lights, trees, tables, chairs, refrigerators, TVs, sofas, socks, tiled objects, cups, etc.
  • tiled objects include but are not limited to floor mats, floor tile maps laid on the floor, tapestries and paintings hanging on the wall.
  • the two frames of images selected by the processing device should be images captured by the camera device with partially overlapping fields of view; that is, the processing device selects the first frame image and the second frame image on the basis that the two frames contain an image overlapping area and that the overlapping field of view contains a static target, so that a target moving relative to the static target in the overlapping field of view can be monitored.
  • the ratio of the image overlapping area to each frame can also be set; for example, the image overlapping area may be set to occupy at least 50% of each of the first frame image and the second frame image (but this is not limiting, and different ratios can be set according to the actual situation).
  • the selection of the first frame image and the second frame image should also have a certain continuity: while ensuring that the two frames share a set proportion of image overlapping area, the continuity of the acquired images also allows the moving trajectory of the moving target to be judged. Several ways of selecting the images are exemplified below.
  • the image selection methods described here are merely specific examples; in actual applications, the selection of the first frame image and the second frame image is not limited to them, and any other selection method that ensures that the two selected frames are relatively continuous and share a set ratio of image overlapping area can be applied in the present application.
  • the processing device selects the first frame image and the second frame image respectively at the first position and the second position with overlapping fields of view according to the field of view of the camera device.
  • the camera device can also shoot video; since video is composed of image frames, during the movement of the mobile robot the processing device continuously or discontinuously extracts image frames from the acquired video to obtain the multi-frame images, selects the first frame image and the second frame image according to a preset number of interval frames such that the two frames have a partially overlapping area, and then compares the two selected frames.
  • alternatively, the time interval at which the camera device captures images may be preset, so as to acquire multi-frame images captured by the camera device at different times; two of these frames are then selected for comparison. The time interval should at least be smaller than the time the mobile robot takes to traverse one field of view, to ensure that the two selected frames have a partially overlapping area.
  • for example, the camera device takes images within its field of view at a preset time period, the processing device acquires the images taken at different times, and two of them are selected to serve as the first frame image and the second frame image, with a partially overlapping area between them.
  • the time period can be represented by a time unit, or by a number of interval image frames.
  • the mobile robot communicates with an intelligent terminal, and the intelligent terminal can modify the time period through a specific APP (application program): for example, after the APP is opened, a modification interface for the time period is displayed on the touch screen of the smart terminal, and the modification is completed by touching that interface; alternatively, a time-period modification instruction is sent directly to the mobile robot. The modification instruction is, for example, a speech utterance containing the instruction, such as "modify the period to 3 seconds" or "modify the image frame interval to 5 frames".
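To make the interval constraint concrete: the interval must be shorter than the time the robot needs to traverse one field of view. A back-of-envelope Python sketch, with all numbers invented for illustration:

```python
def max_interval_seconds(fov_width_m: float, speed_m_s: float,
                         overlap: float = 0.5) -> float:
    """Longest time between two frames such that their fields of view
    still overlap by at least `overlap` of the frame width."""
    return fov_width_m * (1.0 - overlap) / speed_m_s

# A robot moving at 0.3 m/s whose field of view spans 1.2 m on the floor
# can use frames up to 2.0 s apart while keeping >= 50% overlap.
print(max_interval_seconds(1.2, 0.3))  # 2.0
```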
  • the position of the moving target in the at least two frames of images has an indefinitely changing attribute.
  • the mobile robot moves within a pre-built map by means of the mobile device, and the camera device captures multiple frames of images during the movement; the processing device selects two frames from the multi-frame images for comparison. According to the order of selection, the two frames are the first frame image and the second frame image; the position of the mobile robot corresponding to the first frame image is the first position, and the position corresponding to the second frame image is the second position. The two frames have an image overlapping area, and there is a static target in the overlapping field of view of the camera device.
  • because the mobile robot is in a moving state, the position of the static target in the second frame image has changed deterministically with respect to its position in the first frame image, and this deterministic change is related to the movement information of the mobile robot between the first and second positions, for example the movement distance and the posture change of the mobile robot from the first position to the second position.
  • the mobile robot includes a position measuring device, which is used to obtain the movement information of the mobile robot; the relative position information of the first position and the second position is measured from this movement information.
  • the position measuring device includes but is not limited to a displacement sensor, a distance measuring sensor, a cliff sensor, an angle sensor, a gyroscope, a binocular camera device, a speed sensor, etc., which are provided on the mobile robot.
  • the position measuring device continuously detects the movement information and provides it to the processing device.
  • the displacement sensor, gyroscope, speed sensor, etc. may be integrated in one or more chips.
  • the distance measuring sensor and the cliff sensor may be provided on the body side of the mobile robot. For example, the ranging sensor in the sweeping robot is set at the edge of the housing; the cliff sensor in the sweeping robot is set at the bottom of the mobile robot.
  • the movement information that the processing device can acquire includes but is not limited to: displacement information, angle information, distance information from obstacles, speed information, traveling direction information, and the like.
  • in one example, the position measuring device is a counting sensor provided on the motor of the mobile robot: the number of motor turns is counted to obtain the relative displacement of the mobile robot from the first position to the second position, and posture information and the like are obtained from the operating angle of the motor.
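A toy Python sketch of such counting-sensor odometry; the encoder resolution and wheel diameter are assumed values, not taken from the application.

```python
import math

TICKS_PER_REV = 360        # assumed encoder resolution
WHEEL_DIAMETER_M = 0.07    # assumed wheel diameter

def displacement_from_ticks(ticks: int) -> float:
    """Wheel travel distance derived from the motor-turn count."""
    return (ticks / TICKS_PER_REV) * math.pi * WHEEL_DIAMETER_M
```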
  • in another example, the mapping relationship between the unit grid length and the actual displacement is predetermined; according to the movement information obtained during the movement, the number of grid cells the mobile robot has moved from the first position to the second position is determined, yielding the relative position information of the two positions.
  • similarly, the mapping relationship between the unit vector length and the actual displacement may be determined in advance; according to the movement information obtained during the movement, the vector length from the first position to the second position is determined, yielding the relative position information of the two positions. This vector length can be expressed in pixels of the image.
  • the position of the static target in the second frame image is shifted, relative to its position in the first frame image, by the vector length corresponding to the relative position information; therefore the movement of the static target captured in the second frame image relative to the static target captured in the first frame image can be determined from the relative position information of the mobile robot, and has a deterministic change attribute.
  • by contrast, the movement of a moving target in the overlapping field of view of the two selected frames does not conform to the above deterministic change attribute.
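The deterministic-change test can thus be phrased as: does the observed pixel shift of a feature equal the shift predicted from odometry? A Python sketch, where `expected_shift_px` is the apparent image shift of the static scene, computed from the robot's relative displacement and an assumed pixels-per-meter calibration (its sign convention depends on the camera mounting):

```python
import numpy as np

def is_deterministic_shift(p1, p2, expected_shift_px, tol_px=5.0):
    """p1, p2: (x, y) pixel positions of the same feature in the first and
    second frame. True means the feature moved exactly as a static target
    should; False marks the indefinitely changing attribute of a suspect."""
    observed = np.asarray(p2, float) - np.asarray(p1, float)
    return np.linalg.norm(observed - np.asarray(expected_shift_px, float)) <= tol_px
```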
  • FIG. 2 is a schematic diagram of two frames of images selected in a specific embodiment of the present application.
  • the main optical axis of the imaging device is set to be perpendicular to the traveling plane. Therefore, the plane where the two-dimensional image captured by the imaging device is located has a parallel relationship with the traveling plane of the mobile robot.
  • an angle in the image relative to the moving direction of the mobile robot represents the angle, relative to that moving direction, of the position of a physical object projected onto the traveling plane of the mobile robot.
  • as shown in FIG. 2, the two frames of images have an image overlapping area, and there is a static target O in the overlapping field of view of the camera device. The position of the static target O in the second frame image has changed deterministically relative to its position in the first frame image, and this deterministic change is related to the movement information of the mobile robot at the first position P1 and the second position P2; the movement information is, for example, the movement distance of the mobile robot from the first position P1 to the second position P2.
  • the mobile robot includes a position measurement device, and the position measurement device of the mobile robot is used to acquire movement information of the mobile robot.
  • the position measuring device measures the traveling speed of the mobile robot, and calculates the relative displacement from the first position to the second position using the traveling speed and the travel time.
  • in another example, the position measuring device is a GPS (Global Positioning System), and the relative position information between the first position P1 and the second position P2 is obtained from the GPS positioning information at the two positions.
  • as shown in FIG. 2, the projection of the static target O in the first frame image is the static target projection O1, and its position in the second frame image is the static target projection O2. It can be clearly seen from FIG. 2 that the static target projection O1 in the first frame image has moved to the position of the static target projection O2 in the second frame image. The in-image displacement of the static target projection O2 relative to the static target projection O1 has a fixed ratio to the relative displacement between the first position P1 and the second position P2, so given the number of image pixels corresponding to a unit of actual distance, the displacement of O2 relative to O1 in the image can be obtained deterministically. Therefore, the movement of the static target captured in the second frame image relative to the static target captured in the first frame image can be determined from the relative position information of the mobile robot, and has a deterministic change attribute; the movement of a moving target in the overlapping field of view of the two selected frames, however, does not conform to this deterministic change attribute.
  • in yet another example, the position measuring device is a device based on measuring wireless signals, for example a Bluetooth (or WiFi) positioning device; it measures the power of the wireless positioning signal received at the first position P1 and the second position P2 to determine each position relative to a preset wireless-positioning-signal transmitting device, thereby obtaining the relative position information between the first position P1 and the second position P2.
  • FIG. 3 is a schematic diagram of two frames of images selected in a specific embodiment of the present application.
  • the main optical axis of the imaging device is set to be perpendicular to the traveling plane. Therefore, the plane where the two-dimensional image captured by the imaging device is located has a parallel relationship with the traveling plane of the mobile robot.
  • an angle in the image relative to the moving direction of the mobile robot represents the angle, relative to that moving direction, of the position of a physical object projected onto the traveling plane of the mobile robot.
  • the two frames selected in FIG. 3 are the first frame image and the second frame image; the position of the mobile robot corresponding to the first frame image is the first position P1′, and the position corresponding to the second frame image is the second position P2′.
  • since the mobile robot only changes its position from the first position P1′ to the second position P2′ and does not change its posture, the relative position information of the mobile robot at the first position P1′ and the second position P2′ can be obtained simply by measuring the relative displacement between the two.
  • as shown in FIG. 3, the two frames of images have an image overlapping area, and there is a moving target Q in the overlapping field of view of the camera device. While the mobile robot moves from the first position to the second position, the moving target Q also moves, becoming the moving target Q′ at a new position. The position of the moving target Q in the second frame image changes indefinitely relative to its position in the first frame image; that is, the magnitude of the change of the target's position across the two frames has no correlation with the movement information of the mobile robot at the first position P1′ and the second position P2′, so the change of the moving target Q's position in the two frames cannot be calculated from that movement information. The movement information is, for example, the moving distance of the mobile robot from the first position P1′ to the second position P2′.
  • the mobile robot includes a position measurement device, and the position measurement device of the mobile robot is used to acquire movement information of the mobile robot.
  • the position measuring device measures the traveling speed of the mobile robot, and calculates the relative displacement from the first position to the second position using the traveling speed and the travel time.
  • in other examples, the position measuring device is a GPS system or a device based on measuring wireless signals; the relative position information between the first position P1′ and the second position P2′ is acquired from its positioning information at the two positions.
  • as shown in FIG. 3, the projection of the moving target Q in the first frame image is the moving target projection Q1, and the position of the moving target Q′ in the second frame image is the moving target projection Q2. If the moving target Q were a static target, its projection in the second frame image would be the projection Q2′, i.e. the position that the moving target projection Q1 would reach through the deterministic change caused by the mobile robot moving from the first position P1′ to the second position P2′. In this embodiment, however, the position of the moving target Q in the two frames cannot be estimated from the movement information of the mobile robot at the first position P1′ and the second position P2′; the moving target Q therefore has an indefinitely changing attribute during the movement of the mobile robot.
  • FIG. 4 shows a schematic flowchart of selecting at least two frames of images from multiple frames for comparison in a specific embodiment of the present application.
  • the comparison according to at least two frames of images selected from the multiple frames of images further includes the following steps S210 and S220.
  • in step S210, the processing device detects a suspected target based on the comparison of the at least two frames of images. The suspected target is a target whose position in the first frame image and the second frame image has an indefinitely changing attribute, i.e. a target that has moved relative to the static target within the image overlapping area of the two frames.
  • FIG. 5 is a schematic flowchart of detecting a suspected target based on a comparison of at least two frames of images in a specific embodiment of the present application; that is, steps S211 and S212 in FIG. 5 implement the detection of a suspected target based on the comparison of the at least two frames of images.
  • in step S211, the processing device performs image compensation on the at least two frames of images based on the movement information of the mobile device during the time between the two frames. In some embodiments, movement information is generated while the mobile robot moves from the first position to the second position; the movement information is the relative displacement and the relative posture change of the mobile robot from the first position to the second position. The relative displacement can be measured by the position measuring device and, using the ratio between unit length in the image captured by the camera device and actual length, converted into the deterministic relative displacement of the projections of static targets inside the image overlapping area of the first and second frame images; the relative posture change of the mobile robot is acquired by the robot's posture detection device. The first frame image or the second frame image is then compensated according to this movement information: for example, image compensation is performed on the first frame image, or on the second frame image, according to the movement information.
  • in step S212, the processing device subtracts the image-compensated frames to form a differential image; that is, the compensated second frame image is subtracted from the original first frame image to form the differential image, or the compensated first frame image is subtracted from the original second frame image to form the differential image.
  • when there is no moving target, the compensated image subtraction result should be zero: the differential image of the overlapping area of the first frame image and the second frame image should not contain any features, i.e. the compensated second frame image is identical to the original first frame image in the overlapping area, or the compensated first frame image is identical to the original second frame image in the overlapping area.
  • conversely, when the compensated image subtraction result is not zero, the differential image of the image overlapping area of the first and second frame images contains a difference feature; that is, the compensated second frame image is not identical to the original first frame image in the overlapping area and there are parts that cannot coincide, or the compensated first frame image is not identical to the original second frame image in the overlapping area and there are parts that cannot coincide.
  • otherwise a static object (a table lamp, for example) could be misjudged as the suspected target. Therefore, when the differential image is not zero, it is further necessary to establish from the differential image that some object has a first movement trajectory during the time in which the first frame image and the second frame image were captured by the camera device; that object is the suspected target, i.e. there is a suspected target corresponding to the first movement trajectory in the overlapping field of view.
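A minimal OpenCV sketch of steps S211–S212 under simplifying assumptions: the ego-motion is approximated as a pure pixel translation `shift_px` (a real system would warp by the full transform implied by displacement and posture change), the frames are BGR images, and OpenCV ≥ 4 is assumed.

```python
import cv2
import numpy as np

def detect_suspects(frame1, frame2, shift_px, thresh=30, min_area=200):
    """shift_px: (dx, dy) apparent image shift of the static scene
    between the two frames, derived from the position measuring device."""
    h, w = frame1.shape[:2]
    # Step S211: compensate frame 2 so static targets line up with frame 1.
    M = np.float32([[1, 0, -shift_px[0]], [0, 1, -shift_px[1]]])
    compensated = cv2.warpAffine(frame2, M, (w, h))
    # Step S212: subtract to form the differential image.
    diff = cv2.absdiff(compensated, frame1)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Sufficiently large residual regions are the suspected targets.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```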
  • FIG. 6 is a schematic diagram of an image of a first frame image selected in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an image of a second frame image selected in an embodiment of the present application.
  • FIG. 6 shows the first frame image captured by the camera device of the mobile robot at the first position, and FIG. 7 shows the second frame image captured after the mobile robot has moved from the first position to the second position; the movement information of the mobile robot from the first position to the second position is measured by the robot's position measuring device.
  • the first frame image and the second frame image have an image overlapping area, shown by a dotted frame in FIG. 6 and FIG. 7; the image overlapping area corresponds to the overlapping fields of view of the camera device at the first position and the second position.
  • the overlapping field of view of the camera device includes a plurality of static objects, such as chairs, windows, bookshelves, clocks, sofas, and beds.
  • in FIGS. 6 and 7 a butterfly A has moved relative to the static targets in the overlapping field of view. To clarify the movement of the butterfly A, a static target in the overlapping field of view is now selected; it could be any of the chair, window, bookshelf, clock, sofa and bed, but because the images of the clock in FIGS. 6 and 7 are relatively complete, its shape is regular and easy to identify, and its size indicates the movement of the butterfly A well, the clock in FIGS. 6 and 7 is selected as the static target for clarifying the movement of the butterfly A. It can clearly be seen that in FIG. 6 the butterfly A is located on the left side of the clock, while in FIG. 7 the butterfly A is located on the right side of the clock.
  • The processing device subtracts the compensated second frame image from the original first frame image to form a difference image, and the butterfly A exists in the difference image; that is, butterfly A has a movement behavior relative to a static target in the overlapping field of view (here the clock, moving from its left side to its right side), and the positions that butterfly A presents in the first frame image and the second frame image have an attribute of uncertain change.
  • The first frame image at the first position, shown in FIG. 6, is captured by the camera device, and when the mobile robot moves to the second position, the second frame image shown in FIG. 7 is captured at the second position; the first frame image and the second frame image have the image overlapping area shown in FIGS. 6 and 7, and the image overlapping area corresponds to the overlapping field of view of the camera device at the first position and the second position.
  • the overlapping field of view of the camera device includes a plurality of static objects, such as chairs, windows, bookshelves, clocks, sofas, and beds.
  • There is also a moving butterfly A which has moved with respect to the static targets in the overlapping field of view in FIGS. 6 and 7; butterfly A is, for example, at the end of the bed and within the image overlapping area in the first frame image, and at the head of the bed and outside the image overlapping area in the second frame image. In this case, the second frame image is compensated according to the movement information measured by the position measuring device, and the processing device subtracts the compensated second frame image from the original first frame image to form a difference image, in which butterfly A appears as a feature that exists in both the first frame image and the second frame image and cannot be eliminated by subtraction. That is, it is judged that butterfly A, during the movement of the mobile robot from the first position to the second position, has a movement behavior relative to a static target in the overlapping field of view (here the bed, moving from the end of the bed to the head of the bed), and step S220 needs to be further executed, that is, tracking the suspected target to confirm that the suspected target is a moving target.
  • A special case is, for example, that due to the effect of wind, some hanging ornaments or chandeliers swing fairly regularly with a certain amplitude. These oscillations are generally only regular back-and-forth movements, or small irregular movements within a small range, and usually cannot form a continuous movement trajectory. Nevertheless, an object that swings in the wind will form a difference feature in the difference image and will have a movement trajectory, so it will be determined to be a suspected target; if only the method shown in FIG. 5 were used to determine that the suspected target is a moving target, these objects that swing with a certain amplitude in the wind would be misjudged as moving targets.
  • FIG. 8 shows a schematic flowchart of tracking a suspected target to confirm that the suspected target is a moving target in a specific embodiment of the present application.
  • For the method of tracking the suspected target to confirm the moving target, refer to step S221 and step S222 shown in FIG. 8.
  • In step S221, the processing device obtains the movement trajectory of the suspected target by tracking the suspected target; in step S222, if the movement trajectory of the suspected target is continuous, the suspected target is confirmed as a moving target.
  • In some embodiments, the third frame image, captured within the field of view of the mobile robot when moving from the second position to the third position, continues to be acquired; the first frame image, the second frame image, and the third frame image are sequentially acquired images, and the second frame image and the third frame image have an image overlapping area. The second frame image and the third frame image are also compared and detected according to steps S211 and S212: when the difference image of the overlapping area formed by subtracting the second frame image and the compensated third frame image is not zero, that is, when there is a difference feature in the difference image and the difference feature exists in both the second frame image and the third frame image, the second movement trajectory of the suspected target in the time when the camera device captures the second frame image and the third frame image is obtained from the difference image; when the first movement trajectory and the second movement trajectory are continuous, the suspected target is confirmed as a moving target.
  • Furthermore, more frames of images captured by the camera device over a relatively continuous time can be acquired in sequence, and each newly acquired image is compared and detected against its adjacent image according to steps S211 and S212, in order to obtain more movement trajectories of the suspected target, to judge whether the suspected target is a moving target, and to ensure the accuracy of the judgment result.
  • For the butterfly A in FIGS. 6 and 7, after the third frame image is acquired, butterfly A has moved to the head of the bed, and the second frame image shown in FIG. 7 and the third frame image are compared and detected according to steps S211 and S212.
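  • As a rough illustration of the continuity test in steps S221 and S222, the following sketch checks whether a sequence of detected positions forms a continuous trajectory; the helper, its distance threshold, and the minimum point count are assumptions for illustration, not values given in this application:

```python
import math

def is_continuous_trajectory(positions, max_step=50.0, min_points=3):
    """Judge whether detected positions form a continuous movement trajectory.

    positions: list of (x, y) centers of the suspected target, one per
    frame in acquisition order. A swinging ornament tends to produce
    back-and-forth positions within a small range; a genuinely moving
    target produces bounded consecutive steps with net progress.
    max_step and min_points are assumed tuning values.
    """
    if len(positions) < min_points:
        return False
    steps = [math.dist(a, b) for a, b in zip(positions, positions[1:])]
    net = math.dist(positions[0], positions[-1])
    # Continuous: every consecutive detection stays close to the last,
    # and the net displacement exceeds the small range of a mere swing.
    return all(s <= max_step for s in steps) and net > max_step
```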
  • The image features include preset graphic features corresponding to the suspected target, or image features of the suspected target obtained through image processing algorithms.
  • the image processing algorithm includes, but is not limited to, at least one of the following: grayscale processing, sharpening processing, contour extraction, angle extraction, line extraction, and image processing algorithms obtained by machine learning.
  • the image processing algorithms obtained through machine learning include but are not limited to: neural network algorithms, clustering algorithms, etc.
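  • For instance, the grayscale and sharpening steps mentioned above might look as follows; this is a minimal OpenCV sketch, and the sharpening kernel is one common choice rather than one mandated by this application:

```python
import cv2
import numpy as np

def preprocess(image):
    """Grayscale conversion followed by a simple sharpening filter."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)  # common sharpen kernel
    return cv2.filter2D(gray, -1, kernel)
```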
  • Among the multi-frame images captured by the camera device, the third frame image, within the field of view of the mobile robot when moving from the second position to the third position, continues to be acquired; the first frame image, the second frame image, and the third frame image are sequentially acquired images, and the second frame image and the third frame image have an image overlapping area. The suspected target is searched for in the third frame image based on the image features of the suspected target; a static target exists within the overlapping field of view of the camera device for the second frame image and the third frame image, and according to the relative position information of the mobile robot at the second position and the third position, together with the position change of the suspected target relative to that same static target in the second frame image and the third frame image, the second movement trajectory of the suspected target in the time when the camera device captures the second frame image and the third frame image is acquired.
  • When the first movement trajectory and the second movement trajectory are continuous, the suspected target is confirmed as a moving target.
  • Furthermore, more frames of images can be acquired, the suspected target is tracked in each newly acquired image and its adjacent image according to the image features of the suspected target, and more movement trajectories of the suspected target are thereby obtained to judge whether the suspected target is a moving target and to ensure the accuracy of the judgment result.
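  • Tracking the suspected target by its image features across a newly acquired frame could be sketched, for instance, with template matching; this is a minimal sketch in which using the suspected target's image patch as a template and the score threshold are assumptions, not requirements of this application:

```python
import cv2

def track_by_image_features(frame, template, score_threshold=0.7):
    """Search a new frame for the suspected target's image patch.

    frame: newly acquired grayscale image; template: grayscale patch of
    the suspected target cropped from an earlier frame. Returns the
    center of the best match, or None if the match is too weak.
    score_threshold is an assumed tuning value.
    """
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < score_threshold:
        return None  # suspected target not found in this frame
    th, tw = template.shape[:2]
    return (max_loc[0] + tw // 2, max_loc[1] + th // 2)
```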
  • FIG. 9 shows a schematic flowchart of a comparison based on at least two frames of images selected from multiple frames of images in a specific embodiment of the present application.
  • The comparison based on at least two frames of images selected from the multi-frame images includes step S210' and step S220' shown in FIG. 9.
  • In step S210', the processing device detects a suspected target according to a matching operation on the corresponding feature information in the at least two frames of images.
  • the feature information includes at least one of the following: feature points, feature lines, feature colors, and so on.
  • FIG. 10 shows a schematic flowchart of detecting a suspect target according to a matching operation of corresponding feature information in at least two frames of images in a specific embodiment of the present application.
  • the step S210' is realized by step S211' and step S212'.
  • In step S211', the processing device separately extracts each feature point in the at least two frames of images, and matches the extracted feature points of the at least two frames of images on a reference three-dimensional coordinate system, where the reference three-dimensional coordinate system is formed by three-dimensional modeling of the moving space, and the coordinates of each feature point of all static targets in the moving space are identified on the reference three-dimensional coordinate system.
  • the feature points include corner points, end points, and inflection points corresponding to the corresponding solid objects.
  • the set of feature points corresponding to a static target can form the outer contour of the static target, that is, the corresponding static target can be identified by the set of some feature points.
  • the user can perform image recognition on all static targets in the mobile space of the mobile robot in advance through the recognition conditions to obtain feature points about each of the static targets, and mark the coordinates of each feature point on the reference three-dimensional coordinate system .
  • the user can also manually upload the coordinates of the feature points of each static target according to a certain format and mark them on the reference three-dimensional coordinate system.
  • In step S212', the processing device detects, as a suspected target, a feature point set composed of corresponding feature points in the at least two frames of images that are not matched on the reference three-dimensional coordinate system.
  • The static targets identified on the reference three-dimensional coordinate system are matched against the two frames of images to determine whether the same feature point set, composed of feature points whose corresponding feature points are not matched on the reference three-dimensional coordinate system, has moved. Since the coordinates of each feature point of all static targets in the moving space are identified on the reference three-dimensional coordinate system, when there are feature points in the first frame image and the second frame image that do not correspond to the reference three-dimensional coordinate system, and the feature point sets formed by the unmatched feature points of the two images are the same or similar, it follows that the feature point set has moved relative to the static targets during the time when the camera device captures the first frame image and the second frame image, thereby forming a first movement trajectory of the feature point set; the feature point set is then detected as a suspected target.
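  • A rough sketch of this matching operation is given below, assuming the reference coordinates of static-target feature points are already available as an array and that extracted feature points have been projected into the same reference frame; the matching tolerance is an assumption:

```python
import numpy as np

def unmatched_feature_points(extracted, reference, tol=0.05):
    """Return extracted points with no counterpart among reference points.

    extracted: (N, 3) array of feature point coordinates from one frame,
    expressed in the reference three-dimensional coordinate system;
    reference: (M, 3) array of feature point coordinates of all static
    targets in the moving space. tol is an assumed matching tolerance.
    """
    unmatched = []
    for p in extracted:
        dists = np.linalg.norm(reference - p, axis=1)
        if dists.min() > tol:  # no static-target feature point nearby
            unmatched.append(p)
    return np.array(unmatched)

# Feature point sets unmatched in both frames that are the same or
# similar form the suspected target's feature point set.
```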
  • Feature points of static targets such as the chairs, windows, bookshelves, clocks, sofas, and beds in FIGS. 6 and 7 can be extracted in advance and identified in the reference three-dimensional coordinate system, whereas the butterfly A in FIGS. 6 and 7 is a newly added object and is not identified in the reference three-dimensional coordinate system. Therefore, matching the feature points extracted from the first frame image and the second frame image on the reference three-dimensional coordinate system yields a set of feature points that fail to match, and this feature point set displays features of butterfly A, for example, the outline feature of butterfly A.
  • In the first frame image shown in FIG. 6, butterfly A is located on the left side of the clock, and in the second frame image shown in FIG. 7, butterfly A is located on the right side of the clock; that is, through the matching of the first frame image and the second frame image with the feature points identified on the reference three-dimensional coordinate system, a first movement trajectory of the butterfly A moving from the left side of the clock to the right side of the clock can be obtained.
  • Then step S220' needs to be further executed, that is, tracking the suspected target to confirm that the suspected target is a moving target.
  • The special case is, again, that due to the effect of wind, some hanging ornaments or chandeliers swing fairly regularly with a certain amplitude; these oscillations are generally only regular back-and-forth movements, or small irregular movements within a small range, and usually do not form a continuous movement trajectory. Nevertheless, an object that swings in the wind will still form a difference feature and have a movement trajectory, so according to the method shown in FIG. 10 it will be determined to be a suspected target and, if that method alone were relied on, could be misjudged as a moving target.
  • FIG. 11 shows a schematic flowchart of tracking a suspected target to confirm a moving target in a specific embodiment of the present application; for the method of tracking the suspected target to confirm the moving target, refer to step S221' and step S222' shown in FIG. 11.
  • In step S221', the processing device obtains the movement trajectory of the suspected target by tracking the suspected target; in step S222', if the movement trajectory of the suspected target is continuous, the suspected target is confirmed as a moving target.
  • In some embodiments, the third frame image, captured with the mobile robot at the third position, continues to be acquired; the first frame image, the second frame image, and the third frame image are sequentially acquired images, and the second frame image and the third frame image have an image overlapping area. The second frame image and the third frame image are also compared and detected according to steps S211 and S212: when the difference image formed by subtracting the second frame image and the compensated third frame image is not zero, that is, the difference image has a difference feature and the difference feature is also present in the second frame image and the third frame image, the second movement trajectory of the suspected target in the time when the camera device captures the second frame image and the third frame image is obtained according to the difference image; when the first movement trajectory and the second movement trajectory are continuous, the suspected target is confirmed as a moving target.
  • The image features include preset graphic features corresponding to the suspected target, or image features of the suspected target obtained through image processing algorithms.
  • the image processing algorithm includes, but is not limited to, at least one of the following: grayscale processing, sharpening processing, contour extraction, angle extraction, line extraction, and image processing algorithms obtained by machine learning.
  • the image processing algorithms obtained through machine learning include but are not limited to: neural network algorithms, clustering algorithms, etc.
  • The second frame image and the third frame image have an image overlapping area, and the suspected target is searched for in the third frame image according to the image features of the suspected target. A static target exists within the overlapping field of view of the second frame image and the third frame image, and according to the relative position information of the mobile robot at the second position and the third position, together with the position change of the suspected target relative to that same static target in the second frame image and the third frame image, the second movement trajectory of the suspected target in the time when the camera device captures the second frame image and the third frame image is acquired.
  • When the first movement trajectory and the second movement trajectory are continuous, the suspected target is confirmed as a moving target.
  • Furthermore, more frames of images can be acquired, the suspected target is tracked in each newly acquired image and its adjacent image according to the image features of the suspected target, and more movement trajectories of the suspected target are thereby obtained to judge whether the suspected target is a moving target and to ensure the accuracy of the judgment result.
  • FIG. 12 is a schematic diagram of an object recognition process in a specific embodiment of the present application.
  • The monitoring method further includes step S300 and step S400. In step S300, the processing device performs object recognition on the moving target in the captured image; object recognition is a method of identifying target objects through feature matching or model recognition.
  • the steps of the object recognition method based on feature matching are generally: first extract the image features of the object, then describe the extracted features, and finally perform feature matching on the described object.
  • the image features include graphic features corresponding to the moving target, or image features obtained through image processing algorithms.
  • the image processing algorithm includes, but is not limited to, at least one of the following: grayscale processing, sharpening processing, contour extraction, angle extraction, line extraction, and image processing algorithms obtained through machine learning.
  • the moving target includes, for example, a moving person or a moving small animal.
  • The object recognition is completed by an object recognizer trained by a neural network; in some embodiments, the neural network model may be a convolutional neural network whose network structure includes an input layer, at least one hidden layer, and at least one output layer.
  • The input layer is used to receive the captured image or the pre-processed image; the hidden layer includes a convolution layer and an activation function layer, and may further include at least one of a normalization layer, a pooling layer, and a fusion layer, etc.; the output layer is used to output an image labeled with an object type label.
  • The connection method is determined according to the connection relationship of each layer in the neural network model, for example, the connection relationship between front and back layers set based on data transmission, the connection relationship with the data of the previous layer based on the size of the convolution kernel in each hidden layer, and full connection.
  • The characteristics and advantages of artificial neural networks are mainly manifested in three aspects: first, they have a self-learning function; second, they have an associative memory function; third, they have the ability to find optimized solutions at high speed.
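  • As a rough sketch of such a network structure, the following minimal example is written in PyTorch; the layer sizes, the two-class output, and all hyperparameters are assumptions for illustration, not values specified in this application:

```python
import torch.nn as nn

class ObjectRecognizer(nn.Module):
    """Minimal CNN: input -> hidden conv/activation/pooling -> output."""

    def __init__(self, num_classes=2):  # e.g. "person" vs "small animal"
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution layer
            nn.BatchNorm2d(16),                          # normalization layer
            nn.ReLU(),                                   # activation function layer
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.output = nn.Linear(32, num_classes)         # fully connected output

    def forward(self, x):  # x: batch of captured images, shape (B, 3, H, W)
        h = self.hidden(x)
        return self.output(h.flatten(1))
```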
  • the processing device outputs the monitoring information according to the result of object recognition.
  • the monitoring information includes: one or more of image information, video information, audio information, and text information.
  • The monitoring information may be an image or photo containing the moving target, or may be a prompt message sent to a preset communication address, such as an APP reminder, a short message, an email, a voice broadcast, or an alarm.
  • The prompt information includes keywords about the moving target. When the keyword of the moving target is "person", the prompt information may be an APP reminder containing the keyword "person", a short message, an email, a voice announcement, an alarm, etc., for example, the text or voice message "someone broke in".
  • The preset communication address includes at least one of the following: a phone number bound to the mobile robot, an instant messaging account (a WeChat account, QQ account, or Facebook account, etc.), an email address, a network platform, and the like.
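  • A sketch of how such monitoring information might be assembled is shown below; the helper names and the message templates are entirely hypothetical, and this application does not prescribe any particular messaging API:

```python
from dataclasses import dataclass

@dataclass
class MonitoringInfo:
    keyword: str     # e.g. "person", from the object recognition result
    message: str     # e.g. "someone broke in"
    image_path: str  # snapshot containing the moving target

def build_alert(recognition_label: str, image_path: str) -> MonitoringInfo:
    """Map an object recognition result to a prompt message (hypothetical)."""
    templates = {"person": "someone broke in",
                 "animal": "a small animal is moving"}
    return MonitoringInfo(keyword=recognition_label,
                          message=templates.get(recognition_label,
                                                "moving target detected"),
                          image_path=image_path)
```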
  • FIG. 13 shows a schematic flowchart of object recognition in a specific embodiment of the present application.
  • the method further includes step S500 and step S600;
  • In step S500, the processing device uploads the captured image, or a video containing the image, to a cloud server to perform object recognition on the moving target in the image;
  • the cloud server includes an object recognizer trained by a neural network;
  • In step S600, the processing device receives the object recognition result of the cloud server and outputs the monitoring information.
  • the monitoring information includes: one or more of image information, video information, audio information, and text information.
  • The monitoring information may be an image or photo containing the moving target, or may be a prompt message sent to a preset communication address, such as an APP reminder, a short message, an email, a voice broadcast, or an alarm.
  • The prompt information includes keywords about the moving target. When the keyword of the moving target is "person", the prompt information may be an APP reminder containing the keyword "person", a short message, an email, a voice announcement, an alarm, etc., for example, the text or voice message "someone broke in".
  • Placing the detection and recognition of moving targets on the cloud server can reduce the computing load on the local mobile robot, lower the hardware requirements of the mobile robot, and improve the mobile robot's execution efficiency, while making full use of the powerful processing capability of the cloud server so that the method executes faster and more accurately.
  • In some embodiments, after the mobile robot captures images through the camera device and selects the first frame image and the second frame image from the captured images, the mobile robot uploads the first frame image and the second frame image to the cloud server for image comparison, and receives the object recognition result fed back by the cloud server.
  • In other embodiments, the mobile robot directly uploads the captured images to the cloud server after capturing them through the camera device; the cloud server selects two frames of images according to the monitoring method of the moving target, performs image comparison on the selected two frames of images, and feeds the object recognition result back to the mobile robot.
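  • Uploading captured frames to a cloud server for recognition might look roughly as follows; this is a sketch only, in which the endpoint URL, the field names, and the JSON response shape are hypothetical, and the requests library is used purely for illustration:

```python
import requests

def upload_for_recognition(image_bytes: bytes,
                           endpoint: str = "https://example.com/recognize"):
    """Send a captured frame to a (hypothetical) cloud recognition service.

    Returns the recognition result parsed from the JSON response, e.g.
    {"label": "person", "confidence": 0.93}. The endpoint and response
    format are assumptions for illustration.
    """
    resp = requests.post(endpoint,
                         files={"image": ("frame.jpg", image_bytes,
                                          "image/jpeg")},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()
```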
  • For the monitoring method of the moving target, refer to FIG. 1 and the related descriptions regarding FIG. 1; details are not repeated here.
  • In summary, the method for monitoring a moving target of a mobile robot acquires multi-frame images through a camera device while the mobile robot moves in a monitoring area, selects from the multi-frame images at least two frames of images having an image overlapping area, compares the selected images according to the image compensation method or the feature matching method, and outputs, according to the comparison result, the monitoring information of a moving target that has a movement behavior relative to static targets.
  • the position of the moving target in the at least two frames of images has an attribute of uncertain change.
  • The present application can accurately identify the moving target in the monitoring area during the movement of the mobile robot, generate monitoring information about the moving target to give corresponding reminders, and effectively guarantee the security of the monitoring area.
  • FIG. 14 shows a schematic diagram of the composition of a monitoring device applied to a mobile target of a mobile robot in a specific embodiment of the present application.
  • the mobile robot includes a mobile device and a camera device.
  • the camera device is provided in the mobile robot, and is used to capture a solid object in the field of view at the location of the mobile robot and project it onto the traveling plane of the mobile robot to obtain a projected image; the camera device includes but is not limited to: Fisheye camera module, wide-angle (or non-wide-angle) camera module, depth camera module, camera module with integrated optical system or CCD chip, camera module with integrated optical system and CMOS chip, etc.
  • the mobile robots include, but are not limited to: family companion mobile robots, cleaning robots, patrol mobile robots, glass-wiping robots, and the like.
  • The power supply system of the camera device can be controlled by the power supply system of the mobile robot, so that the camera device starts to capture images when the mobile robot is powered on and moving.
  • the mobile robot includes at least one camera device. The camera device captures images within the field of view at the location of the mobile robot.
  • In some embodiments, the mobile robot includes a camera device disposed on the top, shoulder, or back of the mobile robot, with its main optical axis perpendicular to the traveling plane of the mobile robot or consistent with the traveling direction of the mobile robot; in other embodiments, the main optical axis may also be set at an angle (for example, an angle between 50° and 86°) to the traveling plane where the mobile robot is located, to obtain a larger imaging range.
  • In fact, the main optical axis of the camera device can be set in many other ways; for example, the camera device can rotate in a certain regular or random manner, in which case the angle between the optical axis of the camera device and the traveling direction of the mobile robot changes from moment to moment. Therefore, the installation method of the camera device and the state of its main optical axis are not limited to the enumeration in this embodiment.
  • the mobile robot includes two or more camera devices, for example, a binocular camera device or a multi-eye camera device larger than two.
  • the main optical axis of one camera device is perpendicular to the traveling plane of the mobile robot, or the main optical axis is consistent with the traveling direction of the mobile robot.
  • the main optical axis may also be set at an angle with the traveling direction of the mobile robot in a direction perpendicular to the traveling plane to obtain a larger imaging range.
  • In fact, the main optical axis of the camera device can be set in many other ways; for example, the camera device can rotate in a certain regular or random manner, in which case the angle between the optical axis of the camera device and the traveling direction of the mobile robot changes from moment to moment. Therefore, the installation method of the camera device and the state of its main optical axis are not limited to the enumeration in this embodiment.
  • the mobile device of the mobile robot may include a walking mechanism and a walking driving mechanism, wherein the walking mechanism may be disposed at the bottom of the robot body, and the walking driving mechanism Built into the robot body.
  • The walking mechanism may include, for example, a combination of two straight traveling wheels and at least one auxiliary steering wheel; the two straight traveling wheels are respectively disposed on opposite sides of the bottom of the robot body and may be independently driven by two corresponding walking driving mechanisms, that is, the left straight traveling wheel is driven by the left walking driving mechanism and the right straight traveling wheel is driven by the right walking driving mechanism.
  • The universal walking wheel or the straight traveling wheel may have an offset drop-type suspension system in which the wheel is fastened in a movable manner, for example rotatably mounted on the robot body, and receives a spring bias that biases it downward and away from the robot body; the spring bias allows the wheel to maintain contact and traction with the ground with a certain landing force.
  • In most cases, the two straight traveling wheels are mainly used for moving forward and backward, while steering and rotation can be achieved when the at least one auxiliary steering wheel participates and cooperates with the two straight traveling wheels.
  • the walking driving mechanism may include a driving motor and a control circuit that controls the driving motor, and the driving motor may be used to drive a walking wheel in the walking mechanism to achieve movement.
  • the drive motor may be, for example, a reversible drive motor, and a speed change mechanism may be provided between the drive motor and the axle of the walking wheel.
  • the walking driving mechanism can be detachably installed on the robot body, which is convenient for disassembly and maintenance.
  • the mobile object monitoring device 700 includes at least one processor 710 and at least one memory 720.
  • the processor 710 is an electronic device capable of performing numerical operations, logical operations, and data analysis, including but not limited to: CPU, GPU, FPGA, and the like.
  • the memory 720 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • The memory may also include memory remote from the one or more processors, such as network-attached storage accessed via RF circuits or external ports and a communication network, where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), etc., or an appropriate combination thereof.
  • the memory controller can control other components of the device, such as the CPU and peripheral interface, to access the memory.
  • The memory 720 is used to store images captured by the camera device in the working state of the mobile device; at least one program is stored in the at least one memory 720 and is configured to be executed by the at least one processor 710, and its execution causes the monitoring device 700 to implement the method for monitoring a moving target; for that method, refer to FIG. 1 and the related descriptions regarding FIG. 1, which are not repeated here.
  • the mobile robot 800 includes a mobile device 810, a camera device 820, and a monitoring device 830.
  • the camera device 820 is provided in the mobile robot 800, and is used to capture a solid object within the field of view at the location of the mobile robot 800 and project it onto the traveling plane of the mobile robot to obtain a projected image; the camera device 820 includes But not limited to: fisheye camera module, wide-angle (or non-wide-angle) camera module, depth camera module, camera module with integrated optical system or CCD chip, camera module with integrated optical system and CMOS chip, etc.
  • the mobile robot 800 includes, but is not limited to: a family companion mobile robot, a cleaning robot, a patrol mobile robot, a glass-wiping robot, and the like.
  • the power supply system of the camera device 820 can be controlled by the power supply system of the mobile robot 800.
  • the mobile robot 800 includes at least one camera 820.
  • the camera 820 takes an image in the field of view at the location where the mobile robot 800 is located.
  • The mobile robot 800 includes a camera device 820 disposed on the top, shoulder, or back of the mobile robot, with its main optical axis perpendicular to the traveling plane of the mobile robot 800 or consistent with the traveling direction of the mobile robot 800.
  • In other embodiments, the main optical axis may also be set at an angle (for example, an angle between 80° and 86°) to the traveling plane where the mobile robot is located, to obtain a larger imaging range.
  • the mobile robot includes two or more camera devices 820, for example, a binocular camera device or a multi-eye camera device larger than two.
  • the main optical axis of one camera device 820 is perpendicular to the traveling plane of the mobile robot, or the main optical axis is consistent with the traveling direction of the mobile robot, in other embodiments
  • the main optical axis can also be set at an angle with the traveling direction of the mobile robot in a direction perpendicular to the traveling plane to obtain a larger imaging range.
  • In fact, the main optical axis of the camera device 820 can be set in many other ways; for example, the camera device 820 may rotate in a certain regular or random manner, in which case the angle between the optical axis of the camera device 820 and the traveling direction of the mobile robot 800 changes constantly. Therefore, the installation method of the camera device 820 and the state of its main optical axis are not limited to those listed in this embodiment.
  • the mobile device 810 of the mobile robot 800 may include a walking mechanism and a walking driving mechanism, wherein the walking mechanism may be disposed at the bottom of the robot body, the The walking driving mechanism is built into the robot body.
  • The walking mechanism may include, for example, a combination of two straight traveling wheels and at least one auxiliary steering wheel; the two straight traveling wheels are respectively disposed on opposite sides of the bottom of the robot body and may be independently driven by two corresponding walking driving mechanisms, that is, the left straight traveling wheel is driven by the left walking driving mechanism and the right straight traveling wheel is driven by the right walking driving mechanism.
  • The universal walking wheel or the straight traveling wheel may have an offset drop-type suspension system in which the wheel is fastened in a movable manner, for example rotatably mounted on the robot body, and receives a spring bias that biases it downward and away from the robot body; the spring bias allows the wheel to maintain contact and traction with the ground with a certain landing force.
  • In most cases, the two straight traveling wheels are mainly used for moving forward and backward, while steering and rotation can be achieved when the at least one auxiliary steering wheel participates and cooperates with the two straight traveling wheels.
  • the walking driving mechanism may include a driving motor and a control circuit that controls the driving motor, and the driving motor may be used to drive a walking wheel in the walking mechanism to achieve movement.
  • the drive motor may be, for example, a reversible drive motor, and a speed change mechanism may be provided between the drive motor and the axle of the walking wheel.
  • the walking driving mechanism can be detachably installed on the robot body, which is convenient for disassembly and maintenance.
  • the monitoring device 830 is in communication with the mobile device 810 and the camera device 820.
  • the monitoring device 830 includes an image acquisition unit 831, a moving target detection unit 832, and an information output unit 833.
  • the image acquisition unit 831 is communicatively connected to both the mobile device 810 and the camera device 820, and the image acquisition unit 831 acquires the multi-frame images captured by the camera device 820 in the working state of the mobile device 810.
  • the multi-frame image is, for example, a multi-frame image acquired in a continuous time period, or a multi-frame image acquired in two or more intermittent time periods.
  • The moving target detection unit 832 is used to compare at least two frames of images selected from the multi-frame images to detect a moving target; the at least two frames of images are images taken by the camera device 820 within a partially overlapping field of view. That is, the basis on which the moving target detection unit 832 selects the first frame image and the second frame image is that the two frames of images contain the image overlapping area and that the overlapping field of view includes static targets, so that a moving target with a movement behavior relative to the static targets can be monitored through the comparison.
  • The proportion of the image overlapping area in the first frame image and the second frame image can also be set; for example, the image overlapping area may be set to occupy at least 50% of the first frame image and at least 50% of the second frame image (but this is not limiting, and different proportions can be set according to the actual situation).
  • In addition, the selection of the first frame image and the second frame image should have a certain continuity: while ensuring that the two frames share a certain proportion of image overlapping area, the continuity of the acquired images also allows the movement trajectory of the moving target to be judged.
  • the position of the moving target in the at least two frames of images has an attribute of uncertain change.
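  • A sketch of such a frame selection rule is given below, assuming the overlap ratio can be crudely estimated from the robot's displacement and the width of the area one frame covers; the 50% figure follows the example above, and everything else is an assumption:

```python
def select_frame_pair(frames, positions, fov_width_m, min_overlap=0.5):
    """Pick the first pair of frames whose estimated overlap is sufficient.

    frames: images in acquisition order; positions: robot x-positions (m)
    at each capture; fov_width_m: width of the area covered by one frame,
    in meters. Overlap is crudely estimated from lateral displacement
    only; a real system would use the full pose change.
    """
    for i in range(len(frames) - 1):
        shift = abs(positions[i + 1] - positions[i])
        overlap = max(0.0, 1.0 - shift / fov_width_m)
        if overlap >= min_overlap:
            return frames[i], frames[i + 1], overlap
    return None
```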
  • The information output unit 833 is configured to output, according to the comparison of the at least two frames of images, the monitoring information of a moving target that has a movement behavior relative to the static targets.
  • The static targets include, but are not limited to: balls, shoes, walls, flower pots, coats, roofs, lamps, trees, tables, chairs, refrigerators, televisions, sofas, socks, tiled objects, cups, and the like.
  • the tiled objects include, but are not limited to, floor mats, floor tile maps laid on the floor, and tapestries and paintings hanging on the wall.
  • the monitoring information includes one or more of image information, video information, audio information, and text information.
  • The monitoring information may be an image or photo containing the moving target, or may be a prompt message sent to a preset communication address, such as an APP reminder, a short message, an email, a voice broadcast, or an alarm.
  • The prompt information includes keywords about the moving target. When the keyword of the moving target is "person", the prompt information may be an APP reminder containing the keyword "person", a short message, an email, a voice announcement, an alarm, etc., for example, the text or voice message "someone broke in".
  • the preset communication address includes at least one of the following: a phone number bound to the mobile robot, an instant messaging account (WeChat account, QQ account, or facebook account, etc.), email address, and network platform, etc. .
  • the moving target detection unit 832 further includes a comparison module and a tracking module.
  • The comparison module is used to detect a suspected target according to the comparison of the at least two frames of images. In some embodiments, the comparison module detecting the suspected target according to the comparison of the at least two frames of images includes: the moving target detection unit 832 is also communicatively connected with the mobile device 810 to obtain the movement information of the mobile device 810 in the time between the at least two frames of images, and image compensation is performed on the at least two frames of images accordingly.
  • the tracking module is used to track the suspected target to confirm the moving target.
  • The tracking module tracking the suspected target to confirm the moving target includes: obtaining the movement trajectory of the suspected target by tracking the suspected target; and, if the movement trajectory of the suspected target is continuous, confirming the suspected target as a moving target.
  • In other embodiments, the moving target detection unit 832 may also recognize the moving target without the communication connection with the mobile device 810 described above.
  • the moving target detection unit 832 includes a matching module and a tracking module.
  • the matching module is used to detect a suspect target according to the matching operation of the corresponding feature information in the at least two frames of images.
  • The matching module detecting the suspected target according to the matching operation of the corresponding feature information in the at least two frames of images includes: separately extracting each feature point in the at least two frames of images and matching the extracted feature points on a reference three-dimensional coordinate system, where the reference three-dimensional coordinate system is formed through three-dimensional modeling of the moving space and identifies the coordinates of each feature point of all static targets in the moving space; and detecting, as a suspected target, a feature point set composed of corresponding feature points in the at least two frames of images that are not matched on the reference three-dimensional coordinate system.
  • the tracking module is used to track the suspected target to confirm the moving target.
  • The tracking module tracking the suspected target to confirm the moving target includes: obtaining the movement trajectory of the suspected target by tracking the suspected target; and, if the movement trajectory of the suspected target is continuous, confirming the suspected target as a moving target.
  • In some embodiments, the monitoring device 830 of the moving target further includes an object recognition unit for performing object recognition on the moving target in the captured image, so that the information output unit outputs the monitoring information according to the object recognition result; the object recognition unit is formed by training through a neural network.
  • Object recognition is the identification of target objects through feature matching or model recognition.
  • the steps of the object recognition method based on feature matching are generally: first extract the image features of the object, then describe the extracted features, and finally perform feature matching on the described object.
  • the image features include graphic features corresponding to the moving target, or image features obtained through image processing algorithms.
  • the image processing algorithm includes, but is not limited to, at least one of the following: grayscale processing, sharpening processing, contour extraction, angle extraction, line extraction, and image processing algorithms obtained through machine learning.
  • the moving target includes, for example, a moving person or a moving small animal.
  • The object recognition unit is formed by neural network training; in some embodiments, the neural network model may be a convolutional neural network whose network structure includes an input layer, at least one hidden layer, and at least one output layer.
  • The input layer is used to receive the captured image or the pre-processed image; the hidden layer includes a convolution layer and an activation function layer, and may further include at least one of a normalization layer, a pooling layer, and a fusion layer, etc.; the output layer is used to output an image labeled with an object type label.
  • The connection method is determined according to the connection relationship of each layer in the neural network model, for example, the connection relationship between front and back layers set based on data transmission, the connection relationship with the data of the previous layer based on the size of the convolution kernel in each hidden layer, and full connection.
  • The characteristics and advantages of artificial neural networks are mainly manifested in three aspects: first, they have a self-learning function; second, they have an associative memory function; third, they have the ability to find optimized solutions at high speed.
  • the monitoring information includes one or more of image information, video information, audio information, and text information.
  • The monitoring information may be an image or photo containing the moving target, or may be a prompt message sent to a preset communication address, such as an APP reminder, a short message, an email, a voice broadcast, or an alarm.
  • The prompt information includes keywords about the moving target. When the keyword of the moving target is "person", the prompt information may be an APP reminder containing the keyword "person", a short message, an email, a voice announcement, an alarm, etc., for example, the text or voice message "someone broke in".
  • the preset communication address includes at least one of the following: a phone number bound to the mobile robot, an instant messaging account (WeChat account, QQ account, or facebook account, etc.), email address, and network platform, etc. .
  • In some embodiments, the moving target monitoring device 830 further includes a transceiver unit for uploading the captured image, or a video containing the image, to a cloud server to perform object recognition on the moving target in the image, and for receiving the object recognition result of the cloud server so that the information output unit outputs the monitoring information; the cloud server includes an object recognizer trained by a neural network.
  • Placing the detection and recognition of moving targets on the cloud server can reduce the computing load on the local mobile robot, lower the hardware requirements of the mobile robot, and improve the mobile robot's execution efficiency, while making full use of the powerful processing capability of the cloud server so that the method executes faster and more accurately.
  • In some embodiments, after the mobile robot 800 captures images through the camera device and selects the first frame image and the second frame image from the captured images, the mobile robot 800 uploads the first frame image and the second frame image to the cloud server for image comparison, and receives the object recognition result fed back by the cloud server.
  • In other embodiments, the captured images are directly uploaded to the cloud server, and the functions of the moving target detection unit 832 are run in the cloud server to select two frames of images, perform image comparison on the selected two frames of images, and feed the object recognition result back to the mobile robot. In this way, the hardware requirements of the mobile robot itself are further reduced; moreover, when the operating program needs to be revised and updated, it is more convenient to modify and update it directly in the cloud, improving the efficiency and flexibility of system updates.
  • The technical solution of the moving target monitoring device 830 in the embodiment of FIG. 15 corresponds to the monitoring method of the moving target; for the monitoring method, refer to FIG. 1 and the related description regarding FIG. 1, and the description of the monitoring method can be applied to the related embodiments of the monitoring device 830, which is not repeated here.
  • the division of each module of the device in the embodiment of FIG. 15 is only a division of logical functions. In actual implementation, it may be integrated into a physical entity in whole or in part, or may be physically separated. And these modules can all be implemented in the form of software calling through processing elements; they can also be implemented in the form of hardware; some modules can also be implemented through processing elements calling software, and some modules can be implemented in hardware.
  • In addition, each module may be a separately established processing element, or it may be integrated in a chip of the above device, or it may be stored in the memory of the above device in the form of program code and called and executed by a processing element of the above device to perform the functions of the module; the implementation of the other modules is similar.
  • all or part of these modules can be integrated together or can be implemented independently.
  • the processing element described here may be an integrated circuit with signal processing capabilities.
  • each step of the above method or each of the above modules may be completed by an integrated logic circuit of hardware in the processor element or instructions in the form of software.
  • For example, the above modules may be one or more integrated circuits configured to implement the above method, for example, one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), etc.
  • the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU for short) or another processor that can call program code.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC for short).
  • In addition, the present application also provides a computer storage medium that stores at least one program; when the program is called, it executes any of the foregoing methods for monitoring a moving target; for the monitoring method, refer to FIG. 1 and the related description regarding FIG. 1, which is not repeated here.
  • the computer program code may be in a source code form, an object code form, an executable file, or some intermediate form.
  • The technical solution of the present application, in essence, or the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product may include one or more machine-readable media on which machine-executable instructions are stored, and when these instructions are executed by one or more machines, such as a computer, a computer network, or another electronic device, they may cause the one or more machines to perform operations according to the embodiments of the present application.
  • The machine-readable medium may include, but is not limited to, any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a floppy disk, a CD-ROM (compact disc read-only memory), a magneto-optical disk, a ROM (read-only memory), a RAM (random access memory), an EPROM (erasable programmable read-only memory), an EEPROM (electrically erasable programmable read-only memory), magnetic or optical cards, flash memory, electrical carrier signals, telecommunications signals, and software distribution media, or other types of media/machine-readable media suitable for storing machine-executable instructions; it should be noted that in some cases computer-readable media do not include electrical carrier signals and telecommunications signals.
  • the storage medium may be located in a mobile robot or a third-party server, such as a server that provides an application store. There are no restrictions on specific application stores, such as Huawei App Store, Apple App Store, etc.
  • FIG. 16 shows a schematic diagram of the composition of the monitoring system of this application in a specific embodiment.
  • the monitoring system 900 includes a cloud server 910 and a mobile robot 920, and the mobile robot 920 is connected to the cloud server 910.
  • the mobile robot 920 includes a camera device and a mobile device.
  • the mobile robot 920 moves in the three-dimensional space shown in FIG. 16, and the moving space shown in FIG. 16 has the moving object Butterfly A.
  • The mobile robot 920 captures multiple frames of images while moving through the mobile device, selects two frames of images from the multiple frames for comparison, and outputs monitoring information of a moving target that has a movement behavior relative to static targets.
  • The selected two frames of images are, for example, the first frame image shown in FIG. 6 and the second frame image shown in FIG. 7; the first frame image and the second frame image have the image overlapping area indicated by the dotted frames, which corresponds to the overlapping field of view of the camera device at the first position and the second position.
  • the overlapping field of view of the camera device includes a plurality of static objects, such as chairs, windows, bookshelves, clocks, sofas, and beds.
  • In the first frame image, butterfly A is located on the left side of the clock; in the second frame image, butterfly A is located on the right side of the clock. The first frame image and the second frame image are compared to obtain a suspected target that has moved relative to the static target.
  • For the method of image comparison, for example, the movement information of the mobile robot 920 during the process in which the camera device captures the first frame image and the second frame image is acquired according to the mobile device of the mobile robot 920; the first frame image or the second frame image is compensated according to the movement information, and the compensated image is subtracted from the original image of the other frame to obtain a suspected target (butterfly A) with a movement trajectory in the image overlapping area (moving from the left side of the clock to the right side of the clock).
  • Alternatively, for the method of image comparison, referring for example to the feature comparison method shown in FIG. 10, the feature points in the first frame image and the second frame image are extracted, and the extracted feature points of the two frames of images are matched on a reference three-dimensional coordinate system, which is formed after modeling the moving space shown in FIG. 16. The processing device of the mobile robot 920 detects, as a suspected target, a feature point set composed of corresponding feature points in the two frames of images that are not matched on the reference three-dimensional coordinate system; according to the successively acquired multiple frames of images, the suspected target is tracked to obtain a continuous movement trajectory of the suspected target, and it is confirmed that the suspected target (butterfly A) is a moving target. Further, according to the method steps shown in FIG. 8 or FIG. 11, the suspected target is tracked to obtain its movement trajectory, and when the movement trajectory of the suspected target is continuous, the suspected target is confirmed as a moving target; in this embodiment, by tracking the butterfly A as a suspected target, a continuous movement trajectory of the butterfly A is obtained, for example, moving from the left side of the clock to the right side of the clock, and then to the head of the bed.
  • After confirming the moving target, the mobile robot 920 uploads the image or video containing the moving target to the cloud server 910, and the mobile robot 920 receives the object recognition result for the moving target from the cloud server 910 and outputs monitoring information.
  • The object recognition process is, for example, recognizing images and videos containing the moving target through preset image features, such as image point features, image line features, or image color features.
  • For example, the contour of the butterfly A can be detected to identify the moving target as the butterfly A.
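  • Such a contour-based step might be sketched as follows; this is a minimal sketch with OpenCV, and treating the largest foreground contour as the target's outline is an assumed simplification:

```python
import cv2

def extract_target_contour(mask):
    """Return the largest contour in a binary foreground mask.

    mask: binary image where the moving target's pixels are nonzero
    (e.g. the thresholded difference image). The returned contour can
    then be compared against preset outline features (assumed to be
    available) to recognize the target, e.g. as butterfly A.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)
```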
  • the mobile robot 920 may receive the object recognition result fed back by the cloud server 910, and output monitoring information to the specified client according to the object recognition result.
  • the client is, for example, an electronic device with a smart data processing function such as a smart phone, a tablet computer, a smart watch.
  • In other embodiments, after the cloud server 910 performs object recognition and obtains the object recognition result, it directly outputs the monitoring information to the designated client according to the object recognition result.
  • the cloud server 910 performs object recognition on the received image or video containing the image, and feeds back the object recognition result to the mobile robot 920.
  • Placing the detection and recognition of moving targets on the cloud server can reduce the computing load on the local mobile robot, lower the hardware requirements of the mobile robot, and improve the mobile robot's execution efficiency, while making full use of the powerful processing capability of the cloud server so that the method executes faster and more accurately.
  • In another embodiment, the mobile robot 920 acquires multiple frames of images while moving and uploads the multiple frames of images to the cloud server 910, and the cloud server 910 selects two frames of images from the multiple frames for comparison.
  • the selected two frames of images are, for example, the first frame image shown in FIG. 6 and the second frame image shown in FIG. 7.
  • the first frame image and the second frame image have an image overlapping area, shown as the dotted frames in FIG. 6 and FIG. 7.
  • the image overlapping area corresponds to the overlapping field of view of the camera device at the first position and the second position; a sketch of one way to pick such a frame pair follows.
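The description offers, as one example, requiring the overlapping area to be at least 50% of each frame. A minimal sketch of selecting such a pair from odometry alone is given below, under the simplifying (and invented) assumption that the camera footprint merely translates with the robot.

```python
# Illustrative sketch only; poses are assumed already converted to pixels.
def overlap_ratio(dx, dy, w, h):
    """Fraction of the first frame still visible after a (dx, dy) shift."""
    ox = max(0.0, w - abs(dx))
    oy = max(0.0, h - abs(dy))
    return (ox * oy) / (w * h)

def pick_second_frame(frames, poses, w, h, min_overlap=0.5):
    """frames/poses are parallel lists; poses hold (x, y) in pixel units.
    Return the latest frame still overlapping the first by min_overlap."""
    x0, y0 = poses[0]
    best = None
    for frame, (x, y) in zip(frames[1:], poses[1:]):
        if overlap_ratio(x - x0, y - y0, w, h) >= min_overlap:
            best = frame
        else:
            break
    return best
```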
  • the overlapping field of view of the camera device includes a plurality of static objects, such as chairs, windows, bookshelves, clocks, sofas, and beds.
  • in the first frame image, butterfly A is located on the left side of the clock; in the second frame image, butterfly A is located on the right side of the clock. The first frame image and the second frame image are compared to obtain a suspected target that has moved relative to the static targets.
  • the mobile device of the mobile robot 920 provides the movement information acquired while the camera device captured the first frame image and the second frame image.
  • the first frame image or the second frame image is compensated according to the movement information, and the compensated image is differentially subtracted from the original image of the other frame to obtain a suspected target (butterfly A) with a regional movement trajectory (moving from the left side of the clock to the right side of the clock).
  • alternatively, referring to the feature comparison method shown in FIG. 10, feature points are extracted from the first frame image and the second frame image, and the extracted feature points of the two frames are matched on a reference three-dimensional coordinate system, which is formed by modeling the moving space shown in FIG. 16.
  • the processing device of the mobile robot 920 detects, as a suspected target, the set of corresponding feature points in the two frames of images that fail to match on the reference three-dimensional coordinate system; then, according to the method steps shown in FIG. 8 or FIG. 11 and the successively acquired multiple frames of images, the suspected target is tracked to obtain its movement trajectory, and when that trajectory is continuous the suspected target is confirmed as a moving target.
  • in this embodiment, tracking butterfly A, the suspected target that has moved relative to a static target (for example, the clock), yields a continuous movement trajectory, for example from the left side of the clock to the right side of the clock, and then to the head of the bed.
  • after the cloud server 910 performs object recognition on the moving target, the recognition result is fed back to the mobile robot 920.
  • as in the previous embodiment, the mobile robot 920 may receive the object recognition result fed back by the cloud server 910 and output monitoring information to a designated client according to that result, or the cloud server 910 may output the monitoring information to the designated client directly.
  • the mobile robot 920 also communicates with a designated client via a mobile network; the client is, for example, a smartphone, tablet computer, smartwatch, or other electronic device with intelligent data processing functions (a sketch of composing such a notification follows).
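The transport below is a hypothetical injected callable standing in for any of the channels the description names (APP push, SMS, e-mail, voice broadcast, alarm); the wording of the alert follows the description's "someone has broken in" example.

```python
# Illustrative sketch only; the messaging channel is hypothetical.
def build_alert(label):
    """Compose the text part of the monitoring information, keyed on the
    recognized label of the moving target."""
    if label == "person":
        return "Alert: someone has broken in to the monitored area."
    return f"Moving target detected: {label}."

def notify_client(send, label, snapshot_jpeg):
    """send: injected transport callable, e.g. an SMS/IM/e-mail gateway."""
    send(text=build_alert(label), attachment=snapshot_jpeg)
```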
  • in summary, the monitoring method, device, monitoring system and mobile robot of the present application select, from the multiple frames of images acquired by the camera device while the mobile robot moves in the monitored area, at least two frames that share an image overlapping area, compare the selected images by the image compensation method or the feature matching method, and output monitoring information of a moving target that moves relative to the static targets according to the comparison result.
  • the position of the moving target presented in the at least two frames of images has an uncertain-change attribute.
  • the application can accurately identify moving targets in the monitored area while the mobile robot moves, generate monitoring information about each moving target to issue corresponding reminders, and effectively guarantee the security of the monitored area; the sketch below ties the illustrative steps together.
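Finally, purely to show how the illustrative pieces above compose (none of this code is from the filing), a hypothetical robot-side loop might chain detection, continuity confirmation, cloud recognition, and client notification; `camera`, `odometry.delta()` and `send` are assumed interfaces.

```python
# Illustrative sketch only; reuses detect_suspected_target, is_continuous,
# recognize_on_cloud and notify_client from the earlier sketches.
import cv2

def monitoring_loop(camera, odometry, send):
    """camera yields BGR frames captured while the robot moves;
    odometry.delta() yields (dx, dy, dtheta) between consecutive frames."""
    trajectory, prev = [], None
    for frame in camera:
        if prev is not None:
            dx, dy, dtheta = odometry.delta()
            boxes = detect_suspected_target(prev, frame, dx, dy, dtheta)
            if boxes:
                x, y, w, h = boxes[0]
                trajectory.append((x + w / 2.0, y + h / 2.0))
                # Suspected target becomes a moving target once the track
                # is continuous; then recognize on the cloud and notify.
                if is_continuous(trajectory):
                    jpeg = cv2.imencode(".jpg", frame)[1].tobytes()
                    result = recognize_on_cloud(jpeg)
                    notify_client(send, result.get("label", "unknown"), jpeg)
                    trajectory.clear()
        prev = frame
```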

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Signal Processing (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

A method for monitoring a moving target, a monitoring device (830), a monitoring system (900) and a mobile robot (920). From the multiple frames of images acquired by a camera device (820) while the mobile robot (920) moves within a monitored area, at least two frames having an image overlapping area are selected; the selected images are compared by an image compensation method or a feature matching method, and monitoring information of a moving target that moves relative to static targets is output according to the comparison result. The position presented by the moving target in the at least two frames of images has an uncertain-change attribute. The method can accurately identify moving targets in the monitored area while the mobile robot (920) is moving, and generate monitoring information about the moving target to issue corresponding reminders, effectively guaranteeing the security of the monitored area.

Description

移动目标的监控方法、装置、监控系统及移动机器人 技术领域
本申请涉及智能移动机器人领域,特别是涉及一种移动目标的监控方法、装置、监控系统及移动机器人。
背景技术
当前我国已经进入改革发展、社会转型跃迁的关键时期,经济体制的深刻变革,社会结构的快速转变,利益格局在加速调整,思想观念飞速变化,流动人口的大量增加,这种空前的社会变革,给我国的发展进步带来巨大活力的同时,各种安全隐患也随之出现,社会治安高度变化为社会安防带来巨大的压力。
为了保证家庭环境的安全,现在很多人会考虑在家中安装防盗系统,目前的防盗手段大多是靠人防和物防(如防盗门、铁护栏等)。在一些情况下,一些家庭还会采用以下的安防方式:安装红外线防盗报警装置、安装电磁密码门锁或在家中安装监控摄像头等。以上方式的防盗手段比较固定且明显,监控装置不能在移动的过程中准确的检测非法入侵目标,很容易让非法闯入的人躲过这些防盗装置的监控,不能提供较为可靠、有效器且灵活的安防。
发明内容
鉴于以上所述现有技术的缺点,本申请的目的在于提供一种移动目标的监控方法、装置、监控系统及移动机器人,用于解决现有技术中不能在机器人移动的过程中有效准确的检测移动目标的问题。
为实现上述目的及其他相关目的,本申请的第一方面提供一种应用于移动机器人的移动目标的监控方法,所述移动机器人包括移动装置和摄像装置,所述移动目标的监控方法包括以下步骤:在移动装置的工作状态下获取摄像装置所摄取的多帧图像;根据自所述多帧图像中选取的至少两帧图像的比对输出包含有相对静态目标发生移动行为的移动目标的监控信息;其中,所述至少两帧图像为摄像装置在部分重叠视场内所摄取的图像,所述移动目标在所述至少两帧图像中呈现的位置具有不确定变化属性。
在本申请的第一方面的某些实施方式中,所述根据自所述多帧图像中选取的至少两帧图像的比对包括以下步骤:根据所述至少两帧图像的比对检测出疑似目标;跟踪所述疑似目标以确认移动目标。
在本申请的第一方面的某些实施方式中,所述根据所述至少两帧图像的比对检测出疑似 目标包括以下步骤:基于所述移动装置在所述至少两帧图像之间的时间内的移动信息,对所述至少两帧图像进行图像补偿;将经图像补偿后的所述至少两帧图像作相减处理形成差分图像,从所述差分图像中检测出疑似目标。
在本申请的第一方面的某些实施方式中,所述根据自所述多帧图像中选取的至少两帧图像的比对包括以下步骤:根据所述至少两帧图像中对应的特征信息的匹配操作检测出疑似目标;跟踪所述疑似目标以确认移动目标。
在本申请的第一方面的某些实施方式中,所述根据所述至少两帧图像中对应的特征信息的匹配操作检测出疑似目标包括以下步骤:分别提取所述至少两帧图像中的各个特征点,将提取的所述至少两帧图像中各个特征点在一参考三维坐标系上进行匹配;所述参考三维坐标系是通过对移动空间进行三维建模形成的,所述参考三维坐标系上标识有移动空间内所有静态目标中各个特征点的坐标;将所述至少两帧图像中未在所述参考三维坐标系上实现匹配的对应特征点所组成的特征点集合检测为疑似目标。
在本申请的第一方面的某些实施方式中,所述跟踪所述疑似目标以确认移动目标包括以下步骤:根据疑似目标的跟踪获得疑似目标的移动轨迹;若所述疑似目标的移动轨迹为连续时,则将所述疑似目标确认为移动目标。
在本申请的第一方面的某些实施方式中,还包括以下步骤:对摄取的图像中的移动目标进行物体识别;所述物体识别是通过经神经网络训练的物体识别器完成的;根据物体识别的结果输出所述监控信息。
在本申请的第一方面的某些实施方式中,还包括以下步骤:将摄取的图像或包含图像的视频上传至云端服务器以对图像中的移动目标进行物体识别;所述云端服务器中包括经神经网络训练的物体识别器;接收所述云端服务器的物体识别的结果并输出所述监控信息。
在本申请的第一方面的某些实施方式中,所述监控信息包括:图像信息、视频信息、音频信息、文字信息中的一种或多种。
为实现上述目的及其他相关目的,本申请的第二方面提供一种应用于移动机器人的移动目标的监控装置,所述移动机器人包括移动装置和摄像装置,所述移动目标的监控装置包括:至少一个处理器;至少一个存储器,用于存储由所述摄像装置在所述移动装置的工作状态下摄取的图像;至少一个程序,其中,所述至少一个程序被存储在所述至少一个存储器中并被配置为由所述至少一个处理器执行指令,所述至少一个处理器执行所述执行指令使得所述监控装置执行并实现如上任一项所述的移动目标的监控方法。
为实现上述目的及其他相关目的,本申请的第三方面提供一种应用于移动机器人的移动目标的监控装置,所述移动机器人包括移动装置和摄像装置,所述监控装置包括:图像获取 单元,用于在移动装置的工作状态下获取摄像装置所摄取的多帧图像;移动目标检测单元,用于对自所述多帧图像中选取的至少两帧图像进行比对以检测移动目标;其中,所述至少两帧图像为摄像装置在部分重叠视场内所摄取的图像,所述移动目标在所述至少两帧图像中呈现的位置具有不确定变化属性;信息输出单元,用于根据所述至少两帧图像的比对输出包含有相对静态目标发生移动行为的移动目标的监控信息。
在本申请的第三方面的某些实施方式中,所述移动目标检测单元包括:比对模块,用于根据所述至少两帧图像的比对检测出疑似目标;跟踪模块,用于跟踪所述疑似目标以确认移动目标。
在本申请的第三方面的某些实施方式中,所述比对模块根据所述至少两帧图像的比对检测出疑似目标包括:基于所述移动装置在所述至少两帧图像之间的时间内的移动信息,对所述至少两帧图像进行图像补偿;将经图像补偿后的所述至少两帧图像作相减处理形成差分图像,从所述差分图像中检测出疑似目标。
在本申请的第三方面的某些实施方式中,所述跟踪模块跟踪所述疑似目标以确认移动目标包括:根据疑似目标的跟踪获得疑似目标的移动轨迹;若所述疑似目标的移动轨迹为连续时,则将所述疑似目标确认为移动目标。
在本申请的第三方面的某些实施方式中,所述移动目标检测单元包括:匹配模块,用于根据所述至少两帧图像中对应的特征信息的匹配操作检测出疑似目标;跟踪模块,用于跟踪所述疑似目标以确认移动目标。
在本申请的第三方面的某些实施方式中,所述匹配模块根据所述至少两帧图像中对应的特征信息的匹配操作检测出疑似目标包括:分别提取所述至少两帧图像中的各个特征点,将提取的所述至少两帧图像中各个特征点在一参考三维坐标系上进行匹配;所述参考三维坐标系是通过对移动空间进行三维建模形成的,所述参考三维坐标系上标识有移动空间内所有静态目标中各个特征点的坐标;将所述至少两帧图像中未在所述参考三维坐标系上实现匹配的对应特征点所组成的特征点集合检测为疑似目标。
在本申请的第三方面的某些实施方式中,所述跟踪模块跟踪所述疑似目标以确认移动目标包括:根据疑似目标的跟踪获得疑似目标的移动轨迹;若所述疑似目标的移动轨迹为连续时,则将所述疑似目标确认为移动目标。
在本申请的第三方面的某些实施方式中,还包括:物体识别单元,用于对摄取的图像中的移动目标进行物体识别,以供所述信息输出单元根据物体识别的结构输出所述监控信息;所述物体识别单元是通过经神经网络训练形成的。
在本申请的第三方面的某些实施方式中,还包括:收发单元,用于将摄取的图像或包含 图像的视频上传至云端服务器以对图像中的移动目标进行物体识别以及接收所述云端服务器的物体识别的结果以供所述信息输出单元输出所述监控信息;所述云端服务器中包括经神经网络训练的物体识别器。
在本申请的第三方面的某些实施方式中,所述监控信息包括:图像信息、视频信息、音频信息、文字信息中的一种或多种。
为实现上述目的及其他相关目的,本申请的第四方面提供一种移动机器人,包括:移动装置,用于按照所接收的控制指令控制移动机器人移动;摄像装置,用于在移动装置的工作状态下摄取多帧图像;如上任一项所述的监控装置。
为实现上述目的及其他相关目的,本申请的第五方面提供一种监控系统,包括:云端服务器;移动机器人,与所述云端服务器连接;其中,所述移动机器人执行以下步骤:在移动状态下获取多帧图像;根据自所述多帧图像中选取的至少两帧图像的比对输出包含有相对静态目标发生移动行为的移动目标的检测信息;根据所述检测信息将摄取的图像或包含图像的视频上传至所述云端服务器;根据接收自所述云端服务器的物体识别的结果输出监控信息。
为实现上述目的及其他相关目的,本申请的第六方面提供一种监控系统,包括:云端服务器;移动机器人,与所述云端服务器连接;其中,所述移动机器人执行以下步骤:在移动状态下获取多帧图像并将所述多帧图像上传至云端服务器;所述云端服务器执行以下步骤:根据自所述多帧图像中选取的至少两帧图像的比对输出包含有相对静态目标发生移动行为的移动目标的检测信息;根据所述多帧图像中移动目标的识别输出移动目标的物体识别的结果至所述移动机器人,以供所述移动机器人输出监控信息。
本申请的移动目标的监控方法、装置、监控系统及移动机器人,根据移动机器人在监控区域中移动的状态下通过摄像装置获取的多帧图像中,且从多帧图像中选取存在图像重叠区域的至少两帧图像,并根据图像补偿法或特征匹配法对选取的图像进行比对,并根据比对结果输出包含有相对静态目标发生移动行为的移动目标的监控信息。所述移动目标在所述至少两帧图像中呈现的位置具有不确定变化属性。本申请可以在移动机器人移动过程中精确的识别监控区域中的移动目标,并生成关于该移动目标的监控信息以进行相应的提醒,有效的保证监控区域的安全性。
附图说明
图1显示为本申请的移动目标的监控方法在一具体实施例中的流程示意图。
图2显示为本申请的一具体实施例中选取的两帧图像的图像示意图。
图3显示为本申请的一具体实施例中选取的两帧图像的图像示意图。
图4显示为本申请的一具体实施例中自多帧图像中选取至少两帧图像进行对比的流程示意图。
图5显示为本申请的一具体实施例中根据至少两帧图像的比对检测出疑似目标的流程示意图。
图6显示为本申请的一实施例中选取的第一帧图像的图像示意图。
图7显示为本申请的一实施例中选取的第二帧图像的图像示意图。
图8显示为本申请的一具体实施例中跟踪疑似目标以确认疑似目标为移动目标的流程示意图。
图9显示为本申请的一具体实施例中根据自多帧图像中选取的至少两帧图像的比对的流程示意图。
图10显示为本申请一具体实施例中根据至少两帧图像中对应的特征信息的匹配操作检测出疑似目标的流程示意图。
图11显示为本申请一具体实施例中跟踪疑似目标以确认移动目标的流程示意图。
图12显示为本申请的一具体实施例中物体识别的流程示意图。
图13显示为本申请的一具体实施例中物体识别的流程示意图。
图14显示为本申请的应用于移动机器人的移动目标的监控装置在一具体实施例中的组成示意图。
图15显示为本申请的移动机器人在一具体实施例中的组成示意图。
图16显示为本申请的监控系统在一具体实施例中的组成示意图。
具体实施方式
以下由特定的具体实施例说明本申请的实施方式,熟悉此技术的人士可由本说明书所揭露的内容轻易地了解本申请的其他优点及功效。
在下述描述中,参考附图,附图描述了本申请的若干实施例。应当理解,还可使用其他实施例,并且可以在不背离本申请的精神和范围的情况下进行机械组成、结构、电气以及操作上的改变。下面的详细描述不应该被认为是限制性的,并且本申请的实施例的范围仅由公布的专利的权利要求书所限定。这里使用的术语仅是为了描述特定实施例,而并非旨在限制本申请。空间相关的术语,例如“上”、“下”、“左”、“右”、“下面”、“下方”、“下部”、“上方”、“上部”等,可在文中使用以便于说明图中所示的一个元件或特征与另一元件或特征的关系。
再者,如同在本文中所使用的,单数形式“一”、“一个”和“该”旨在也包括复数形式,除 非上下文中有相反的指示。应当进一步理解,术语“包含”、“包括”表明存在所述的特征、步骤、操作、元件、组件、项目、种类、和/或组,但不排除一个或多个其他特征、步骤、操作、元件、组件、项目、种类、和/或组的存在、出现或添加。此处使用的术语“或”和“和/或”被解释为包括性的,或意味着任一个或任何组合。因此,“A、B或C”或者“A、B和/或C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A、B和C”。仅当元件、功能、步骤或操作的组合在某些方式下内在地互相排斥时,才会出现该定义的例外。
为了保证家庭环境的安全,现在很多人会考虑在家中安装防盗系统,目前的防盗手段大多是靠人防和物防(如防盗门、铁护栏等)。在一些情况下,一些家庭还会采用以下的安防方式:安装红外线防盗报警装置、安装电磁密码门锁或在家中安装监控摄像头等。以上方式的防盗手段比较固定且明显,很容易让非法闯入的人躲过这些防盗装置的监控,不能提供较为可靠有效的安防。
移动机器人基于导航控制技术执行移动操作。其中,受移动机器人所应用的场景影响,当移动机器人处于未知环境的未知位置时,利用VSLAM(Visual Simultaneous Localization and Mapping,基于视觉的即时定位与地图构建)技术可以帮助移动机器人构建地图并执行导航操作。具体地,移动机器人通过视觉传感器所提供的视觉信息以及位置测量装置所提供的移动信息来构建地图,并根据所构建的地图为移动机器人提供导航能力,使得移动机器人能自主移动。其中,所述视觉传感器举例包括摄像装置。所述位置测量装置举例包括速度传感器、里程计传感器、距离传感器、悬崖传感器等。所述移动机器人在行进平面进行移动,预先获取并存储关于所述行进平面的投影图像。所述摄像装置在移动机器人所在位置摄取视场范围内实体对象并投影至所述移动机器人的行进平面,以得到投影图像。所述实体对象例如包含:电视机、空调、椅子、鞋子、皮球等。在现有的实际应用中,移动机器人结合位置测量装置所提供的位置信息确定当前位置,以及通过识别摄像装置所摄取的图像所包含的图像特征来帮助定位当前位置,以便于移动机器人将其在当前位置所拍摄的图像特征,与所存储的相匹配图像特征在地图中的位置进行对应,由此实现快速定位。
所述移动机器人例如为安防机器人,所述安防机器人在启动后,可以以一个确定的或者随机的路线遍历需要安防的区域,现有的安防机器人通常是将获取的所有图像都上传至监控中心,不能有针对性的对摄取的图像中的可疑对象进行提醒,智能性较差。
为此,本申请提供一种应用于移动机器人的移动目标的监控方法,所述移动机器人包括移动装置和摄像装置,所述监控方法可由移动机器人包括的处理装置来执行。其中,所述处理装置为一种能够进行数值运算、逻辑运算及数据分析的电子设备,其包括但不限于:CPU、GPU、FPGA等,以及用于暂存运算期间所产生的中间数据的易失性存储器等。所述 监控方法通过对摄像装置所摄取的多帧图像中的至少两帧图像的比对,输出包含有相对静态目标发生移动行为的移动目标的监控信息,其中,所述多帧图像为所述摄像装置在所述移动装置处于工作状态下所摄取的。所述静态目标举例但不限于:球、鞋、墙壁、花盆、衣帽、屋顶、灯、树、桌子、椅子、冰箱、电视、沙发、袜、平铺物体、杯子等。其中,平铺物体包括但不限于平铺在地板上的地垫、地砖贴图,以及挂在墙壁上的挂毯、挂画等。所述移动机器人例如为特定的安防机器人,该安防机器人根据本申请的移动目标的监控方法实现对监控区域的监控。在另一些实施例中,所述移动机器人还可以为搭载有应用本申请的移动目标的监控方法的模块的其他移动机器人,所述其他移动机器人例如为扫地机器人、家庭陪伴式移动机器人或者擦玻璃的机器人等。所述移动机器人例如为扫地机器人时,所述移动机器人可以根据预先通过VSLAM技术所构建的地图以及结合扫地机器人搭载的摄像装置,控制扫地机器人遍历整个待清扫区域。并在令所述扫地机器人开启扫地工作的同时启动所述扫地机器人所具有的搭载有应用本申请的移动目标的监控方法的模块,在扫地的同时实现安防监控。所述移动装置举例包括滚轮和滚轮的驱动器,其中驱动器举例如为马达等。所述移动装置用于驱动机器人按照规划的移动轨迹进行前后往复运动、旋转运动或曲线运动等,或者驱动所述移动机器人进行姿态的调整。
在此,所述移动机器人至少包括一个摄像装置。所述摄像装置在移动机器人所在位置摄取视场范围内的图像。例如,移动机器人包含一个摄像装置,其设置于所述移动机器人顶部、肩部或背部,且主光轴垂直于所述移动机器人的行进平面、或主光轴与所述移动机器人的行进方向一致,在另一些实施例中,所述主光轴还可与所述移动机器人所在的所述行进平面呈一定的夹角(比如50°至86°之间的夹角)的设置,以获得更大的摄像范围。在其他实施例中,所述摄像装置的主光轴的设置还可以有很多其他的方式,例如摄像装置可以以一定的规律或者随机的进行转动,此时摄像装置的光轴与移动机器人行进方向的角度则处于时刻变化的状态,所以所述摄像装置的安装方式以及摄像装置的主光轴的状态不以本实施例中的列举为限。又例如所述移动机器人包含二个或更多个摄像装置,例如,双目摄像装置或大于两个的多目摄像装置。在二个或更多个摄像装置中,其中一个摄像装置的主光轴垂直于所述移动机器人的行进平面、或主光轴与所述移动机器人的行进方向一致,在另一些实施例中,所述主光轴还可与所述移动机器人的行进方向在垂直于所述行进平面的方向上呈一定的夹角的设置,以获得更大的摄像范围。在其他实施例中,所述摄像装置的主光轴的设置还可以有很多其他的方式,例如摄像装置可以以一定的规律或者随机的进行转动,此时摄像装置的光轴与移动机器人行进方向的角度则处于时刻变化的状态,所以所述摄像装置的安装方式以及摄像装置的主光轴的状态不以本实施例中的列举为限。所述摄像装置包括但不限于:鱼眼摄 像模块、广角(或非广角)摄像模块、深度摄像模块、集成有光学系统或CCD芯片的摄像模块、集成有光学系统和CMOS芯片的摄像模块等。
所述摄像装置的供电系统可受移动机器人的供电系统控制,当移动机器人上电移动期间,所述摄像装置即开始摄取图像。
参阅图1,图1显示为本申请的移动目标的监控方法在一具体实施例中的流程示意图。如图1所示,所述移动目标的监控方法包括以下步骤:
步骤S100:在移动装置的工作状态下获取摄像装置所摄取的多帧图像;在所述移动机器人为扫地机器人的实施例中,所述移动机器人的移动装置可包括行走机构和行走驱动机构,其中,所述行走机构可设置于所述机器人本体的底部,所述行走驱动机构内置于所述机器人本体内。所述行走机构可例如包括两个直行行走轮和至少一个辅助转向轮的组合,所述两个直行行走轮分别设于机器人本体的底部的相对两侧,所述两个直行行走轮可分别由对应的两个行走驱动机构实现独立驱动,即,左直行行走轮由左行走驱动机构驱动,右直行行走轮由右行走驱动机构驱动。所述的万向行走轮或直行行走轮可具有偏置下落式悬挂系统,以可移动方式紧固,例如以可旋转方式安装到机器人本体上,且接收向下及远离机器人本体偏置的弹簧偏置。所述弹簧偏置允许万向行走轮或直行行走轮以一定的着地力维持与地面的接触及牵引。在实际的应用中,所述至少一个辅助转向轮未参与的情形下,所述两个直行行走轮主要用于前进和后退,而在所述至少一个辅助转向轮参与并与所述两个直行行走轮配合的情形下,就可实现转向和旋转等移动。所述行走驱动机构可包括驱动电机和控制所述驱动电机的控制电路,利用所述驱动电机可驱动所述行走机构中的行走轮实现移动。在具体实现上,所述驱动电机可例如为可逆驱动电机,且所述驱动电机与所述行走轮的轮轴之间还可设置有变速机构。所述行走驱动机构可以可拆卸地安装到机器人本体上,方便拆装和维修。在本实施例中,为扫地机器人的移动机器人在行走的状态下拍摄多帧图像,换言之,在步骤S100中,由处理装置在移动装置的工作状态下获取摄像装置所摄取的多帧图像。在实施例中,所述多帧图像例如为在一个连续的时间段内获取的多帧图像,或者在两个或者多个间断的时间段内获取的多帧图像。
步骤S200:根据自所述多帧图像中选取的至少两帧图像的比对输出包含有相对静态目标发生移动行为的移动目标的监控信息;所述至少两帧图像为摄像装置在部分重叠视场内所摄取的图像。在本实施例中,所述移动机器人的处理装置根据自所述多帧图像中选取的至少两帧图像的比对输出包含有相对静态目标发生移动行为的移动目标的监控信息。
在一些实施例中,所述静态目标举例但不限于:球、鞋、墙壁、花盆、衣帽、屋顶、灯、树、桌子、椅子、冰箱、电视、沙发、袜、平铺物体、杯子等。其中,平铺物体包括但 不限于平铺在地板上的地垫、地砖贴图,以及挂在墙壁上的挂毯、挂画等。
其中,在步骤S200中,处理装置所选取的两帧图像应为所述摄像装置在部分重叠视场内所摄取的图像。即处理装置确定选取第一帧图像和第二帧图像的依据是两帧图像中包含图像重叠区域,且该重叠视场内包含静态目标,以对该重叠视场中相对静态目标发生移动行为的移动目标进行监控。为了保证选取的两帧图像的比对结果的有效性,还可对所述第一帧图像和所述第二帧图像的图像重叠区域的占比进行设定,例如设置所述图像重叠区域分别占所述第一帧图像和所述第二帧图像的比例至少为50%(但并不局限于此,依据实际情况可以设定不同的第一帧图像和所述第二帧图像的比例)。所述第一帧图像和第二帧图像的选取应具有一定的连续性,在保证两者具有一定比例的图像重叠区域的同时,还可根据获取的图像对移动目标的移动轨迹的连续性进行判断。下面将例举几种选取所述图像的方式,该例举中描述的图像选取方法只是一些特定的方式,在实际应用中选取所述第一帧图像和所述第二帧图像的方式并不以此为限,其他可以保证选取的该两帧图像为相对连续的图像且具有设定比例的图像重叠区域的图像选取方式均可应用于本申请中。
在一些实施方式中,处理装置依据摄像装置的视场范围在具有重叠视场的第一位置和第二位置处分别选取第一帧图像和第二帧图像。
例如,摄像装置可以拍摄视频,由于视频是由图像帧构成的,在移动机器人移动期间,处理装置连续或不连续地采集所获取的视频中的图像帧以获得多帧图像,且根据预设的间隔帧数选取第一帧图像和第二帧图像,两帧图像具有部分重叠区域,然后处理装置根据选用的两帧图像进行图像比对。
又如,在移动机器人移动期间,摄像装置连续获取处理装置可以预先设定摄像装置拍摄图像的时间间隔获取经摄像装置拍摄的不同时刻下的多帧图像;在多帧图像其中,选取两帧图像进行比对,所述时间间隔应至少小于移动机器人移动一个视场范围所花费的时长,以保证该两帧图像中选取的两帧图像之间存在部分重叠的部分。
又如,所述摄像装置以预设的时间周期令所述移动机器人摄取其视场范围内的图像,然后处理装置获取经摄像装置以预设时间周期摄取的不同时刻下的图像,且选取其中两张图像作为第一帧图像和第二帧图像,该两帧图像之间存在部分重叠的部分。其中,所述时间周期可由时间单位表示,或者所述时间周期由图像帧的间隔数量来表示。
再如,所述移动机器人与智能终端通信,所述智能终端可以通过特定的APP(应用程序)对所述时间周期进行修改。例如在打开所述APP后,在所述智能终端的触摸屏上显示所述时间周期的修改界面,通过对所述修改界面的触摸操作,完成对所述时间周期的修改;又或者直接向所述移动机器人发送时间周期修改指令以对所述时间周期进行修改,所述时间周期修 改指令,例如为包括修改指令的语音,所述语音例如为“周期修改为3秒”。又如,所述语音为“图像帧间隔修改为5幅”。
在步骤S200中,所述移动目标在所述至少两帧图像中呈现的位置具有不确定变化属性。在一些实施例中,所述移动机器人通过所述移动装置在预先构建的地图中进行移动,且摄像装置摄取移动过程中的多帧图像,处理装置自所述多帧图像中选取两帧图像的进行比对,按照图像选取的次序,选取的两帧图像分别为第一帧图像和第二帧图像,且所述移动机器人在第一帧图像对应的位置为第一位置,所述移动机器人在第二帧图像对应的位置为第二位置,该两帧图像具有图像重叠区域,且摄像装置的重叠视场内具有静态目标,由于所述移动机器人处于移动的状态,静态目标在第二帧图像中的位置相对于第一帧图像中的位置进行了确定性的变化,且静态目标在该两帧图像中的位置的确定性的变化幅度与所述移动机器人在第一位置和第二位置的移动信息相关,该移动信息例如为所述移动机器人从所述第一位置到所述第二位置的移动距离和姿态变化信息。在一些实施例中,所述移动机器人包括位置测量装置,利用所述移动机器人的位置测量装置获取所述移动机器人的移动信息,且根据所述移动信息测量所述第一位置和第二位置之间的相对位置信息。
所述位置测量装置包括但不限于设置在移动机器人上的位移传感器、测距传感器、悬崖传感器、角度传感器、陀螺仪、双目摄像装置、速度传感器等。在移动机器人移动期间,位置测量装置不断侦测移动信息并提供给处理装置。所述位移传感器、陀螺仪、速度传感器等可被集成在一个或多个芯片中。所述测距传感器和悬崖传感器可设置在移动机器人的体侧。例如,扫地机器人中的测距传感器被设置在壳体的边缘;扫地机器人中的悬崖传感器被设置在移动机器人底部。根据移动机器人所布置的传感器的类型和数量,处理装置所能获取的移动信息包括但不限于:位移信息、角度信息、与障碍物之间的距离信息、速度信息、行进方向信息等。例如,所述位置测量装置为设置于所述移动机器人的马达的计数传感器,利用马达运转的圈数进行计数以获得所述移动机器人自第一位置移动至第二位置的相对位移,以及利用马达运转的角度获取姿态信息等。
在另一些实施中,以所述地图为一种栅格地图为例,预先确定单位栅格长度与实际位移之间的映射关系,按照移动机器人在移动期间所得到的移动信息,确定移动机器人从第一位置到第二位置移动的栅格数,进而获得该两位置的相对位置信息。
以所述预先构建的地图为一种矢量地图为例,预先确定单位矢量长度与实际位移之间的映射关系,按照移动机器人在移动期间所得到的移动信息,确定移动机器人从第一位置到第二位置移动的矢量长度,进而获得该两位置的相对位置信息。该矢量长度可以图像的像素为单位进行计算。且所述静态目标在第二帧图像中的位置相对于第一帧图像中的位置进行了与 所述相对位置信息对应的矢量长度的偏移,所以第二帧图像中摄取的静态目标相对于第一帧图像中摄取的静态目标的移动是可以根据所述移动机器人的相对位置信息确定的,具有确定性的变化属性。而所述重叠视场内的所述移动目标在所选取的两帧图像中的移动不符合上述的确定性的变化属性。
请参阅图2,图2显示为本申请的一具体实施例中选取的两帧图像的图像示意图。在本实施例中,设定所述摄像装置的主光轴垂直于行进平面,因此,摄像装置所摄取的二维图像所在平面与移动机器人的行进平面具有平行关系。以此设置方式,我们用摄像装置所摄取的实体对象在投影图像中的位置来表示该实体对象投影至所述移动机器人的行进平面的位置,且利用所述实体对象在投影图像中的位置相对于所述移动机器人移动方向的角度来表征该实体对象投影至所述移动机器人的行进平面的位置相对于所述移动机器人移动方向的角度。在图2中选取的两帧图像分别为第一帧图像和第二帧图像,且所述移动机器人在第一帧图像对应的位置为第一位置P1,所述移动机器人在第二帧图像对应的位置为第二位置P2,该实施例中,所述移动机器人从所述第一位置P1到所述第二位置P2只发生了距离的变化,而未发生姿态的变化,在此只需要测量两者的相对位移即可获得移动机器人在第一位置P1和第二位置P2的相对位置信息。该两帧图像具有图像重叠区域,且摄像装置的重叠视场内具有如图2所示的静态目标O,由于所述移动机器人处于移动的状态,静态目标O在第二帧图像中的位置相对于第一帧图像中的位置进行了确定性的变化,且静态目标O在该两帧图像中的位置的确定性的变化幅度与所述移动机器人在第一位置P1和第二位置P2的移动信息相关,该移动信息在该实施例中例如为所述移动机器人从所述第一位置P1到所述第二位置P2的移动距离。在该实施例中,所述移动机器人包括位置测量装置,利用所述移动机器人的位置测量装置获取所述移动机器人的移动信息。又如,所述位置测量装置测量移动机器人的行进速度,并利用行进速度及所行进的时长计算自第一位置移动至第二位置的相对位移。在另一些实施例中,所述位置测量装置为GPS(Global Positioning System,全球定位系统),根据该GPS在第一位置和第二位置的定位信息,获取所述第一位置P1和第二位置P2之间的相对位置信息。如图2所示,静态目标O在第一帧图像中的投影为静态目标投影O1,静态目标O在第二帧图像中的位置为静态目标投影O2,且从图2中可以清楚地看到在第一帧图像中的静态目标投影O1变化到了第二帧图像中静态目标投影O2的位置,两者在图像中的位置发生了变化,且静态目标投影O2相对静态目标投影O1在图像中的变化距离与所述第一位置P1和第二位置P2之间的相对位移呈一确定的比例,可以通过单位实际距离对应图像中的像素的比例而确定性的获得静态目标投影O2相对静态目标投影O1在图像中的变化距离。所以第二帧图像中摄取的静态目标相对于第一帧图像中摄取的静态目标的移动是可以根据所述移动机器人的相 对位置信息确定的,具有确定性的变化属性。而所述重叠视场内的所述移动目标在所选取的两帧图像中的移动不符合上述的确定性的变化属性。
在又一些实施例中,所述位置测量装置为基于测量无线信号而定位的装置,例如,所述位置测量装置为蓝牙(或WiFi)定位装置;位置测量装置根据在第一位置P1和第二位置P2各自对所接收的无线定位信号的功率进行测量,来确定各位置相对于预设无线定位信号发射装置的相对位置,藉此以获取所述第一位置P1和第二位置P2之间的相对位置信息。
而当移动机器人在移动期间,在摄取的第一帧图像和第二帧图像中存在移动目标时,则该移动目标的移动具有不确定性的变化属性。请参阅图3,图3显示为本申请的一具体实施例中选取的两帧图像的图像示意图。在本实施例中,设定所述摄像装置的主光轴垂直于行进平面,因此,摄像装置所摄取的二维图像所在平面与移动机器人的行进平面具有平行关系。以此设置方式,我们用摄像装置所摄取的实体对象在投影图像中的位置来表示该实体对象投影至所述移动机器人的行进平面的位置,且利用所述实体对象在投影图像中的位置相对于所述移动机器人移动方向的角度来表征该实体对象投影至所述移动机器人的行进平面的位置相对于所述移动机器人移动方向的角度。图3中选取的两帧图像分别为第一帧图像和第二帧图像,且所述移动机器人在第一帧图像对应的位置为第一位置P1',所述移动机器人在第二帧图像对应的位置为第二位置P2',该实施例中,所述移动机器人从所述第一位置P1'到所述第二位置P2'只发生了距离的变化,而未发生姿态的变化,在此只需要测量两者的相对位移即可获得移动机器人在第一位置P1'和第二位置P2'的相对位置信息。该两帧图像具有图像重叠区域,且摄像装置的重叠视场内具有如图3所示的移动目标Q,且所述移动机器人从所述第一位置移动至所述第二位置的过程中,所述移动目标Q进行了移动且成为了新位置上的移动目标Q',移动目标Q在第二帧图像中的位置相对于第一帧图像中的位置进行了不确定性的变化,即移动目标Q在该两帧图像中的位置的变化幅度与所述移动机器人在第一位置P1'和第二位置P2'的移动信息没有相关性,通过移动机器人在第一位置P1'和第二位置P2'的移动信息并不能推算出所述移动目标Q在该两帧图像中位置的变化,该移动信息在该实施例中例如为所述移动机器人从所述第一位置P1'到所述第二位置P2'的移动距离。在该实施例中,所述移动机器人包括位置测量装置,利用所述移动机器人的位置测量装置获取所述移动机器人的移动信息。又如,所述位置测量装置测量移动机器人的行进速度,并利用行进速度及所行进的时长计算自第一位置移动至第二位置的相对位移。在另一些实施例中,所述位置测量装置为GPS系统或基于测量无线信号而定位的装置,根据该GPS系统或基于测量无线信号而定位的装置在第一位置和第二位置的定位信息,获取所述第一位置P1'和第二位置P2'之间的相对位置信息。如图3所示,移动目标Q在第一帧图像中的投影为移动目标投影Q1,移动目标Q'在第 二帧图像中的位置为移动目标投影O2,而当所述移动目标Q是一个静态目标时,其在第二帧图像中的投影应为投影Q2',即移动目标投影Q2'是移动目标投影Q1在移动机器人从第一位置P1'移动至第二位置P2'的过程中发生确定性的变化后的图像投影位置,而本实施例中,并不能根据所述移动机器人在第一位置P1'和第二位置P2'的移动信息推算出所述移动目标Q在该两帧图像中位置的变化,该移动目标Q在移动机器人的移动过程中具有不确定变化属性。
参阅图4,图4显示为本申请的一具体实施例中自多帧图像中选取至少两帧图像进行对比的流程示意图。所述根据自所述多帧图像中选取的至少两帧图像的比对还包括以下的步骤S210和步骤S220。
在所述步骤S210中,所述处理装置根据所述至少两帧图像的比对检测出疑似目标。其中,所述疑似目标为在所述第一帧图像和第二帧图像中具有不确定变化属性的目标,且所述疑似目标相对所述第一帧图像和第二帧图像的图像重叠区域内的静态目标发生了移动行为。且在一些实施例中,参阅图5,图5显示为本申请的一具体实施例中根据至少两帧图像的比对检测出疑似目标的流程示意图。即根据图5中的步骤S211和步骤S212实现根据所述至少两帧图像的比对检测出疑似目标的步骤。
在步骤S211中,所述处理装置基于所述移动装置在所述至少两帧图像之间的时间内的移动信息,对所述至少两帧图像进行图像补偿;在一些实施例中,所述移动机器人从第一位置到第二位置的移动过程中,由于移动产生了移动信息,此处,所述移动信息为所述移动机器人从第一位置到所述第二位置的相对位移和相对姿态变化,根据所述位置测量装置可测得所述移动信息,且根据所述摄像装置摄取的图像中的单位长度与实际长度的比例关系,以获得第二帧图像和第一帧图像的图像重叠区域内静态目标的投影图像的位置的确定性的相对位移,根据所述移动机器人具有的姿态检测装置获取所述移动机器人的相对姿态变化,进而根据移动信息对所述第一帧图像或第二帧图像进行图像补偿。例如,根据移动信息对所述第一帧图像进行图像补偿或根据移动信息对所述第二帧图像进行图像补偿。
在所述步骤S212中,所述处理装置将经图像补偿后的所述至少两帧图像作相减处理形成差分图像,即将补偿后的第二帧图像与原始的第一帧图像作相减处理形成差分图像,或将补偿后的第一帧图像与原始的第二帧图像作相减处理形成差分图像。当所述第一帧图像和所述第二帧图像的图像重叠区域中,不具有相对静态目标发生移动行为的移动目标时,经补偿后的图像相减结果应为零,关于所述第一帧图像和所述第二帧图像的图像重叠区域的差分图像中应不包含任何特征,即补偿后的第二帧图像与原始的第一帧图像的图像重叠区域相同,或补偿后的第一帧图像与原始的第二帧图像的图像重叠区域相同。当所述第一帧图像和所述第二帧图像的图像重叠区域中,具有相对静态目标发生移动行为的移动目标时,经补偿后的图 像相减结果不为零,关于所述第一帧图像和所述第二帧图像的图像重叠区域的差分图像中包含差异特征,即补偿后的第二帧图像与原始的第一帧图像的图像重叠区域并不相同,存在不能重合的部分,或补偿后的第一帧图像与原始的第二帧图像的图像重叠区域并不相同,存在不能重合的部分。如果仅当经补偿后的两帧图像的图像重叠区域的差分图像中存在所述差异特征,或经补偿后的两帧图像的图像重叠区域不能完全重合时,即判断存在疑似目标,会造成误判。例如,当所述移动机器人处于所述第一位置处,所述重叠视场内存在一个关闭的台灯,而当所述移动机器人处于所述第二位置处,所述重叠视场内的所述台灯被点亮时,按照上述步骤,由此摄取的第一帧图像和第二帧图像的图像重叠区域中存在特异的特征,图像经补偿后的图像重叠区域的差分结果并不能为零,即图像经补偿后的图像重叠区域不能完全重合,因此,仅采用上述方式进行疑似目标的判断,该台灯会被误判为疑似目标。所以,在满足差分图像不为零的情况下,还需根据该差分图像可获得一物体在所述摄像装置摄取第一帧图像和第二帧图像的时间内具有以第一移动轨迹时,判断所述物体为所述疑似目标。即在所述重叠视场内存在对应所述第一移动轨迹的疑似目标。
在一些实施方式中,参阅图6,图6显示为本申请的一实施例中选取的第一帧图像的图像示意图。参阅图7,图7显示为本申请的一实施例中选取的第二帧图像的图像示意图。其中图6显示为所述移动机器人的摄像装置在第一位置摄取的第一帧图像,图7显示为所述移动机器人从所述第一位置移动到所述第二位置时摄取的第二帧图像,根据所述移动机器人的位置测量装置测得所述移动机器人从所述第一位置移动至所述第二位置的移动信息。所述第一帧图像和所述第二帧图像具有如图6和图7中的虚线框所示的图像重叠区域,该图像重叠区域对应摄像装置在所述第一位置和所述第二位置的重叠的视场。且所述摄像装置的重叠视场内包括多个静态目标,例如椅子、窗子、书架、钟、沙发以及床等。且在图6和图7中存在移动的蝴蝶A,所述蝴蝶A关于图6和图7中的重叠视场内的静态目标发生了移动,且所述蝴蝶A在处于所述第一帧图像和所述第二帧图像的图像重叠区域内,现选取所述重叠视场内的一个静态目标来明确所述蝴蝶A的移动,该静态目标可以选择所述椅子、窗子、书架、钟、沙发以及床中的任何一种,由于图6和图7中钟的图像比较完整,且钟的形状规正,容易辨识,且其大小可以较好的表明所述蝴蝶A的移动,所以,在此选择图6和图7中的钟为明确所述蝴蝶A的移动的静态目标。很明显的,从图6,可以看出,蝴蝶A位于所述钟的左侧,而在图7中,蝴蝶A位于钟的右侧,根据所述位置测量装置测得的所述移动信息,对所述第二帧图像进行补偿,所述处理装置将经图像补偿后的所述第二帧图像与原始的第一帧图像作相减处理形成差分图像,该差分图像中存在蝴蝶A这个同时存在于所述第一帧图像和所述第二帧图像,且不能通过相减被消除的特异特征,即判断该蝴蝶A在所述移动机器人从所 述第一位置移动至所述第二位置的过程中,相对重叠视场内的静态目标(该静态目标举例为钟)发生了移动行为(从钟的左侧移动至了钟的右侧),且所述蝴蝶A在所述第一帧图像和所述第二帧图像中呈现的位置具有不确定变化属性。
在另一具体实施例中,移动机器人在第一位置时,通过摄像装置摄取该第一位置处的如图6所示的第一帧图像,移动机器人移动至第二位置时,通过摄像装置摄取该第二位置处的如图7所示的第一帧图像,该第一帧图像和所述第二帧图像具有如图6和图7所示的图像重叠区域,该图像重叠区域对应摄像装置在所述第一位置和所述第二位置的重叠的视场。且所述摄像装置的重叠视场内包括多个静态目标,例如椅子、窗子、书架、钟、沙发以及床等。且在本实施例中存在移动的蝴蝶A,所述蝴蝶A关于图6和图7中的重叠视场内的静态目标发生了移动,且所述蝴蝶A在所述第一帧图像中例如为处于所述床尾的位置,处于所述图像重叠区域内,且所述蝴蝶A在所述第二帧图像中例如处于所述床头且处于所述图像重叠区域外的位置,此时,根据所述位置测量装置测得的所述移动信息,对所述第二帧图像进行补偿,所述处理装置将经图像补偿后的所述第二帧图像与原始的第一帧图像作相减处理形成差分图像,该差分图像中存在蝴蝶A这个同时存在于所述第一帧图像和所述第二帧图像,且不能通过相减被消除的特异特征,即判断该蝴蝶A在所述移动机器人从所述第一位置移动至所述第二位置的过程中,相对重叠视场内的静态目标(该静态目标距离为床)发生了移动行为(从床尾移动至床头),且所述蝴蝶A在所述第一帧图像和所述第二帧图像中呈现的位置具有不确定变化属性。
移动目标的移动一般是连续性的移动,为了防止对一些特殊情况的误判,提高系统的准确性和有效性,在此,需要进一步执行步骤S220,即跟踪所述疑似目标以确认该疑似目标为移动目标。该特殊情况例如为,由于风力的作用,一些悬挂的装饰品或者吊灯会进行一定幅度的较为规律的摆动,这些摆动一般只会是在小范围内的具有规律的来回移动或小幅度的不规律的移动,该移动通常不能形成连续的移动,该由于风力而摆动的物体会在该差分图像中形成差异特征,且存在移动轨迹,根据图5所示方法,该由于风力而摆动的物体会被判断为疑似目标,如果仅采用图5所示方法而确定所述疑似目标即为移动目标,这些由于风力而进行一定幅度摆动的物体会被误判为移动目标。
参阅图8,图8显示为本申请的一具体实施例中跟踪疑似目标以确认疑似目标为移动目标的流程示意图。在一些实施例中,跟踪所述疑似目标以确认移动目标的方法参考图8所示步骤S221和步骤S222。
在步骤S221中,所述处理装置根据疑似目标的跟踪获得疑似目标的移动轨迹;在步骤S222中,若所述疑似目标的移动轨迹为连续时,则将所述疑似目标确认为移动目标。在一些 实施例中,在所述摄像装置摄取的多帧图像中,继续获取所述移动机器人在从第二位置移动至第三位置时摄取的视场范围内的第三帧图像,所述第一帧图像、所述第二帧图像以及所述第三帧图像为依次获取的图像,所述第二帧图像与所述第三帧图像具有图像重叠区域,且同样根据步骤S211和步骤S212对所述第二帧图像和所述第三帧图像进行比对检测,且当第二帧图像和经补偿后的第三帧图像作相减处理的图像重叠区域部分的差分图像不为零,即差分图像存在差异特征时,且该差异特征同时存在于所述第二帧图像和所述第三帧图像,根据该差分图像获得所述疑似目标在所述摄像装置摄取所述第二帧图像和所述第三帧图像的时间中的第二移动轨迹,当所述第一移动轨迹和所述第二移动轨迹为连续时,则将所述疑似目标确认为移动目标。为了保证对移动目标鉴别的准确性,可以依次获取摄像装置在相对连续的时间上的更多帧图像,且新获取的图像与相邻的图像均根据所述步骤S211和步骤S212进行比对检测,进而获得关于所述疑似目标的更多的移动轨迹,以进行疑似目标是否为移动目标的判断,保证判断结果的准确性。例如,对于图6和图7中的蝴蝶A,在获取第三帧图像后,蝴蝶A移动至了床头位置,且根据步骤S211和步骤S212对图7所示的第二帧图像和所述第三帧图像进行比对检测,图7所示的第二帧图像和经补偿后的第三帧图像作相减处理的图像重叠区域部分的差分图像不为零,且存在蝴蝶A这个差异特征,且根据差分图像可获得摄像装置摄取第二帧图像和第三帧图像的时间中关于所述蝴蝶A的第二移动轨迹。即可获得关于所述疑似目标(蝴蝶A)在从钟的左侧移动至钟的右侧,再移动至床头的连续的移动轨迹,即判断所述蝴蝶A为移动目标。
在另一些实施例中,还用以根据所述疑似目标的图像特征对所述疑似目标进行跟踪。其中,所述图像特征包括预设的对应疑似目标的图形特征,或者对疑似目标经图像处理算法而得到的图像特征。其中,所述图像处理算法包括但不限于以下至少一种:灰度处理、锐化处理、轮廓提取、角提取、线提取以及利用经机器学习得到的图像处理算法。利用经机器学习而得到的图像处理算法包括但不限于:神经网络算法、聚类算法等。在所述摄像装置摄取的多帧图像中,继续获取所述移动机器人在从第二位置移动至第三位置时摄取的视场范围内的第三帧图像,所述第一帧图像、所述第二帧图像以及所述第三帧图像为依次获取的图像,所述第二帧图像与所述第三帧图像具有图像重叠区域,根据所述疑似目标的图像特征在所述第三帧图像中进行疑似目标的查找,所述摄像装置在摄取所述第二帧图像和所述第三帧图像的重叠视场范围内存在静态目标,且根据所述移动机器人在所述第二位置和所述第三位置的相对位置信息,以及所述疑似目标在所述第二帧图像和所述第三帧图像中关于一相同静态目标的位置变化,获取所述疑似目标在所述摄像装置摄取所述第二帧图像和所述第三帧图像的时间内的第二移动轨。当所述第一移动轨迹和所述第二移动轨迹为连续时,则将所述疑似目标 确认为移动目标。为了保证对移动目标鉴别的准确性,可以获取更多帧图像,且新获取的图像与相邻的图像均根据所述疑似目标的图像特征对所述疑似目标进行跟踪,进而获得关于所述疑似目标的更多的移动轨迹,以进行疑似目标是否为移动目标的判断,保证判断结果的准确性。
在一些实施方式中,参阅图9,图9显示为本申请的一具体实施例中根据自多帧图像中选取的至少两帧图像的比对的流程示意图。所述根据自所述多帧图像中选取的至少两帧图像的比对包括图9所示的步骤S210'和步骤S220'。
在所述步骤S210'中,所述处理装置根据所述至少两帧图像中对应的特征信息的匹配操作检测出疑似目标。该特征信息包括以下至少一种:特征点、特征线、特征颜色等。参阅图10,图10显示为本申请一具体实施例中根据至少两帧图像中对应的特征信息的匹配操作检测出疑似目标的流程示意图。且通过步骤S211'以及步骤S212'实现所述步骤S210'。
在所述步骤S211'中,所述处理装置分别提取所述至少两帧图像中的各个特征点,将提取的所述至少两帧图像中各个特征点在一参考三维坐标系上进行匹配;其中,所述参考三维坐标系是通过对移动空间进行三维建模形成的,所述参考三维坐标系上标识有移动空间内所有静态目标中各个特征点的坐标。所述特征点举例包括与相应的实体对象对应的角点、端点、拐点等。在一些实施例中,对应一静态目标的特征点的集合可形成该静态目标的外部轮廓,即可通过一些特征点的集合识别出对应的静态目标。用户可预先通过识别条件对移动机器人的移动空间中的所有静态目标进行图像识别,以分别获得关于各所述静态目标的特征点,并将各特征点的坐标标识在所述参考三维坐标系上。用户还可根据一定的格式手动上传各所述静态目标的特征点的坐标,并将其标识在所述参考三维坐标系上。
在所述步骤S212'中,所述处理装置将所述至少两帧图像中未在所述参考三维坐标系上实现匹配的对应特征点所组成的特征点集合检测为疑似目标。当在单一的图像中查找到未与所述参考三维坐标系上对应特征点实现匹配的特征点所组成的相同的特征点集合时,只能说明该特征点集合不属于已标识于所述参考三维坐标系上的任何一个所述静态目标,并不能将该特征点集合作为所述疑似目标,此时有可能是移动空间中新增添的且未预先将其特征点的坐标标识在所述参考三维坐标系上的静态物体,所以需要根据两帧图像的匹配,以确定该未与所述参考三维坐标系上对应特征点实现匹配的特征点所组成的相同的特征点集合发生了移动。所述参考三维坐标系上标识有移动空间中所有所述静态目标的各个特征点的坐标,当所述第一帧图像和第二帧图像中均存在未与所述参考三维坐标系上对应特征点实现匹配的特征点所组成的特征点集合。在一些实施例中,对应该两个图像的未匹配到参考系中特征点的特征点组成的特征点集合为相同或者相似。该特征点集合在所述摄像装置摄取所述第一帧图像 和所述第二帧图像的时间内相对所述静态目标发生了移动行为,进而形成了关于该特征点集合的第一移动轨迹,则该特征点集合检测为疑似目标。
例如在图6和图7中,椅子、窗子、书架、钟、沙发以及床等静态目标都可提前提取特征点标识于所述参考三维坐标系中,而图6和图7的蝴蝶A是新增加的物体,并未标识于所述参考三维坐标系中,所以将提取的所述第一帧图像和第二帧图像中各个特征点在所述参考三维坐标系上进行匹配,可以得到未能与参考三维坐标系上标识的特征点,且该特征点的集合可显示为关于蝴蝶A的特征,例如该特征点的集合显示为蝴蝶A的轮廓特征。且在图6中显示的第一帧图像中,蝴蝶A位于所述钟的左侧,在图7显示的第二帧图像中,蝴蝶A位于所述钟的右侧,即分别通过第一帧图像和第二帧图像与参考三维坐标系上标识的特征点的匹配,可获得关于所述蝴蝶A从钟的左侧移动至钟的右侧的第一移动轨迹。
移动目标的移动一般是连续性的移动,为了防止对一些特殊情况的误判,提高系统的准确性和有效性,在此,需要进一步执行步骤S220',即跟踪所述疑似目标以确认该疑似目标为移动目标。该特殊情况例如为,由于风力的作用,一些悬挂的装饰品或者吊灯会进行一定幅度的较为规律的摆动,这些摆动一般只会是在小范围内的具有规律的来回移动或小幅度的不规律的移动,该移动通常不能形成连续的移动,该由于风力而摆动的物体会在该差分图像中形成差异特征,且存在移动轨迹,根据图10所示的方法,该由于风力而摆动的物体会被判断为疑似目标,如果仅采用图10所示的方法而确定所述疑似目标即为移动目标,这些由于风力而进行一定幅度摆动的物体会被误判为移动目标。在一些实施例中,参阅图11,图11显示为本申请一具体实施例中跟踪疑似目标以确认移动目标的流程示意图。所述跟踪所述疑似目标以确认移动目标的方法参考图11所示的步骤S221'和步骤S222'。
在步骤S221'中,所述处理装置根据疑似目标的跟踪获得疑似目标的移动轨迹;在步骤S222'中,若所述疑似目标的移动轨迹为连续时,则将所述疑似目标确认为移动目标。在一些实施例中,在所述摄像装置摄取的多帧图像中,继续获取所述移动机器人在第三位置处的第三帧图像,所述第一帧图像、所述第二帧图像以及所述第三帧图像为依次获取的图像,所述第二帧图像与所述第三帧图像具有图像重叠区域,且同样根据步骤S211和步骤S212对所述第二帧图像和所述第三帧图像进行比对检测,且当第二帧图像和经补偿后的第三帧图像作相减处理形成的差分图像不为零,即差分图像存在差异特征时,且该差异特征同时存在于所述第二帧图像和所述第三帧图像,根据该差分图像获得所述疑似目标在所述摄像装置摄取所述第二帧图像和所述第三帧图像的时间中的第二移动轨迹,当所述第一移动轨迹和所述第二移动轨迹为连续时,则将所述疑似目标确认为移动目标。为了保证对移动目标鉴别的准确性,可以获取更多帧图像,且新获取的图像与相邻的图像均根据所述步骤S211和步骤S212进行 比对检测,进而获得关于所述疑似目标的更多的移动轨迹,以进行疑似目标是否为移动目标的判断,保证判断结果的准确性。例如,对于图4和图5中的蝴蝶A,在获取第三帧图像后,蝴蝶A移动至了床头位置,且根据步骤S211和步骤S212对图5所示的第二帧图像和所述第三帧图像进行比对检测,图5所示的第二帧图像和经补偿后的第三帧图像作相减处理的图像重叠区域部分的差分图像不为零,且存在蝴蝶A这个差异特征,且根据差分图像可获得摄像装置摄取第二帧图像和第三帧图像的时间中关于所述蝴蝶A的第二移动轨迹。即可获得关于所述疑似目标(蝴蝶A)在从钟的左侧移动至钟的右侧,再移动至床头的连续的移动轨迹,即判断所述蝴蝶A为移动目标。
在另一些实施例中,还用以根据所述疑似目标的图像特征对所述疑似目标进行跟踪。其中,所述图像特征包括预设的对应疑似目标的图形特征,或者对疑似目标经图像处理算法而得到的图像特征。其中,所述图像处理算法包括但不限于以下至少一种:灰度处理、锐化处理、轮廓提取、角提取、线提取以及利用经机器学习得到的图像处理算法。利用经机器学习而得到的图像处理算法包括但不限于:神经网络算法、聚类算法等。在所述摄像装置摄取的多帧图像中,继续获取所述移动机器人在第三位置处的第三帧图像,所述第一帧图像、所述第二帧图像以及所述第三帧图像为依次获取的图像,所述第二帧图像与所述第三帧图像具有图像重叠区域,根据所述疑似目标的图像特征在所述第三帧图像中进行疑似目标的查找,所述摄像装置在摄取所述第二帧图像和所述第三帧图像的重叠视场范围内存在静态目标,且根据所述移动机器人在所述第二位置和所述第三位置的相对位置信息,以及所述疑似目标在所述第二帧图像和所述第三帧图像中关于一相同静态目标的位置变化,获取所述疑似目标在所述摄像装置摄取所述第二帧图像和所述第三帧图像的时间内的第二移动轨。当所述第一移动轨迹和所述第二移动轨迹为连续时,则将所述疑似目标确认为移动目标。为了保证对移动目标鉴别的准确性,可以获取更多帧图像,且新获取的图像与相邻的图像均根据所述疑似目标的图像特征对所述疑似目标进行跟踪,进而获得关于所述疑似目标的更多的移动轨迹,以进行疑似目标是否为移动目标的判断,保证判断结果的准确性。
在一些实施例中,参阅图12,图12显示为本申请的一具体实施例中物体识别流程示意图。所述监控方法还包括步骤S300和步骤S400;在所述步骤S300中,所述处理装置对摄取的图像中的移动目标进行物体识别;物体识别是通过特征匹配或模型识别的方法,对目标物体进行识别。基于特征匹配的物体识别方法的步骤一般为,首先提取物体的图像特征,然后对提取到的特征进行描述,最后对被描述的物体进行特征匹配。所述图像特征包括对应移动目标的图形特征,或者经图像处理算法而得到的图像特征。其中,所述图像处理算法包括但不限于以下至少一种:灰度处理、锐化处理、轮廓提取、角提取、线提取以及利用经机器学 习而得到的图像处理算法。所述移动目标例如包括移动的人或移动的小动物等。在此,所述物体识别是通过经神经网络训练的物体识别器完成的;在某些实施例中,所述神经网络模型可以为卷积神经网络,所述网络结构包括输入层、至少一层隐藏层和至少一层输出层。其中,所述输入层用于接收所拍摄的图像或者经预处理后的图像;所述隐藏层包含卷积层和激活函数层,甚至还可以包含归一化层、池化层、融合层中的至少一种等;所述输出层用于输出标记有物体种类标签的图像。所述连接方式根据各层在神经网络模型中的连接关系而确定。例如,基于数据传输而设置的前后层连接关系,基于各隐藏层中卷积核尺寸而设置与前层数据的连接关系,以及全连接等。人工神经网络的特点和优越性,主要表现在三个方面:第一,具有自学习功能。第二,具有联想存储功能。第三,具有高速寻找优化解的能力。
在所述步骤S400中,所述处理装置根据物体识别的结果输出所述监控信息。所述监控信息包括:图像信息、视频信息、音频信息、文字信息中的一种或多种。所述监控信息可以为包含有所述移动目标的图像照片,也可以是发送至预先设定的通信地址的提示信息,该提示信息例如为APP的提醒信息、短信、邮件、语音播报、警报等。所述提示信息中包含关于所述移动目标的关键词,当所述移动目标的关键词为“人”时,所述提示信息可以为包含“人”这个关键词的APP的提醒信息、短信、邮件、语音播报、警报等,例如为文字或语音的“有人闯入”的信息。且所述预先设定的通信地址至少包括以下中的一种:与所述移动机器人绑定的电话号码、即时通讯账号(微信账号、QQ账号、或facebook账号等)、邮箱地址以及网络平台等。
由于所述物体识别器的训练以及根据该物体识别器进行物体识别都是很复杂的计算过程,需要很大的计算量,对搭载运行的设备的硬件要求非常高,所以,在一些实施例中,参阅图13,图13显示为本申请的一具体实施例中物体识别的流程示意图。所述方法还包括步骤S500和步骤S600;
在步骤S500中,所述处理装置将摄取的图像或包含图像的视频上传至云端服务器以对图像中的移动目标进行物体识别;所述云端服务器中包括经神经网络训练的物体识别器;
在步骤S600中,所述处理装置接收所述云端服务器的物体识别的结果并输出所述监控信息。所述监控信息包括:图像信息、视频信息、音频信息、文字信息中的一种或多种。所述监控信息可以为包含有所述移动目标的图像照片,也可以是发送至预先设定的通信地址的提示信息,该提示信息例如为APP的提醒信息、短信、邮件、语音播报、警报等。所述提示信息中包含关于所述移动目标的关键词,当所述移动目标的关键词为“人”时,所述提示信息可以为包含“人”这个关键词的APP的提醒信息、短信、邮件、语音播报、警报等,例如为文字或语音的“有人闯入”的信息。
将移动目标的检测和识别操作放在所述云端服务器上执行,可以减小本地移动机器人的运行压力,降低对移动机器人的硬件的需求,且提高移动机器人的执行效率,且可充分利用云端服务器的强大的处理功能使方法的执行更为快速、更为精确。
在另一些实施例中,所述移动机器人在通过摄像装置摄取到图像,并根据摄取的图像选取所述第一帧图像和所述第二帧图像后,将所述第一帧图像和所述第二帧图像上传至所述云端服务器进行图像比对,并接收所述云端服务器向所述移动机器人反馈的所述物体识别的结果。又或者,所述移动机器人在通过摄像装置摄取到图像后,直接将摄取到的所述图像上传至所述云端服务器,且在所述云端服务器中根据移动目标的监控方法选取两帧图像,并对选取的两帧图像进行图像比对,以及接收所述云端服务器向所述移动机器人反馈的所述物体识别的结果。所述移动目标的监控方法参阅图1及关于图1的相关描述,在此不加赘述。当将更多的数据处理程序放入云端执行的时候,对所述移动机器人本身的硬件要求将会进一步降低。且当运行程序需要修订和更新的时候,可以较方便的直接对云端中的运行程序进行修订和更新,提高系统更新的效率和灵活性。
本申请的应用于移动机器人的移动目标的监控方法,根据移动机器人在监控区域中移动的状态下通过摄像装置获取的多帧图像中,且从多帧图像中选取存在图像重叠区域的至少两帧图像,并根据图像补偿法或特征匹配法对选取的图像进行比对,并根据比对结果输出包含有相对静态目标发生移动行为的移动目标的监控信息。所述移动目标在所述至少两帧图像中呈现的位置具有不确定变化属性。本申请可以在移动机器人移动过程中精确的识别监控区域中的移动目标,并生成关于该移动目标的监控信息以进行相应的提醒,有效的保证监控区域的安全性。
请参阅图14,显示为本申请的应用于移动机器人的移动目标的监控装置在一具体实施例中的组成示意图。所述移动机器人包括移动装置和摄像装置。所述摄像装置设置于所述移动机器人,用于在移动机器人所在位置摄取视场范围内实体对象并投影至所述移动机器人的行进平面,以得到投影图像;所述摄像装置包括但不限于:鱼眼摄像模块、广角(或非广角)摄像模块、深度摄像模块、集成有光学系统或CCD芯片的摄像模块、集成有光学系统和CMOS芯片的摄像模块等。所述移动机器人包括但不限于:家庭陪伴式移动机器人、清洁机器人、巡逻式移动机器人、擦玻璃的机器人等。所述摄像装置的供电系统可受移动机器人的供电系统控制,当移动机器人上电移动期间,所述摄像装置即开始摄取图像。所述移动机器人至少包括一个摄像装置。所述摄像装置在移动机器人所在位置摄取视场范围内的图像。例如,移动机器人包含一个摄像装置,其设置于所述移动机器人顶部、肩部或背部,且主光轴垂直于所述移动机器人的行进平面、或主光轴与所述移动机器人的行进方向一致,在另一些 实施例中,所述主光轴还可与所述移动机器人所在的所述行进平面呈一定的夹角(比如50°至86°之间的夹角)的设置,以获得更大的摄像范围。在其他实施例中,所述摄像装置的主光轴的设置还可以有很多其他的方式,例如摄像装置可以以一定的规律或者随机的进行转动,此时摄像装置的光轴与移动机器人行进方向的角度则处于时刻变化的状态,所以所述摄像装置的安装方式以及摄像装置的主光轴的状态不以本实施例中的列举为限。又例如所述移动机器人包含二个或更多个摄像装置,例如,双目摄像装置或大于两个的多目摄像装置。在二个或更多个摄像装置中,其中一个摄像装置的主光轴垂直于所述移动机器人的行进平面、或主光轴与所述移动机器人的行进方向一致,在另一些实施例中,所述主光轴还可与所述移动机器人的行进方向在垂直于所述行进平面的方向上呈一定的夹角的设置,以获得更大的摄像范围。在其他实施例中,所述摄像装置的主光轴的设置还可以有很多其他的方式,例如摄像装置可以以一定的规律或者随机的进行转动,此时摄像装置的光轴与移动机器人行进方向的角度则处于时刻变化的状态,所以所述摄像装置的安装方式以及摄像装置的主光轴的状态不以本实施例中的列举为限。
在所述移动机器人为扫地机器人的实施例中,所述移动机器人的移动装置可包括行走机构和行走驱动机构,其中,所述行走机构可设置于所述机器人本体的底部,所述行走驱动机构内置于所述机器人本体内。所述行走机构可例如包括两个直行行走轮和至少一个辅助转向轮的组合,所述两个直行行走轮分别设于机器人本体的底部的相对两侧,所述两个直行行走轮可分别由对应的两个行走驱动机构实现独立驱动,即,左直行行走轮由左行走驱动机构驱动,右直行行走轮由右行走驱动机构驱动。所述的万向行走轮或直行行走轮可具有偏置下落式悬挂系统,以可移动方式紧固,例如以可旋转方式安装到机器人本体上,且接收向下及远离机器人本体偏置的弹簧偏置。所述弹簧偏置允许万向行走轮或直行行走轮以一定的着地力维持与地面的接触及牵引。在实际的应用中,所述至少一个辅助转向轮未参与的情形下,所述两个直行行走轮主要用于前进和后退,而在所述至少一个辅助转向轮参与并与所述两个直行行走轮配合的情形下,就可实现转向和旋转等移动。所述行走驱动机构可包括驱动电机和控制所述驱动电机的控制电路,利用所述驱动电机可驱动所述行走机构中的行走轮实现移动。在具体实现上,所述驱动电机可例如为可逆驱动电机,且所述驱动电机与所述行走轮的轮轴之间还可设置有变速机构。所述行走驱动机构可以可拆卸地安装到机器人本体上,方便拆装和维修。
在本实施例中,所述移动目标的监控装置700包括:至少一个处理器710以及至少一个存储器720。所述处理器710为一种能够进行数值运算、逻辑运算及数据分析的电子设备,其包括但不限于:CPU、GPU、FPGA等。所述存储器720可包括高速随机存取存储器,并 且还可包括非易失性存储器,例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。在某些实施例中,存储器还可以包括远离一个或多个处理器的存储器,例如经由RF电路或外部端口以及通信网络访问的网络附加存储器,其中所述通信网络可以是因特网、一个或多个内部网、局域网(LAN)、广域网(WLAN)、存储局域网(SAN)等,或其适当组合。存储器控制器可控制设备的诸如CPU和外设接口之类的其他组件对存储器的访问。
其中,所述存储器720用于存储由所述摄像装置在所述移动装置的工作状态下摄取的图像,且至少一个程序被存储在所述至少一个存储器720中并被配置为由所述至少一个处理器710执行指令,所述至少一个处理器710执行所述执行指令使得所述监控装置700执行并实现移动目标的监控方法,所述移动目标的监控方法参阅图1及关于图1的相关描述,在此不加赘述。
请参阅图15,显示为本申请的移动机器人在一具体实施例中的组成示意图。所述移动机器人800包括移动装置810、摄像装置820以及监控装置830。
所述摄像装置820设置于所述移动机器人800,用于在移动机器人800所在位置摄取视场范围内实体对象并投影至所述移动机器人的行进平面,以得到投影图像;所述摄像装置820包括但不限于:鱼眼摄像模块、广角(或非广角)摄像模块、深度摄像模块、集成有光学系统或CCD芯片的摄像模块、集成有光学系统和CMOS芯片的摄像模块等。所述移动机器人800包括但不限于:家庭陪伴式移动机器人、清洁机器人、巡逻式移动机器人、擦玻璃的机器人等。所述摄像装置820的供电系统可受移动机器人800的供电系统控制,当移动机器人上电移动期间,所述摄像装置820即开始摄取图像。所述移动机器人800至少包括一个摄像装置820。所述摄像装置820在移动机器人800所在位置摄取视场范围内的图像。例如,移动机器人800包含一个摄像装置820,其设置于所述移动机器人顶部、肩部或背部,且主光轴垂直于所述移动机器人800的行进平面、或主光轴与所述移动机器人800的行进方向一致,在另一些实施例中,所述主光轴还可与所述移动机器人所在的所述行进平面呈一定的夹角(比如80°至86°之间的夹角)的设置,以获得更大的摄像范围。在其他实施例中,所述摄像装置820的主光轴的设置还可以有很多其他的方式,例如摄像装置820可以以一定的规律或者随机的进行转动,此时摄像装置820的光轴与移动机器人行进方向的角度则处于时刻变化的状态,所以所述摄像装置820的安装方式以及摄像装置820的主光轴的状态不以本实施例中的列举为限。又例如所述移动机器人包含二个或更多个摄像装置820,例如,双目摄像装置或大于两个的多目摄像装置。在二个或更多个摄像装置820中,其中一个摄像装置820的主光轴垂直于所述移动机器人的行进平面、或主光轴与所述移动机器人的行进方向一致, 在另一些实施例中,所述主光轴还可与所述移动机器人的行进方向在垂直于所述行进平面的方向上呈一定的夹角的设置,以获得更大的摄像范围。在其他实施例中,所述摄像装置820的主光轴的设置还可以有很多其他的方式,例如摄像装置820可以以一定的规律或者随机的进行转动,此时摄像装置820的光轴与移动机器人800行进方向的角度则处于时刻变化的状态,所以所述摄像装置820的安装方式以及摄像装置820的主光轴的状态不以本实施例中的列举为限。
在所述移动机器人800为扫地机器人的实施例中,所述移动机器人800的移动装置810可包括行走机构和行走驱动机构,其中,所述行走机构可设置于所述机器人本体的底部,所述行走驱动机构内置于所述机器人本体内。所述行走机构可例如包括两个直行行走轮和至少一个辅助转向轮的组合,所述两个直行行走轮分别设于机器人本体的底部的相对两侧,所述两个直行行走轮可分别由对应的两个行走驱动机构实现独立驱动,即,左直行行走轮由左行走驱动机构驱动,右直行行走轮由右行走驱动机构驱动。所述的万向行走轮或直行行走轮可具有偏置下落式悬挂系统,以可移动方式紧固,例如以可旋转方式安装到机器人本体上,且接收向下及远离机器人本体偏置的弹簧偏置。所述弹簧偏置允许万向行走轮或直行行走轮以一定的着地力维持与地面的接触及牵引。在实际的应用中,所述至少一个辅助转向轮未参与的情形下,所述两个直行行走轮主要用于前进和后退,而在所述至少一个辅助转向轮参与并与所述两个直行行走轮配合的情形下,就可实现转向和旋转等移动。所述行走驱动机构可包括驱动电机和控制所述驱动电机的控制电路,利用所述驱动电机可驱动所述行走机构中的行走轮实现移动。在具体实现上,所述驱动电机可例如为可逆驱动电机,且所述驱动电机与所述行走轮的轮轴之间还可设置有变速机构。所述行走驱动机构可以可拆卸地安装到机器人本体上,方便拆装和维修。
所述监控装置830与所述移动装置810和所述摄像装置820通信连接,所述监控装置830包括图像获取单元831、移动目标检测单元832以及信息输出单元833。
所述图像获取单元831与所述移动装置810和所述摄像装置820均通信连接,且所述图像获取单元831在移动装置810的工作状态下获取摄像装置820所摄取的多帧图像。在一些实施例中,所述多帧图像例如为在一个连续的时间段内获取的多帧图像,或者在两个或者多个间断的时间段内获取的多帧图像。
所述移动目标检测单元832用于对自所述多帧图像中选取的至少两帧图像进行比对以检测移动目标;所述至少两帧图像为摄像装置820在部分重叠视场内所摄取的图像。即移动目标检测单元832确定选取第一帧图像和第二帧图像的依据是两帧图像中包含图像重叠区域,且该重叠视场内包含所述静态目标,以对摄像装置视场范围内相对静态目标发生移动行为的 移动目标进行监控。为了保证选取的两帧图像的比对结果的有效性,还可对所述第一帧图像和所述第二帧图像的图像重叠区域的占比进行设定,例如设置所述图像重叠区域分别占所述第一帧图像和所述第二帧图像的比例至少为50%(但并不局限于此,依据实际情况可以设定不同的第一帧图像和所述第二帧图像的比例)。所述第一帧图像和第二帧图像的选取应具有一定的连续性,在保证两者具有一定比例的图像重叠区域的同时,还可根据获取的图像对移动目标的移动轨迹的连续性进行判断。所述移动目标在所述至少两帧图像中呈现的位置具有不确定变化属性。
所述信息输出单元833,用于根据所述至少两帧图像的比对输出包含有相对静态目标发生移动行为的移动目标的监控信息。在一些实施例中,所述静态目标举例但不限于:球、鞋、墙壁、花盆、衣帽、屋顶、灯、树、桌子、椅子、冰箱、电视、沙发、袜、平铺物体、杯子等。其中,平铺物体包括但不限于平铺在地板上的地垫、地砖贴图,以及挂在墙壁上的挂毯、挂画等。在一些实施例中,所述监控信息包括:图像信息、视频信息、音频信息、文字信息中的一种或多种。所述监控信息可以为包含有所述移动目标的图像照片,也可以是发送至预先设定的通信地址的提示信息,该提示信息例如为APP的提醒信息、短信、邮件、语音播报、警报等。所述提示信息中包含关于所述移动目标的关键词,当所述移动目标的关键词为“人”时,所述提示信息可以为包含“人”这个关键词的APP的提醒信息、短信、邮件、语音播报、警报等,例如为文字或语音的“有人闯入”的信息。且所述预先设定的通信地址至少包括以下中的一种:与所述移动机器人绑定的电话号码、即时通讯账号(微信账号、QQ账号、或facebook账号等)、邮箱地址以及网络平台等。
在一些实施例中,所述移动目标检测单元832还包括比对模块和跟踪模块。
所述比对模块用于根据所述至少两帧图像的比对检测出疑似目标;在一些实施例中,所述比对模块根据所述至少两帧图像的比对检测出疑似目标包括:
基于所述移动装置810在所述至少两帧图像之间的时间内的移动信息,对所述至少两帧图像进行图像补偿;即在此实施例中,如图8所示,所述移动目标检测单元832还与所述移动装置810通信连接,以获取所述移动装置810在所述至少两帧图像之间的时间内的移动信息,对所述至少两帧图像进行图像补偿。
将经图像补偿后的所述至少两帧图像作相减处理形成差分图像,从所述差分图像中检测出疑似目标。
所述跟踪模块用于跟踪所述疑似目标以确认移动目标。在一些实施例中,所述跟踪模块跟踪所述疑似目标以确认移动目标包括:
根据对所述比对模块检测出的所述疑似目标的跟踪获得疑似目标的移动轨迹;
若所述疑似目标的移动轨迹为连续时,则将所述疑似目标确认为移动目标。
在另一些实施例中,所述移动目标检测单元832可不与所述移动装置810进行如图8所示的通信连接,也可对所述移动目标进行识别。在此实施例中,所述移动目标检测单元832包括匹配模块和跟踪模块。
所述匹配模块用于根据所述至少两帧图像中对应的特征信息的匹配操作检测出疑似目标。在一些实施例中,所述匹配模块根据所述至少两帧图像中对应的特征信息的匹配操作检测出疑似目标包括:
分别提取所述至少两帧图像中的各个特征点,将提取的所述至少两帧图像中各个特征点在一参考三维坐标上进行匹配;所述参考三维坐标是通过对移动空间进行三维建模形成的,所述参考三维坐标上标识有移动空间内所有静态目标中各个特征点的坐标;
将所述至少两帧图像中未在所述参考三维坐标上实现匹配的对应特征点所组成的特征点集合检测为疑似目标。
所述跟踪模块用于跟踪所述疑似目标以确认移动目标。在一些实施例中,所述跟踪模块跟踪所述疑似目标以确认移动目标包括:
根据对所述匹配模块检测的所述疑似目标的跟踪获得疑似目标的移动轨迹;
若所述疑似目标的移动轨迹为连续时,则将所述疑似目标确认为移动目标。
在一些实施例中,所述移动目标的监控装置830还包括物体识别单元,所述物体识别单元用于对摄取的图像中的移动目标进行物体识别,以供所述信息输出单元根据物体识别的结构输出所述监控信息;所述物体识别单元是通过经神经网络训练形成的。物体识别是通过特征匹配或模型识别的方法,对目标物体进行识别。基于特征匹配的物体识别方法的步骤一般为,首先提取物体的图像特征,然后对提取到的特征进行描述,最后对被描述的物体进行特征匹配。所述图像特征包括对应移动目标的图形特征,或者经图像处理算法而得到的图像特征。其中,所述图像处理算法包括但不限于以下至少一种:灰度处理、锐化处理、轮廓提取、角提取、线提取以及利用经机器学习而得到的图像处理算法。所述移动目标例如包括移动的人或移动的小动物等。在此,所述物体识别是通过神经网络训练形成的;在某些实施例中,所述神经网络模型可以为卷积神经网络,所述网络结构包括输入层、至少一层隐藏层和至少一层输出层。其中,所述输入层用于接收所拍摄的图像或者经预处理后的图像;所述隐藏层包含卷积层和激活函数层,甚至还可以包含归一化层、池化层、融合层中的至少一种等;所述输出层用于输出标记有物体种类标签的图像。所述连接方式根据各层在神经网络模型中的连接关系而确定。例如,基于数据传输而设置的前后层连接关系,基于各隐藏层中卷积核尺寸而设置与前层数据的连接关系,以及全连接等。人工神经网络的特点和优越性,主 要表现在三个方面:第一,具有自学习功能。第二,具有联想存储功能。第三,具有高速寻找优化解的能力。
在一些实施例中,所述监控信息包括:图像信息、视频信息、音频信息、文字信息中的一种或多种。所述监控信息可以为包含有所述移动目标的图像照片,也可以是发送至预先设定的通信地址的提示信息,该提示信息例如为APP的提醒信息、短信、邮件、语音播报、警报等。所述提示信息中包含关于所述移动目标的关键词,当所述移动目标的关键词为“人”时,所述提示信息可以为包含“人”这个关键词的APP的提醒信息、短信、邮件、语音播报、警报等,例如为文字或语音的“有人闯入”的信息。且所述预先设定的通信地址至少包括以下中的一种:与所述移动机器人绑定的电话号码、即时通讯账号(微信账号、QQ账号、或facebook账号等)、邮箱地址以及网络平台等。
由于所述物体识别单元的训练以及根据该物体识别单元进行物体识别都是很复杂的计算过程,需要很大的计算量,对搭载运行的设备的硬件要求非常高,所以,在一些实施例中,所述移动目标的监控装置830还包括收发单元,所述收发单元所述用于将摄取的图像或包含图像的视频上传至云端服务器以对图像中的移动目标进行物体识别以及接收所述云端服务器的物体识别的结果以供所述信息输出单元输出所述监控信息;所述云端服务器中包括经神经网络训练的物体识别器。
将移动目标的检测和识别操作放在所述云端服务器上执行,可以减小本地移动机器人的运行压力,降低对移动机器人的硬件的需求,且提高移动机器人的执行效率,且可充分利用云端服务器的强大的处理功能使方法的执行更为快速、更为精确。
在另一些实施例中,所述移动机器人800在通过摄像装置摄取到图像,并根据摄取的图像选取所述第一帧图像和所述第二帧图像后,将所述第一帧图像和所述第二帧图像上传至所述云端服务器进行图像比对,并接收所述云端服务器向所述移动机器人反馈的所述物体识别的结果。又或者,所述移动机器人在通过摄像装置摄取到图像后,直接将摄取到的所述图像上传至所述云端服务器,且在所述云端服务器中运行所述移动目标检测单元832的内容以选取两帧图像,并对选取的两帧图像进行图像比对,以及接收所述云端服务器向所述移动机器人反馈的所述物体识别的结果。当将更多的数据处理程序放入云端执行的时候,对所述移动机器人本身的硬件要求将会进一步降低。且当运行程序需要修订和更新的时候,可以较方便的直接对云端中的运行程序进行修订和更新,提高系统更新的效率和灵活性。
图15实施例中的移动目标的监控装置830的技术方案与移动目标的监控方法相对应,所述移动目标的监控方法参阅图1及关于图1的相关描述,且所有关于所述移动目标的监控方法的描述均可应用于移动目标的监控装置830的相关实施例中,在此不加赘述。需要说明的 是,应理解图15实施例中装置的各个模块的划分仅仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。且这些模块可以全部以软件通过处理元件调用的形式实现;也可以全部以硬件的形式实现;还可以部分模块通过处理元件调用软件的形式实现,部分模块通过硬件的形式实现。例如,每个模块可以为单独设立的处理元件,也可以集成在上述装置的某一个芯片中实现,此外,也可以以程序代码的形式存储于上述装置的存储器中,由上述装置的某一个处理元件调用并执行以上接收模块的功能。其它模块的实现与之类似。此外这些模块全部或部分可以集成在一起,也可以独立实现。这里所述的处理元件可以是一种集成电路,具有信号的处理能力。在实现过程中,上述方法的各步骤或以上各个模块可以通过处理器元件中的硬件的集成逻辑电路或者软件形式的指令完成。
例如,以上这些模块可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个特定集成电路(ApplicationSpecificIntegratedCircuit,简称ASIC),或,一个或多个微处理器(digitalsingnalprocessor,简称DSP),或,一个或者多个现场可编程门阵列(FieldProgrammableGateArray,简称FPGA)等。再如,当以上某个模块通过处理元件调度程序代码的形式实现时,该处理元件可以是通用处理器,例如中央处理器(CentralProcessingUnit,简称CPU)或其它可以调用程序代码的处理器。再如,这些模块可以集成在一起,以片上系统(system-on-a-chip,简称SOC)的形式实现。
另外需要说明的是,通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到本申请的部分或全部可借助软件并结合必需的通用硬件平台来实现。基于这样的理解,本申请还提供一种计算机存储介质,所述存储介质存储有至少一个程序,所述程序在被调用时执行前述的任一所述的移动目标的监控方法,所述移动目标的监控方法参阅图1及关于图1的相关描述,在此不加赘述。需说明的是,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。
基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可包括其上存储有机器可执行指令的一个或多个机器可读介质,这些指令在由诸如计算机、计算机网络或其他电子设备等一个或多个机器执行时可使得该一个或多个机器根据本申请的实施例来执行操作。机器可读介质可包括,但不限于,能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、软盘、光盘、CD-ROM(紧致盘-只读存储器)、磁光盘、ROM(只读存储器)、RAM(随机存取存储器)、EPROM(可擦除可编程只读存储器)、EEPROM(电可擦除可编程只读存储器)、磁卡或光卡、闪存、电载波信号、电信信号以及软件分发 介质或适于存储机器可执行指令的其他类型的介质/机器可读介质。需要说明的是,所述计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,计算机可读介质不包括电载波信号和电信信号。其中,所述存储介质可位于移动机器人也可位于第三方服务器中,如位于提供某应用商城的服务器中。在此对具体应用商城不做限制,如小米应用商城、华为应用商城、苹果应用商城等。
请参阅图16,显示为本申请的监控系统在一具体实施例中的组成示意图。所述监控系统900包括云端服务器910和移动机器人920,所述移动机器人920与所述云端服务器910连接。所述移动机器人920包括摄像装置和移动装置。所述移动机器人920在图16所示的三维空间中进行移动,且图16所示的移动空间中具有蝴蝶A这一移动目标。所述移动机器人920在通过移动装置移动的过程中,拍摄多帧图像,且自所述多帧图像中选取两帧图像进行比对,输出包含有相对静态目标发生移动行为的移动目标。该选取的两帧图像例如为图6所示的第一帧图像和图7所示的第二帧图像,所述第一帧图像和所述第二帧图像具有如图6和图7中的虚线框所示的图像重叠区域,该图像重叠区域对应摄像装置在所述第一位置和所述第二位置的重叠的视场。且所述摄像装置的重叠视场内包括多个静态目标,例如椅子、窗子、书架、钟、沙发以及床等。且在图6中,蝴蝶A位于钟的左侧,在图7中,蝴蝶A位于钟的右侧,且将所述第一帧图像和所述第二帧图像进行图像比对,以获得关于所述静态目标移动的疑似目标。且该图像比对的方法例如参阅图5及关于图5的相关描述,即所述移动机器人920的移动装置在所述摄像装置摄取第一帧图像和第二帧图像的过程中获取的所述移动机器人920的移动信息,根据所述移动信息对所述第一帧图像或所述第二帧图像进行补偿,且经补偿后的图像与另一帧的原图像进行差分相减,以获得具有地域移动轨迹(从钟的左侧移动至钟的右侧)的疑似目标(蝴蝶A)。且该图像比对的方法又例如参阅图10所示的特征比对的方式,提取所述第一帧图像和第二帧图像中的各个特征点,将提取的所述两帧图像中各个特征点在参考三维坐标系上进行匹配,所述参考三维坐标系为对图16所示的移动空间进行建模后形成的。所述移动机器人920的处理装置将所述两帧图像中未在所述参考三维坐标系上实现匹配的对应特征点所组成的特征点集合检测为疑似目标。且根据后续连续获取的多帧图像,对所述疑似目标进行跟踪,以获得关于所述疑似目标的连续的移动轨迹,且确认该疑似目标(蝴蝶A)为移动目标(蝴蝶A)。进而根据图8或图11所示的方法步骤,对所述疑似目标进行跟踪,以获得疑似目标的移动轨迹,且所述疑似目标的移动轨迹为连续时,则将所述疑似目标确认为移动目标,且在该实施例中,通过对作为疑似目标的蝴蝶A的跟踪,获得关于蝴蝶A的连续的移动轨迹,例如从钟的左侧移动至钟的右侧,进而移动至床头处。
由于所述物体识别器的训练以及根据该物体识别器进行物体识别都是很复杂的计算过程,需要很大的计算量,对搭载运行的设备的硬件要求非常高。在一些实施例中,所述移动机器人920将包含所述移动目标的图像或视频上传至所述云端服务器910,且所述移动机器人920根据接收自所述云端服务器910的对所述移动目标的物体识别的结果输出监控信息。所述物体识别过程,例如为通过预设的图像特征对包含移动目标的图像和视频进行识别,所述图像特征例如为图像点特征、图像线特征或图像颜色特征等。在本实施例中,可以通过对所述蝴蝶A的轮廓的检测,识别所述移动目标为蝴蝶A。所述移动机器人920可接收所述云端服务器910反馈的物体识别结果,并根据该物体识别的结果向指定的客户端输出监控信息。所述客户端例如为智能手机、平板电脑、智能手表等具有智能数据处理功能的电子设备。又或者,所述云端服务器910在对物体识别以获得物体识别的结果后,直接根据该物体识别的结果,向所述指定的客户端输出所述监控信息。本实施例中,所述云端服务器910对接收的所述图像或包含图像的视频进行物体识别,且将物体识别结果反馈至所述移动机器人920。
将移动目标的检测和识别操作放在所述云端服务器上执行,可以减小本地移动机器人的运行压力,降低对移动机器人的硬件的需求,且提高移动机器人的执行效率,且可充分利用云端服务器的强大的处理功能使方法的执行更为快速、更为精确。
当将更多的数据处理程序放入云端执行的时候,对所述移动机器人本身的硬件要求将会进一步降低。且当运行程序需要修订和更新的时候,可以较方便的直接对云端中的运行程序进行修订和更新,提高系统更新的效率和灵活性。所以,在另一些实施例中,所述移动机器人920在移动状态下获取多帧图像并将所述多帧图像上传至所述云端服务器910;且所述云端服务器910自所述多帧图像中选取两帧图像进行比对,该选取的两帧图像例如为图6所示的第一帧图像和图7所示的第二帧图像,所述第一帧图像和所述第二帧图像具有如图6和图7中的虚线框所示的图像重叠区域,该图像重叠区域对应摄像装置在所述第一位置和所述第二位置的重叠的视场。且所述摄像装置的重叠视场内包括多个静态目标,例如椅子、窗子、书架、钟、沙发以及床等。且在图6中,蝴蝶A位于钟的左侧,在图7中,蝴蝶A位于钟的右侧,且将所述第一帧图像和所述第二帧图像进行图像比对,以获得关于所述静态目标移动的疑似目标。且该图像比对的方法例如参阅图5及关于图5的相关描述,即所述移动机器人920的移动装置在所述摄像装置摄取第一帧图像和第二帧图像的过程中获取的所述移动机器人920的移动信息,根据所述移动信息对所述第一帧图像或所述第二帧图像进行补偿,且经补偿后的图像与另一帧的原图像进行差分相减,以获得具有地域移动轨迹(从钟的左侧移动至钟的右侧)的疑似目标(蝴蝶A)。且该图像比对的方法又例如参阅图10所示的特征比对的方式,提取所述第一帧图像和第二帧图像中的各个特征点,将提取的所述两帧图像中各个 特征点在参考三维坐标系上进行匹配,所述参考三维坐标系为对图16所示的移动空间进行建模后形成的。所述移动机器人920的处理装置将所述两帧图像中未在所述参考三维坐标系上实现匹配的对应特征点所组成的特征点集合检测为疑似目标。且根据后续连续获取的多帧图像,对所述疑似目标进行跟踪,以获得关于所述疑似目标的连续的移动轨迹,且确认该疑似目标(蝴蝶A)为移动目标(蝴蝶A)。图5所示的对经补偿后的图像进行差分相减的方式或根据图10所示的特征比对的方式将所述第一帧图像和第二帧图像进行比对,将所述第一帧图像和第二帧图像进行比对,以获得关于所述静态目标(举例为钟)发生了移动的疑似目标(蝴蝶A),进而根据图8或图11所示的方法步骤,对所述疑似目标进行跟踪,以获得疑似目标的移动轨迹,且所述疑似目标的移动轨迹为连续时,则将所述疑似目标确认为移动目标,且在该实施例中,通过对作为疑似目标的蝴蝶A的跟踪,获得关于蝴蝶A的连续的移动轨迹,例如从钟的左侧移动至钟的右侧,进而移动至床头处。且所述云端服务器910对所述移动目标进行物体识别后,将识别结果反馈至所述移动机器人920。
所述移动机器人920可接收所述云端服务器910反馈的物体识别结果,并根据该物体识别的结果向指定的客户端输出监控信息。所述客户端例如为智能手机、平板电脑、智能手表等具有智能数据处理功能的电子设备。又或者,所述云端服务器910在对物体识别以获得物体识别的结果后,直接根据该物体识别的结果,向所述指定的客户端输出所述监控信息。
在另一些实施例中,所述移动机器人920还通过移动网络与指定的客户端通信,所述客户端例如为智能手机、平板电脑、智能手表等具有智能数据处理功能的电子设备。
本申请的移动目标的监控方法、装置、监控系统及移动机器人,根据移动机器人在监控区域中移动的状态下通过摄像装置获取的多帧图像中,且从多帧图像中选取存在图像重叠区域的至少两帧图像,并根据图像补偿法或特征匹配法对选取的图像进行比对,并根据比对结果输出包含有相对静态目标发生移动行为的移动目标的监控信息。所述移动目标在所述至少两帧图像中呈现的位置具有不确定变化属性。本申请可以在移动机器人移动过程中精确的识别监控区域中的移动目标,并生成关于该移动目标的监控信息以进行相应的提醒,有效的保证监控区域的安全性。
上述实施例仅例示性说明本申请的原理及其功效,而非用于限制本申请。任何熟悉此技术的人士皆可在不违背本申请的精神及范畴下,对上述实施例进行修饰或改变。因此,举凡所属技术领域中具有通常知识者在未脱离本申请所揭示的精神与技术思想下所完成的一切等效修饰或改变,仍应由本申请的权利要求所涵盖。

Claims (23)

  1. A method for monitoring a moving target, applied to a mobile robot, the mobile robot comprising a mobile device and a camera device, wherein the method for monitoring a moving target comprises the following steps:
    acquiring multiple frames of images captured by the camera device while the mobile device is in an operating state;
    outputting, according to a comparison of at least two frames of images selected from the multiple frames of images, monitoring information of a moving target that moves relative to a static target; wherein the at least two frames of images are images captured by the camera device within partially overlapping fields of view, and the position presented by the moving target in the at least two frames of images has an uncertain-change attribute.
  2. The method for monitoring a moving target according to claim 1, wherein the comparison of the at least two frames of images selected from the multiple frames of images comprises the following steps:
    detecting a suspected target according to the comparison of the at least two frames of images;
    tracking the suspected target to confirm the moving target.
  3. The method for monitoring a moving target according to claim 2, wherein detecting a suspected target according to the comparison of the at least two frames of images comprises the following steps:
    performing image compensation on the at least two frames of images based on movement information of the mobile device during the time between the at least two frames of images;
    subtracting the image-compensated at least two frames of images from each other to form a difference image, and detecting the suspected target from the difference image.
  4. The method for monitoring a moving target according to claim 1, wherein the comparison of the at least two frames of images selected from the multiple frames of images comprises the following steps:
    detecting a suspected target according to a matching operation on corresponding feature information in the at least two frames of images;
    tracking the suspected target to confirm the moving target.
  5. The method for monitoring a moving target according to claim 4, wherein detecting a suspected target according to a matching operation on corresponding feature information in the at least two frames of images comprises the following steps:
    extracting the feature points of each of the at least two frames of images, and matching the extracted feature points of the at least two frames of images on a reference three-dimensional coordinate system; the reference three-dimensional coordinate system is formed by three-dimensionally modeling the moving space, and the coordinates of the feature points of all static targets in the moving space are marked on the reference three-dimensional coordinate system;
    detecting, as the suspected target, the set of corresponding feature points in the at least two frames of images that are not matched on the reference three-dimensional coordinate system.
  6. The method for monitoring a moving target according to claim 2 or 4, wherein tracking the suspected target to confirm the moving target comprises the following steps:
    obtaining the movement trajectory of the suspected target by tracking the suspected target;
    if the movement trajectory of the suspected target is continuous, confirming the suspected target as the moving target.
  7. The method for monitoring a moving target according to claim 1, further comprising the following steps:
    performing object recognition on the moving target in the captured images; the object recognition is performed by an object recognizer trained by a neural network;
    outputting the monitoring information according to the result of the object recognition.
  8. The method for monitoring a moving target according to claim 1, further comprising the following steps:
    uploading the captured images, or a video containing the images, to a cloud server to perform object recognition on the moving target in the images; the cloud server comprises an object recognizer trained by a neural network;
    receiving the object recognition result from the cloud server and outputting the monitoring information.
  9. The method for monitoring a moving target according to claim 1, wherein the monitoring information comprises one or more of: image information, video information, audio information, and text information.
  10. A monitoring device for a moving target, applied to a mobile robot, the mobile robot comprising a mobile device and a camera device, wherein the monitoring device for a moving target comprises:
    at least one processor;
    at least one memory, configured to store images captured by the camera device while the mobile device is in an operating state;
    at least one program, wherein the at least one program is stored in the at least one memory and configured to provide execution instructions for the at least one processor, and the at least one processor executes the instructions so that the monitoring device performs and implements the method for monitoring a moving target according to any one of claims 1 to 9.
  11. A monitoring device for a moving target, applied to a mobile robot, the mobile robot comprising a mobile device and a camera device, wherein the monitoring device comprises:
    an image acquisition unit, configured to acquire multiple frames of images captured by the camera device while the mobile device is in an operating state;
    a moving target detection unit, configured to compare at least two frames of images selected from the multiple frames of images to detect a moving target; wherein the at least two frames of images are images captured by the camera device within partially overlapping fields of view, and the position presented by the moving target in the at least two frames of images has an uncertain-change attribute;
    an information output unit, configured to output, according to the comparison of the at least two frames of images, monitoring information of a moving target that moves relative to a static target.
  12. The monitoring device for a moving target according to claim 11, wherein the moving target detection unit comprises:
    a comparison module, configured to detect a suspected target according to the comparison of the at least two frames of images;
    a tracking module, configured to track the suspected target to confirm the moving target.
  13. The monitoring device for a moving target according to claim 12, wherein the comparison module detecting a suspected target according to the comparison of the at least two frames of images comprises:
    performing image compensation on the at least two frames of images based on movement information of the mobile device during the time between the at least two frames of images;
    subtracting the image-compensated at least two frames of images from each other to form a difference image, and detecting the suspected target from the difference image.
  14. The monitoring device for a moving target according to claim 12, wherein the tracking module tracking the suspected target to confirm the moving target comprises:
    obtaining the movement trajectory of the suspected target by tracking the suspected target;
    if the movement trajectory of the suspected target is continuous, confirming the suspected target as the moving target.
  15. The monitoring device for a moving target according to claim 11, wherein the moving target detection unit comprises:
    a matching module, configured to detect a suspected target according to a matching operation on corresponding feature information in the at least two frames of images;
    a tracking module, configured to track the suspected target to confirm the moving target.
  16. The monitoring device for a moving target according to claim 15, wherein the matching module detecting a suspected target according to a matching operation on corresponding feature information in the at least two frames of images comprises:
    extracting the feature points of each of the at least two frames of images, and matching the extracted feature points of the at least two frames of images on a reference three-dimensional coordinate system; the reference three-dimensional coordinate system is formed by three-dimensionally modeling the moving space, and the coordinates of the feature points of all static targets in the moving space are marked on the reference three-dimensional coordinate system;
    detecting, as the suspected target, the set of corresponding feature points in the at least two frames of images that are not matched on the reference three-dimensional coordinate system.
  17. The monitoring device for a moving target according to claim 15, wherein the tracking module tracking the suspected target to confirm the moving target comprises:
    obtaining the movement trajectory of the suspected target by tracking the suspected target;
    if the movement trajectory of the suspected target is continuous, confirming the suspected target as the moving target.
  18. The monitoring device for a moving target according to claim 11, further comprising: an object recognition unit, configured to perform object recognition on the moving target in the captured images, so that the information output unit outputs the monitoring information according to the result of the object recognition; the object recognition unit is formed by neural network training.
  19. The monitoring device for a moving target according to claim 11, further comprising: a transceiver unit, configured to upload the captured images, or a video containing the images, to a cloud server for object recognition of the moving target in the images, and to receive the object recognition result from the cloud server so that the information output unit outputs the monitoring information; the cloud server comprises an object recognizer trained by a neural network.
  20. The monitoring device for a moving target according to claim 11, wherein the monitoring information comprises one or more of: image information, video information, audio information, and text information.
  21. A mobile robot, comprising:
    a mobile device, configured to control the mobile robot to move according to received control instructions;
    a camera device, configured to capture multiple frames of images while the mobile device is in an operating state;
    and the monitoring device according to any one of claims 11 to 20.
  22. A monitoring system, comprising:
    a cloud server;
    a mobile robot, connected to the cloud server;
    wherein the mobile robot performs the following steps: acquiring multiple frames of images while moving; outputting, according to a comparison of at least two frames of images selected from the multiple frames of images, detection information of a moving target that moves relative to a static target; uploading the captured images, or a video containing the images, to the cloud server according to the detection information; and outputting monitoring information according to the object recognition result received from the cloud server.
  23. A monitoring system, comprising:
    a cloud server;
    a mobile robot, connected to the cloud server;
    wherein the mobile robot performs the following steps: acquiring multiple frames of images while moving and uploading the multiple frames of images to the cloud server;
    and the cloud server performs the following steps: outputting, according to a comparison of at least two frames of images selected from the multiple frames of images, detection information of a moving target that moves relative to a static target; and outputting, according to recognition of the moving target in the multiple frames of images, the object recognition result of the moving target to the mobile robot, for the mobile robot to output monitoring information.
PCT/CN2018/119293 2018-12-05 2018-12-05 移动目标的监控方法、装置、监控系统及移动机器人 WO2020113452A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/CN2018/119293 WO2020113452A1 (zh) 2018-12-05 2018-12-05 移动目标的监控方法、装置、监控系统及移动机器人
CN202210665232.3A CN115086606A (zh) 2018-12-05 2018-12-05 移动目标的监控方法、装置、系统、存储介质及机器人
CN201880002424.8A CN109691090A (zh) 2018-12-05 2018-12-05 移动目标的监控方法、装置、监控系统及移动机器人
US16/522,717 US10970859B2 (en) 2018-12-05 2019-07-26 Monitoring method and device for mobile target, monitoring system and mobile robot
US17/184,833 US20210201509A1 (en) 2018-12-05 2021-02-25 Monitoring method and device for mobile target, monitoring system and mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/119293 WO2020113452A1 (zh) 2018-12-05 2018-12-05 移动目标的监控方法、装置、监控系统及移动机器人

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/522,717 Continuation US10970859B2 (en) 2018-12-05 2019-07-26 Monitoring method and device for mobile target, monitoring system and mobile robot

Publications (1)

Publication Number Publication Date
WO2020113452A1 true WO2020113452A1 (zh) 2020-06-11

Family

ID=66191860

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/119293 WO2020113452A1 (zh) 2018-12-05 2018-12-05 移动目标的监控方法、装置、监控系统及移动机器人

Country Status (3)

Country Link
US (2) US10970859B2 (zh)
CN (2) CN115086606A (zh)
WO (1) WO2020113452A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001296A (zh) * 2020-08-20 2020-11-27 广东电网有限责任公司清远供电局 变电站立体化安全监控方法、装置,服务器及存储介质
CN112738204A (zh) * 2020-12-25 2021-04-30 国网湖南省电力有限公司 一种变电站二次屏门安全措施自动布置系统与方法
CN112819770A (zh) * 2021-01-26 2021-05-18 中国人民解放军陆军军医大学第一附属医院 碘对比剂过敏监测方法及系统

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686981B (zh) * 2019-10-17 2024-04-12 华为终端有限公司 画面渲染方法、装置、电子设备及存储介质
CN110930455B (zh) * 2019-11-29 2023-12-29 深圳市优必选科技股份有限公司 定位方法、装置、终端设备及存储介质
US11023730B1 (en) * 2020-01-02 2021-06-01 International Business Machines Corporation Fine-grained visual recognition in mobile augmented reality
CN111152266B (zh) * 2020-01-09 2021-07-30 安徽宇润道路保洁服务有限公司 一种清洁机器人的控制方法及系统
CN111152226B (zh) * 2020-01-19 2021-09-07 吉利汽车研究院(宁波)有限公司 一种机器人工作轨迹规划方法及系统
CN113763416A (zh) * 2020-06-02 2021-12-07 璞洛泰珂(上海)智能科技有限公司 基于目标检测的自动贴标与跟踪方法、装置、设备和介质
CN111738134A (zh) * 2020-06-18 2020-10-02 北京市商汤科技开发有限公司 获取客流数据的方法、装置、设备及介质
CN111797728A (zh) * 2020-06-19 2020-10-20 浙江大华技术股份有限公司 一种运动物体的检测方法、装置、计算设备及存储介质
CN111783892B (zh) * 2020-07-06 2021-10-01 广东工业大学 一种机器人指令识别方法、装置及电子设备和存储介质
CN111862154B (zh) * 2020-07-13 2024-03-01 中移(杭州)信息技术有限公司 机器人视觉跟踪方法、装置、机器人及存储介质
EP4200792A1 (en) 2020-08-21 2023-06-28 Mobeus Industries, Inc. Integrating overlaid digital content into displayed data via graphics processing circuitry
CN112215871B (zh) * 2020-09-29 2023-04-21 武汉联影智融医疗科技有限公司 一种基于机器人视觉的移动目标追踪方法及装置
CN112287794B (zh) * 2020-10-22 2022-09-16 中国电子科技集团公司第三十八研究所 一种视频图像自动识别目标的编号一致性管理方法
CN113066050B (zh) * 2021-03-10 2022-10-21 天津理工大学 一种基于视觉的空投货台航向姿态解算方法
CN113098948B (zh) * 2021-03-26 2023-04-28 华南理工大学广州学院 一种检测人脸口罩的消毒控制方法及系统
US11481933B1 (en) * 2021-04-08 2022-10-25 Mobeus Industries, Inc. Determining a change in position of displayed digital content in subsequent frames via graphics processing circuitry
US11586835B2 (en) 2021-04-30 2023-02-21 Mobeus Industries, Inc. Integrating overlaid textual digital content into displayed data via graphics processing circuitry using a frame buffer
US11601276B2 (en) 2021-04-30 2023-03-07 Mobeus Industries, Inc. Integrating and detecting visual data security token in displayed data via graphics processing circuitry using a frame buffer
US11483156B1 (en) 2021-04-30 2022-10-25 Mobeus Industries, Inc. Integrating digital content into displayed data on an application layer via processing circuitry of a server
US11682101B2 (en) 2021-04-30 2023-06-20 Mobeus Industries, Inc. Overlaying displayed digital content transmitted over a communication network via graphics processing circuitry using a frame buffer
US11475610B1 (en) 2021-04-30 2022-10-18 Mobeus Industries, Inc. Controlling interactivity of digital content overlaid onto displayed data via graphics processing circuitry using a frame buffer
US11477020B1 (en) 2021-04-30 2022-10-18 Mobeus Industries, Inc. Generating a secure random number by determining a change in parameters of digital content in subsequent frames via graphics processing circuitry
US11562153B1 (en) 2021-07-16 2023-01-24 Mobeus Industries, Inc. Systems and methods for recognizability of objects in a multi-layer display
CN114513608A (zh) * 2022-02-21 2022-05-17 深圳市美科星通信技术有限公司 移动侦测方法、装置及电子设备
CN115100595A (zh) * 2022-06-27 2022-09-23 深圳市神州云海智能科技有限公司 一种安全隐患检测方法、系统、计算机设备和存储介质
CN115633321B (zh) * 2022-12-05 2023-05-05 北京数字众智科技有限公司 一种无线通信网络监控方法及系统
CN116703975B (zh) * 2023-06-13 2023-12-15 武汉天进科技有限公司 一种用于无人机的智能化目标图像跟踪方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303732A (zh) * 2008-04-11 2008-11-12 西安交通大学 基于车载单目相机的运动目标感知与告警方法
WO2009031751A1 (en) * 2007-09-05 2009-03-12 Electronics And Telecommunications Research Institute Video object extraction apparatus and method
CN102074022A (zh) * 2011-01-10 2011-05-25 南京理工大学 一种基于红外图像的弱小运动目标检测方法
CN105374031A (zh) * 2015-10-14 2016-03-02 江苏美的清洁电器股份有限公司 基于机器人的家庭安防数据处理方法及系统
CN105447888A (zh) * 2015-11-16 2016-03-30 中国航天时代电子公司 一种基于有效目标判断的无人机机动目标检测方法
CN106056625A (zh) * 2016-05-25 2016-10-26 中国民航大学 一种基于地理同名点配准的机载红外运动目标检测方法
CN108806142A (zh) * 2018-06-29 2018-11-13 炬大科技有限公司 一种无人安保系统,方法及扫地机器人

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930072B (zh) * 2010-07-28 2013-01-02 重庆大学 基于多特征融合的红外弱小运动目标航迹起始方法
CN103149939B (zh) * 2013-02-26 2015-10-21 北京航空航天大学 一种基于视觉的无人机动态目标跟踪与定位方法
CN103336947B (zh) * 2013-06-21 2016-05-04 上海交通大学 基于显著性和结构性的红外运动小目标识别方法
US9996976B2 (en) * 2014-05-05 2018-06-12 Avigilon Fortress Corporation System and method for real-time overlay of map features onto a video feed
CN103984315A (zh) * 2014-05-15 2014-08-13 成都百威讯科技有限责任公司 一种家用多功能智能机器人
US20150329217A1 (en) * 2014-05-19 2015-11-19 Honeywell International Inc. Aircraft strike zone display
CN106534614A (zh) * 2015-09-10 2017-03-22 南京理工大学 移动摄像机下运动目标检测的快速运动补偿方法
TWI543611B (zh) * 2015-11-20 2016-07-21 晶睿通訊股份有限公司 影像接合方法及具有影像接合功能的攝影系統
FR3047103B1 (fr) * 2016-01-26 2019-05-24 Thales Procede de detection de cibles au sol et en mouvement dans un flux video acquis par une camera aeroportee
US20190078876A1 (en) * 2016-03-29 2019-03-14 Kyb Corporation Road surface displacement detection device and suspension control method
US10510232B2 (en) * 2016-08-12 2019-12-17 Amazon Technologies, Inc. Parcel theft deterrence for A/V recording and communication devices
US20180150718A1 (en) * 2016-11-30 2018-05-31 Gopro, Inc. Vision-based navigation system
CN106682619B (zh) * 2016-12-28 2020-08-11 上海木木聚枞机器人科技有限公司 一种对象跟踪方法及装置
CN106846367B (zh) * 2017-02-15 2019-10-01 北京大学深圳研究生院 一种基于运动约束光流法的复杂动态场景的运动物体检测方法
CN107092926A (zh) * 2017-03-30 2017-08-25 哈尔滨工程大学 基于深度学习的服务机器人物体识别算法
CN107133969B (zh) * 2017-05-02 2018-03-06 中国人民解放军火箭军工程大学 一种基于背景反投影的移动平台运动目标检测方法
CN107256560B (zh) * 2017-05-16 2020-02-14 北京环境特性研究所 一种红外弱小目标检测方法及其系统
CN107352032B (zh) * 2017-07-14 2024-02-27 广东工业大学 一种人流量数据的监控方法及无人机
US10788584B2 (en) * 2017-08-22 2020-09-29 Michael Leon Scott Apparatus and method for determining defects in dielectric materials and detecting subsurface objects
US10796142B2 (en) * 2017-08-28 2020-10-06 Nutech Ventures Systems for tracking individual animals in a group-housed environment
US10509413B2 (en) * 2017-09-07 2019-12-17 GM Global Technology Operations LLC Ground reference determination for autonomous vehicle operations
US10657833B2 (en) * 2017-11-30 2020-05-19 Intel Corporation Vision-based cooperative collision avoidance
US10737395B2 (en) * 2017-12-29 2020-08-11 Irobot Corporation Mobile robot docking systems and methods
WO2020041734A1 (en) * 2018-08-24 2020-02-27 Bossa Nova Robotics Ip, Inc. Shelf-viewing camera with multiple focus depths

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009031751A1 (en) * 2007-09-05 2009-03-12 Electronics And Telecommunications Research Institute Video object extraction apparatus and method
CN101303732A (zh) * 2008-04-11 2008-11-12 西安交通大学 基于车载单目相机的运动目标感知与告警方法
CN102074022A (zh) * 2011-01-10 2011-05-25 南京理工大学 一种基于红外图像的弱小运动目标检测方法
CN105374031A (zh) * 2015-10-14 2016-03-02 江苏美的清洁电器股份有限公司 基于机器人的家庭安防数据处理方法及系统
CN105447888A (zh) * 2015-11-16 2016-03-30 中国航天时代电子公司 一种基于有效目标判断的无人机机动目标检测方法
CN106056625A (zh) * 2016-05-25 2016-10-26 中国民航大学 一种基于地理同名点配准的机载红外运动目标检测方法
CN108806142A (zh) * 2018-06-29 2018-11-13 炬大科技有限公司 一种无人安保系统,方法及扫地机器人

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI, DONGCHENG ET AL.: "Moving Object Detection Based on Scene Knowledge", JOURNAL OF JILIN UNIVERSITY ENGINEERING AND TECHNOLOGY EDITION, vol. 43, no. S1, 31 March 2013 (2013-03-31), ISSN: 1671-5497, DOI: 20190808142814 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001296A (zh) * 2020-08-20 2020-11-27 广东电网有限责任公司清远供电局 变电站立体化安全监控方法、装置,服务器及存储介质
CN112001296B (zh) * 2020-08-20 2024-03-29 广东电网有限责任公司清远供电局 变电站立体化安全监控方法、装置,服务器及存储介质
CN112738204A (zh) * 2020-12-25 2021-04-30 国网湖南省电力有限公司 一种变电站二次屏门安全措施自动布置系统与方法
CN112819770A (zh) * 2021-01-26 2021-05-18 中国人民解放军陆军军医大学第一附属医院 碘对比剂过敏监测方法及系统
CN112819770B (zh) * 2021-01-26 2022-11-22 中国人民解放军陆军军医大学第一附属医院 碘对比剂过敏监测方法及系统

Also Published As

Publication number Publication date
US10970859B2 (en) 2021-04-06
US20200184658A1 (en) 2020-06-11
US20210201509A1 (en) 2021-07-01
CN109691090A (zh) 2019-04-26
CN115086606A (zh) 2022-09-20

Similar Documents

Publication Publication Date Title
WO2020113452A1 (zh) 移动目标的监控方法、装置、监控系统及移动机器人
CN109890573B (zh) 移动机器人的控制方法、装置、移动机器人及存储介质
CN108290294B (zh) 移动机器人及其控制方法
WO2019232806A1 (zh) 导航方法、导航系统、移动控制系统及移动机器人
JP6039611B2 (ja) 可動式ロボットシステム
EP2571660B1 (en) Mobile human interface robot
US9400503B2 (en) Mobile human interface robot
WO2019232803A1 (zh) 移动控制方法、移动机器人及计算机存储介质
CA2928262C (en) Mobile robot system
WO2019090833A1 (zh) 定位系统、方法及所适用的机器人
CN106575437B (zh) 信息处理装置、信息处理方法以及程序
WO2019232804A1 (zh) 软件更新方法、系统、移动机器人及服务器
WO2011146259A2 (en) Mobile human interface robot
KR20180118219A (ko) 이동형 원격현전 로봇과의 인터페이싱
Chatterjee et al. Vision based autonomous robot navigation: algorithms and implementations
CN113116224B (zh) 机器人及其控制方法
WO2019001237A1 (zh) 一种移动电子设备以及该移动电子设备中的方法
AU2017201879B2 (en) Mobile robot system
US20220291686A1 (en) Self-location estimation device, autonomous mobile body, self-location estimation method, and program
US20240036582A1 (en) Robot navigation
US20240135686A1 (en) Method and electronic device for training neural network model by augmenting image representing object captured by multiple cameras
KR20240057297A (ko) 신경망 모델을 학습시키는 방법 및 전자 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18942165

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: FESTSTELLUNG EINES RECHTSVERLUSTS NACH REGEL 112(1) EPUE (EPA FORM 1205A VOM 19/10/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18942165

Country of ref document: EP

Kind code of ref document: A1