CN111510667A - Monitoring method, device and storage medium - Google Patents

Monitoring method, device and storage medium

Info

Publication number
CN111510667A
CN111510667A (application number CN201910101030.4A)
Authority
CN
China
Prior art keywords
monitoring
video
monitored
user
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910101030.4A
Other languages
Chinese (zh)
Inventor
鲍亮
汤进举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecovacs Robotics Suzhou Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ecovacs Robotics Suzhou Co Ltd filed Critical Ecovacs Robotics Suzhou Co Ltd
Priority to CN201910101030.4A
Publication of CN111510667A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/183 Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • H04N 7/185 Closed-circuit television [CCTV] systems for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 Control of position or course using optical position detecting means
    • G05D 1/0242 Control of position or course using non-visible light signals, e.g. IR or UV signals
    • G05D 1/0246 Control of position or course using a video camera in combination with image processing means
    • G05D 1/0257 Control of position or course using a radar
    • G05D 1/0259 Control of position or course using magnetic or electromagnetic means
    • G05D 1/0263 Control of position or course using magnetic strips
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Signal Processing (AREA)
  • Electromagnetism (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Alarm Systems (AREA)

Abstract

Embodiments of the present application provide a monitoring method, a monitoring device, and a storage medium. In these embodiments, a self-moving device is combined with video monitoring on the basis of a pre-established environment map, so that mobile monitoring is realized and the flexibility of video monitoring is improved. In addition, an active early-warning mode is adopted: video clips are automatically identified in, and intercepted from, the monitoring video and provided for the user to view. The user can directly view the clips intercepted from the monitoring video without watching the entire video, which makes it convenient to notice problem frames in time and better satisfies the user's monitoring requirements.

Description

Monitoring method, device and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a monitoring method, a monitoring device, and a storage medium.
Background
As people's safety awareness improves, home monitoring systems have become increasingly popular. In existing home monitoring schemes, a camera is generally installed at a fixed position in the home, and the monitoring field of view and angle are adjusted through operations such as rotating and tilting the camera.
Disclosure of Invention
Aspects of the present application provide a monitoring method, a monitoring device, and a storage medium, which are used to implement mobile monitoring, improve monitoring flexibility and the efficiency with which a user views a monitoring video, and meet the user's monitoring requirements.
An embodiment of the present application provides a monitoring method applicable to a self-moving device. The method includes: receiving a monitoring instruction, wherein the monitoring instruction indicates that video monitoring is to be performed on an object to be monitored; determining the position of the object to be monitored based on a pre-established environment map; moving to a position matched with the object to be monitored and performing video monitoring on the object to be monitored; and intercepting a video clip from the monitoring video and providing the video clip for a user to view.
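The device-side flow above (receive instruction, look up the map, move, monitor, clip) can be sketched as follows. This is an illustrative sketch only: the `EnvironmentMap` and `SelfMovingDevice` names, the stubbed motion control, and the motion-flag clipping rule are assumptions for the example, not structures defined by the patent.

```python
# Hypothetical sketch of the device-side monitoring flow. Map lookup,
# motion control, and the camera are stubbed out.

class EnvironmentMap:
    """Pre-established map: object/region name -> (x, y) position."""
    def __init__(self, entries):
        self.entries = dict(entries)

    def locate(self, target):
        return self.entries.get(target)


class SelfMovingDevice:
    def __init__(self, env_map):
        self.env_map = env_map
        self.position = (0.0, 0.0)

    def handle_monitor_instruction(self, target):
        pos = self.env_map.locate(target)      # determine position from the map
        if pos is None:
            raise ValueError(f"unknown monitoring target: {target}")
        self.move_to(pos)                      # move to a matching position
        video = self.record_video(target)      # video monitoring
        return self.clip_segments(video)       # intercept clips for the user

    def move_to(self, pos):
        self.position = pos                    # real motion control omitted

    def record_video(self, target):
        # Stub: a real device would stream camera frames here.
        return [{"frame": i, "target": target, "motion": i == 2}
                for i in range(4)]

    def clip_segments(self, video):
        # Keep only frames flagged as containing an event.
        return [f for f in video if f["motion"]]


env_map = EnvironmentMap({"refrigerator": (3.0, 1.5), "living room": (5.0, 4.0)})
device = SelfMovingDevice(env_map)
clips = device.handle_monitor_instruction("refrigerator")
```

The same skeleton covers the server-upload variant: `clip_segments` would be replaced by an upload call, with clipping done remotely.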
The embodiment of the present application further provides a monitoring method, which is applicable to a self-moving device, and the method includes: receiving a monitoring instruction, wherein the monitoring instruction indicates that video monitoring is performed on an object to be monitored; determining the position of the object to be monitored based on a pre-established environment map; moving to a position matched with the object to be monitored, and carrying out video monitoring on the object to be monitored; and uploading the monitoring video to a server, so that the server intercepts video clips from the monitoring video and provides the video clips for a user to view.
The embodiment of the present application further provides a monitoring method, which is applicable to a self-moving device, and the method includes: when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map; moving to a set space region and autonomously moving in the set space region; in the process of autonomous movement in a set space region, carrying out video monitoring on the set space region; and intercepting a video clip from the monitoring video and providing the video clip for a user to view.
The embodiment of the present application further provides a monitoring method, which is applicable to a self-moving device, and the method includes: when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map; moving to a set space region and autonomously moving in the set space region; in the process of autonomous movement in a set space region, carrying out video monitoring on the set space region; and uploading the monitoring video to a server, so that the server intercepts video clips from the monitoring video and provides the video clips for a user to view.
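The event-triggered patrol variant above (when a setting event occurs, move to a set region, roam it while recording, then hand the footage to the server) can be sketched as below. The region-to-waypoints map, the event name, and the `upload` callback are illustrative assumptions, not structures from the patent.

```python
# Minimal sketch of the event-triggered patrol variant.

REGION_MAP = {"living room": [(1, 1), (1, 3), (3, 3), (3, 1)]}  # patrol waypoints

def on_setting_event(event, region, upload):
    """When a setting event occurs, patrol the set region and upload footage."""
    waypoints = REGION_MAP.get(region)         # position of the set region
    if waypoints is None:
        return None
    footage = []
    for wp in waypoints:                       # autonomous movement in region
        footage.append({"at": wp, "event": event})  # monitoring while moving
    upload(footage)                            # server side intercepts clips
    return footage

uploaded = []
footage = on_setting_event("user_left_home", "living room", uploaded.append)
```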
An embodiment of the present application further provides a monitoring method applicable to a server. The method includes: receiving a monitoring video sent by a self-moving device, wherein the monitoring video is obtained by the self-moving device performing video monitoring on an object to be monitored; and analyzing the monitoring video, intercepting a video clip from the monitoring video, and providing the video clip for a user to view.
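As one hedged illustration of the server-side analysis step, the sketch below represents a received monitoring video as a list of per-frame brightness values and cuts out spans where consecutive frames change sharply. The threshold, padding, and frame representation are assumptions for the example; the patent does not specify a particular analysis algorithm.

```python
# Toy clip extraction: find frames with large inter-frame change and
# intercept a small padded span around each, merging overlaps.

def extract_clips(frames, threshold=10, padding=1):
    """Return (start, end) index pairs around frames with large change."""
    hot = [i for i in range(1, len(frames))
           if abs(frames[i] - frames[i - 1]) > threshold]
    clips = []
    for i in hot:
        start, end = max(0, i - padding), min(len(frames) - 1, i + padding)
        if clips and start <= clips[-1][1]:        # merge overlapping spans
            clips[-1] = (clips[-1][0], end)
        else:
            clips.append((start, end))
    return clips

# A static scene with one burst of motion around frames 4-6:
frames = [100, 101, 100, 100, 140, 150, 100, 100]
```

On this input the two change points merge into a single clip spanning indices 3 to 7, which is what the user would be notified to view.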
An embodiment of the present application further provides a monitoring method applicable to a terminal device. The method includes: displaying an environment map of the environment in which a self-moving device is located, the environment map including spatial regions in the environment and objects present in the spatial regions; in response to a user's selection operation on the environment map, determining the object or spatial region selected by the user as the object to be monitored; sending a monitoring instruction to the self-moving device, wherein the monitoring instruction instructs the self-moving device to perform video monitoring on the object to be monitored; and receiving a notification message and, according to the notification message, notifying the user to view the video clip intercepted from the monitoring video.
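The terminal-device side above reduces to mapping a user selection on the environment map to a monitoring instruction. The sketch below shows one plausible shape for that mapping; the map contents, instruction fields, and object names are illustrative assumptions, not the patent's message format.

```python
# Turn a user tap on the environment map (a region name or an object
# name) into a monitoring instruction for the self-moving device.

ENV_MAP = {
    "bedroom": ["bed", "wardrobe"],
    "living room": ["television", "sofa"],
}

def build_instruction(selection):
    """Map a user selection (region or object name) to an instruction."""
    if selection in ENV_MAP:
        return {"type": "monitor", "target_kind": "region",
                "target": selection}
    for region, objects in ENV_MAP.items():
        if selection in objects:
            return {"type": "monitor", "target_kind": "object",
                    "target": selection, "region": region}
    raise ValueError(f"{selection!r} is not on the environment map")

sent = []
sent.append(build_instruction("sofa"))       # user taps an object
sent.append(build_instruction("bedroom"))    # user taps a whole region
```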
An embodiment of the present application provides a self-moving device, including a device body on which one or more processors, a communication component, and one or more memories storing computer instructions are arranged. The one or more processors execute the computer instructions to: receive a monitoring instruction through the communication component, wherein the monitoring instruction indicates that video monitoring is to be performed on an object to be monitored; determine the position of the object to be monitored based on a pre-established environment map; control the self-moving device to move to a position matched with the object to be monitored and perform video monitoring on the object to be monitored; and intercept a video clip from the monitoring video and provide the video clip for a user to view.
Accordingly, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: receiving a monitoring instruction, wherein the monitoring instruction instructs a self-moving device to perform video monitoring on an object to be monitored; determining the position of the object to be monitored based on a pre-established environment map; controlling the self-moving device to move to a position matched with the object to be monitored and perform video monitoring on the object to be monitored; and intercepting a video clip from the monitoring video and providing the video clip for a user to view.
An embodiment of the present application further provides a self-moving device, including a device body on which one or more processors, a communication component, and one or more memories storing computer instructions are arranged. The one or more processors execute the computer instructions to: receive a monitoring instruction through the communication component, wherein the monitoring instruction indicates that video monitoring is to be performed on an object to be monitored; determine the position of the object to be monitored based on a pre-established environment map; control the self-moving device to move to a position matched with the object to be monitored and perform video monitoring on the object to be monitored; and upload the monitoring video to a server through the communication component, so that the server intercepts a video clip from the monitoring video and provides the video clip for a user to view.
Accordingly, embodiments of the present application further provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: receiving a monitoring instruction, wherein the monitoring instruction instructs a self-moving device to perform video monitoring on an object to be monitored; determining the position of the object to be monitored based on a pre-established environment map; controlling the self-moving device to move to a position matched with the object to be monitored and perform video monitoring on the object to be monitored; and uploading the monitoring video to a server, so that the server intercepts a video clip from the monitoring video and provides the video clip for a user to view.
An embodiment of the present application further provides a self-moving device, including a device body on which one or more processors and one or more memories storing computer instructions are arranged. The one or more processors execute the computer instructions to: when a setting event occurs, determine the position of a set spatial region based on a pre-established environment map; control the self-moving device to move to the set spatial region and move autonomously within it; perform video monitoring on the set spatial region while the self-moving device moves autonomously within it; and intercept a video clip from the monitoring video and provide the video clip for a user to view.
Accordingly, embodiments of the present application further provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map; controlling a self-moving device to move to the set spatial region and move autonomously within it; performing video monitoring on the set spatial region while the self-moving device moves autonomously within it; and intercepting a video clip from the monitoring video and providing the video clip for a user to view.
An embodiment of the present application further provides a self-moving device, including a device body on which one or more processors, a communication component, and one or more memories storing computer instructions are arranged. The one or more processors execute the computer instructions to: when a setting event occurs, determine the position of a set spatial region based on a pre-established environment map; control the self-moving device to move to the set spatial region and move autonomously within it; perform video monitoring on the set spatial region while the self-moving device moves autonomously within it; and upload the monitoring video to a server through the communication component, so that the server intercepts a video clip from the monitoring video and provides the video clip for a user to view.
Accordingly, embodiments of the present application further provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map; controlling a self-moving device to move to the set spatial region and move autonomously within it; performing video monitoring on the set spatial region while the self-moving device moves autonomously within it; and uploading the monitoring video to a server, so that the server intercepts a video clip from the monitoring video and provides the video clip for a user to view.
An embodiment of the present application further provides a server, including one or more processors, a communication component, and one or more memories storing computer instructions. The one or more processors execute the computer instructions to: receive, through the communication component, a monitoring video sent by a self-moving device, wherein the monitoring video is obtained by the self-moving device performing video monitoring on an object to be monitored; and analyze the monitoring video, intercept a video clip from the monitoring video, and provide the video clip for a user to view.
Accordingly, embodiments of the present application further provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: receiving a monitoring video sent by a self-moving device, wherein the monitoring video is obtained by the self-moving device performing video monitoring on an object to be monitored; and analyzing the monitoring video, intercepting a video clip from the monitoring video, and providing the video clip for a user to view.
An embodiment of the present application further provides a terminal device, including one or more processors, a display, a communication component, and one or more memories storing computer instructions. The one or more processors execute the computer instructions to: display, on the display, an environment map of the environment in which a self-moving device is located, the environment map including spatial regions in the environment and objects present in the spatial regions; in response to a user's selection operation on the environment map, determine the object or spatial region selected by the user as the object to be monitored; send a monitoring instruction to the self-moving device through the communication component, wherein the monitoring instruction instructs the self-moving device to perform video monitoring on the object to be monitored; and receive a notification message through the communication component and, according to the notification message, notify the user to view the video clip intercepted from the monitoring video.
Accordingly, embodiments of the present application further provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: displaying an environment map of the environment in which a self-moving device is located, the environment map including spatial regions in the environment and objects present in the spatial regions; in response to a user's selection operation on the environment map, determining the object or spatial region selected by the user as the object to be monitored; sending a monitoring instruction to the self-moving device, wherein the monitoring instruction instructs the self-moving device to perform video monitoring on the object to be monitored; and receiving a notification message and, according to the notification message, notifying the user to view the video clip intercepted from the monitoring video.
In the embodiments of the present application, a self-moving device is combined with video monitoring on the basis of a pre-established environment map, so that mobile monitoring is realized and the flexibility of video monitoring is improved. In addition, an active early-warning mode is adopted: video clips are automatically identified in, and intercepted from, the monitoring video and provided for the user to view. The user can directly view the clips intercepted from the monitoring video without watching the entire video, which makes it convenient to notice problem frames in time and better satisfies the user's monitoring requirements.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1a is a schematic flow chart of a monitoring method according to an exemplary embodiment of the present application;
FIG. 1b is a schematic diagram of a monitoring method according to an exemplary embodiment of the present application;
FIG. 1c is a schematic diagram of another monitoring method according to an exemplary embodiment of the present application;
FIG. 1d is a schematic diagram of yet another monitoring method according to an exemplary embodiment of the present application;
FIG. 1e is a schematic flow chart of another monitoring method provided by an exemplary embodiment of the present application;
FIG. 1f is a schematic diagram of an environment map provided by an exemplary embodiment of the present application;
FIG. 1g is a schematic diagram of another environment map provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic flow chart of yet another monitoring method provided by an exemplary embodiment of the present application;
FIG. 3a is a schematic structural diagram of a monitoring system according to an exemplary embodiment of the present application;
FIG. 3b is a schematic flow chart of a monitoring method applicable to a monitoring system according to an exemplary embodiment of the present application;
FIG. 3c is a schematic flow chart of another monitoring method applicable to a monitoring system according to an exemplary embodiment of the present application;
FIG. 4 is a schematic structural diagram of a self-moving device according to an exemplary embodiment of the present application;
FIG. 5 is a schematic structural diagram of a robot according to an exemplary embodiment of the present application;
FIG. 6 is a schematic structural diagram of a server according to an exemplary embodiment of the present application;
FIG. 7 is a schematic structural diagram of a terminal device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In existing home monitoring schemes, monitoring is not flexible enough and cannot well meet users' monitoring requirements. To address this technical problem, in the embodiments of the present application a self-moving device is combined with video monitoring on the basis of a pre-established environment map to realize mobile monitoring, which helps improve the flexibility of video monitoring. In addition, an active early-warning mode is adopted: video clips are automatically identified in, and intercepted from, the monitoring video and provided for the user to view. The user can directly view the clips intercepted from the monitoring video without watching the entire video, which improves the timeliness and efficiency of viewing and better satisfies the user's monitoring requirements.
It should be noted that, in the monitoring scheme according to the embodiment of the present application, the self-moving device is combined with video monitoring and active warning on the basis of the pre-established environment map, and the monitoring scheme may have different implementation manners on the basis. In some exemplary embodiments of the present application, such as the embodiments shown in fig. 1 a-1 e and fig. 2 described below, the monitoring scheme may be applied to, and implemented primarily by, the self-moving device. In other exemplary embodiments of the present application, such as the embodiments shown in fig. 3a to 3c described below, the monitoring scheme may be applied to a monitoring system, and is mainly implemented by a self-moving device and a server in the monitoring system. In the following embodiments, the detailed description will be given with reference to the accompanying drawings.
Before the embodiments of the present application are explained, the terms "self-moving device", "object to be monitored", and "environment map" are explained. These explanations apply to all embodiments of the present application and will not be repeated in the embodiments below.
In the embodiments of the present application, the self-moving device may be any mechanical device capable of moving autonomously, to a high degree, through the space of its environment, for example a robot or a purifier. The robot may include a sweeping robot, a window-cleaning robot, a family companion robot, a greeting robot, and the like.
In the embodiments of the present application, the object to be monitored may be an object or a spatial region in the environment where the self-moving device is located. In different application scenarios, the environment of the self-moving device may differ, and accordingly the objects or spatial regions in that environment may also differ. Taking a home environment as an example, the object to be monitored may be an object in the home such as a washing machine, refrigerator, air conditioner, television, water heater, pet, child, elderly person, or even a plant; of course, it may also be one or several spatial regions in the home, such as the kitchen, the master bedroom, or the living room. Taking a supermarket or shopping mall as an example, the object to be monitored may be a particular commodity or shelf, or a commodity area such as a fresh-food area, a fruit and vegetable area, a drinking-water area, a men's clothing area, or a women's clothing area.
In the embodiments of the present application, the self-moving device establishes an environment map in advance; the environment map stores the spatial regions in the environment where the self-moving device is located and the objects present in those regions. For example, while moving, the self-moving device may collect environment information through its sensors (such as a laser radar, an infrared sensor, a bumper sensor, or a camera) and build the environment map from the collected information. In the process of building the environment map, the device can autonomously partition the environment into different spatial regions by combining image recognition technologies such as deep learning models with other algorithms, further recognize the objects present in each spatial region, and reflect both the spatial regions and the objects in the map to form the environment map.
For example, taking a home environment, the self-moving device may divide the home into spatial regions such as a bedroom, a living room, and a bathroom, and may recognize that objects such as a bed, a wardrobe, and a bedside table exist in the bedroom, objects such as a television, a bookshelf, a tea table, and a sofa exist in the living room, and objects such as a toilet, a shower, and a bathtub exist in the bathroom; the spatial regions and the objects are all embodied in the environment map. The environment map includes not only the boundary of each spatial region but also identifying information for each region, such as its name, number, and description, so that the user can identify and distinguish the regions. For the objects present in each spatial region, the environment map may include not only their positions but also identifying information such as their names, numbers, and descriptions, so that the user can identify and distinguish the objects.
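The description above implies a concrete map structure: regions with boundaries and labels, each holding labeled objects with positions. One plausible in-memory shape is sketched below; the field names, coordinates, and the `locate_object` helper are illustrative assumptions, not defined by the patent.

```python
# A toy environment map: regions carry a boundary plus identifying
# information, and each object carries a position.

environment_map = {
    "regions": {
        "bedroom": {
            "boundary": [(0, 0), (0, 4), (4, 4), (4, 0)],
            "label": {"name": "bedroom", "number": 1},
            "objects": {
                "bed": {"position": (1.0, 2.0)},
                "wardrobe": {"position": (3.5, 0.5)},
            },
        },
        "living room": {
            "boundary": [(4, 0), (4, 6), (10, 6), (10, 0)],
            "label": {"name": "living room", "number": 2},
            "objects": {
                "television": {"position": (9.0, 3.0)},
                "sofa": {"position": (6.0, 1.0)},
            },
        },
    },
}

def locate_object(env_map, name):
    """Find which region an object is in and where it sits."""
    for region, info in env_map["regions"].items():
        if name in info["objects"]:
            return region, info["objects"][name]["position"]
    return None
```

A lookup like this is all the device needs to turn "monitor the sofa" into a target position.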
Optionally, the user's terminal device runs related software of the self-moving device, for example an App, and can connect to the self-moving device through this software in various ways (Bluetooth, USB, a home LAN together with a cloud server, etc.) to view the map. Further, the user may correct the spatial partitioning performed autonomously by the self-moving device, for example renaming a region the device labeled "bedroom" to "living room". Likewise, the user may modify an object autonomously recognized by the device, for example changing a recognized "television" to a "smart speaker".
In this way, the self-moving device can acquire environmental information through its sensors and, by combining deep learning models with image recognition technology, establish a detailed and complete environment map. The environment map includes not only each spatial region (e.g., bedroom, living room, etc.) but also the identification and position of each recognized object (e.g., bed, table and chairs, sofa, wardrobe, pet). On the basis of this environment map, the self-moving device, together with its camera or camera system, can perform video monitoring on a spatial region in its environment or on an object within such a region.
Fig. 1a is a schematic flowchart of a monitoring method according to an exemplary embodiment of the present application. As shown in fig. 1a, the method comprises the following steps:
10a. The self-moving device receives a monitoring instruction, the monitoring instruction instructing it to perform video monitoring on an object to be monitored.
11a. The self-moving device determines the position of the object to be monitored based on a pre-established environment map.
12a. The self-moving device moves to a position adapted to the object to be monitored and performs video monitoring on it.
13a. The self-moving device intercepts video clips from the surveillance video and provides them for the user to view.
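The four steps 10a to 13a can be sketched as a single flow. Every helper below (the map lookup, the camera, the clip detector) is a toy stand-in for the behaviour the text describes, not a real robot API.

```python
def handle_monitoring_instruction(instruction, env_map, camera, detector):
    """Steps 10a-13a in one flow: locate, move, monitor, intercept."""
    target = instruction["target"]       # identification info carried in the instruction
    position = env_map[target]           # step 11a: look up the environment map
    path = [position]                    # step 12a: move to the adapted position
    video = camera(target)               # step 12a: record the surveillance video
    clips = [seg for seg in video if detector(seg)]  # step 13a: intercept clips
    return path, clips

# Toy stand-ins: the "camera" yields labelled segments; the "detector"
# keeps only segments matching the set condition.
env_map = {"refrigerator": (2.0, 1.0)}
camera = lambda target: ["idle", "child_nearby", "idle"]
detector = lambda seg: seg == "child_nearby"
path, clips = handle_monitoring_instruction(
    {"target": "refrigerator"}, env_map, camera, detector)
```

In practice the "move" step would be a navigation task and the "detector" an image-recognition model; the sketch only shows how the instruction, the map, and the clip interception fit together.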
In this embodiment, when a user needs to monitor an object or a spatial region in the environment where the self-moving device is located, the user may send a monitoring instruction to the self-moving device, instructing it to perform video monitoring on the object to be monitored. Optionally, the monitoring instruction may carry identification information of the object to be monitored, the identification information being used to identify that object; it may be, for example, the object's name or ID. After receiving the monitoring instruction, the self-moving device determines from it that video monitoring needs to be performed on the object to be monitored, determines the position of the object based on a pre-established environment map, moves to a position adapted to the object, and performs video monitoring on the object to be monitored.
The position adapted to the object to be monitored varies with the object. For example, if the object to be monitored is a physical object in the environment where the self-moving device is located, the adapted position may be a position close to or near that object. For another example, if the object to be monitored is a spatial region in the environment, the adapted position may be the entrance of the region or some position within it.
In this embodiment, the self-moving device has a video capture function in addition to its autonomous movement function; for example, it is provided with a visual sensor such as a camera. Therefore, after moving to the position adapted to the object to be monitored, the self-moving device can use its video capture function to perform video monitoring on the object.
Furthermore, the self-moving device can analyze the surveillance video, intercept relevant video clips from it, and provide them for the user to view. Optionally, the self-moving device may determine whether a video clip satisfying a set condition appears in the surveillance video; if so, it can intercept that clip from the surveillance video and provide it for the user to view. The user therefore does not need to check the surveillance video in real time or from time to time, nor watch all of it, but only the clips intercepted by the self-moving device, which improves the efficiency and timeliness of reviewing surveillance video.
It should be noted that the set condition may differ according to the application scenario and the monitoring requirement, and can be flexibly configured in the embodiments of the present application. For example, the condition may be set picture content, or certain action features.
Taking set picture content as an example, the self-moving device can analyze whether a video clip containing the set picture content appears in the surveillance video; if it does, a video clip containing that content can be intercepted from the surveillance video. For instance, if the object to be monitored is a washing machine, the set picture content may be a child appearing beside the washing machine. For another instance, if the object to be monitored is a living room, the set picture content may be the appearance in the living room of a person other than family members.
In addition, feature analysis can be performed on the surveillance video based on a deep learning algorithm, and when a video segment with corresponding action features is identified in the surveillance video, that segment can be intercepted. For example, when action features such as a fire breaking out are identified in the surveillance video based on a deep learning algorithm, a video segment containing those features can be intercepted and provided for the user to view, so that the user can raise an alarm in time. For another example, when more entertaining action features (such as playful and funny actions of children) are identified in the surveillance video, a video segment containing them can be intercepted and provided for the user's amusement.
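The interception step above can be sketched as follows: frames whose content matches the set condition are grouped into contiguous runs, and each run becomes one clip of (start, end) frame indices, padded with a little context. The per-frame predicate stands in for the image-recognition or deep-learning model the text mentions; the padding value is an assumption for illustration.

```python
def intercept_clips(frames, matches, pad=1):
    """Return (start, end) index pairs covering runs of matching frames,
    padded by `pad` frames of context on each side."""
    clips, start = [], None
    for i, frame in enumerate(frames):
        if matches(frame):
            if start is None:
                start = i                       # a matching run begins
        elif start is not None:
            # the run ended at i-1; emit it with context padding
            clips.append((max(0, start - pad), min(len(frames) - 1, i - 1 + pad)))
            start = None
    if start is not None:                       # run extends to the last frame
        clips.append((max(0, start - pad), len(frames) - 1))
    return clips

# Toy example: 1 marks a frame containing the set picture content.
frames = [0, 0, 1, 1, 0, 0, 0, 1, 0]
clips = intercept_clips(frames, lambda f: f == 1)
print(clips)  # → [(1, 4), (6, 8)]
```

The same shape works whether the condition is set picture content or an action feature: only the `matches` predicate changes.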
In the embodiments of the present application, the manner of issuing the monitoring instruction is not limited; any manner of sending the monitoring instruction to the self-moving device is applicable. The following illustrates how the monitoring instruction may be issued in several application scenarios:
In the application scenario shown in fig. 1b, the user is located in the environment where the self-moving device is located, and the self-moving device has a voice recognition function. When the user needs to monitor a certain object or spatial region in the environment, a monitoring instruction can be issued to the self-moving device by voice, instructing it to perform video monitoring on the object to be monitored. For example, the user may say to the self-moving device, "please go to the kitchen and monitor the use status of the refrigerator". The self-moving device receives the voice monitoring instruction, determines the position of the object to be monitored (e.g., the refrigerator) based on the pre-established environment map (for example, in the kitchen region near the kitchen door), moves to a position near the object, and performs video monitoring on it; it then analyzes the surveillance video and, when a video clip satisfying the set condition appears, intercepts that clip and notifies the user to view it.
In the application scenario shown in fig. 1c, the environment where the self-moving device is located includes an audio playing device, which may be any device capable of playing an audio signal, for example a smart speaker, a television, a smart phone, or a tablet computer. The user can set the monitoring instruction and its sending time on the audio playing device in advance, so that the instruction is issued to the self-moving device through the audio playing device in the same environment. Optionally, if the audio playing device is provided with corresponding physical keys, the user can set the monitoring instruction and its playing time through those keys. Alternatively, if the audio playing device supports voice recognition, the user may set the monitoring instruction and its playing time by voice; or, if the audio playing device has a touch screen, the user may set them through the touch screen. When the playing time arrives, the audio playing device plays the monitoring instruction set by the user. The self-moving device receives the monitoring instruction issued by the audio playing device, determines the position of the object to be monitored based on the pre-established environment map, moves to the vicinity or the interior of the object, and performs video monitoring on it; it then analyzes the surveillance video and, when a video clip satisfying the set condition appears, intercepts that clip and notifies the user to view it.
In the application scenario shown in fig. 1d, the user binds the terminal device used by the user to the self-moving device, so that regardless of whether the user is in the environment where the self-moving device is located, a monitoring instruction can be sent to the self-moving device through the bound terminal device. In particular, when the user is not in that environment, the monitoring instruction can be sent remotely through the terminal device. The terminal device may be a smart phone, a smart watch, a smart bracelet, or the like. Taking home monitoring as an example, while on a business trip or at work, the user can send a monitoring instruction to the self-moving device at home through a carried terminal device such as a smart phone, smart watch, or smart bracelet, instructing it to monitor the object to be monitored. The self-moving device receives the monitoring instruction sent by the bound terminal device, determines the position of the object to be monitored based on the pre-established environment map, moves to the vicinity or the interior of the object, and performs video monitoring on it; it then analyzes the surveillance video and, when a video clip satisfying the set condition appears, intercepts that clip and notifies the user to view it.
In the application scenario shown in fig. 1d, an App for controlling the self-moving device may be installed on the terminal device, and the user may send the monitoring instruction to the self-moving device through the App. In an optional embodiment, the App may provide a monitoring function: when the function is triggered, the App displays a monitoring page to the user, where the user may input identification information of the object to be monitored and click a send button to issue a monitoring instruction carrying that identification information to the self-moving device. In addition, the App may also present to the user an environment map of the environment where the self-moving device is located and allow the user to set the object to be monitored based on the map; the following embodiment describes this environment-map-based monitoring scheme in detail.
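A monitoring instruction of the kind the App sends might look like the following. The JSON field names are illustrative assumptions; the text only requires that the instruction carry identification information of the object to be monitored.

```python
import json

def build_monitoring_instruction(object_id, object_name):
    """Assemble a monitoring instruction carrying the identification
    information (here: id and name) of the object to be monitored."""
    return json.dumps({
        "type": "monitor",
        "target": {"id": object_id, "name": object_name},
    })

# What the App would send after the user fills in the monitoring page
# and taps the send button.
msg = build_monitoring_instruction("O7", "washing machine")
decoded = json.loads(msg)
```

The self-moving device would parse the `target` field and resolve it against the environment map, as in step 11a.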
Fig. 1e is a schematic flow chart of another monitoring method provided in an exemplary embodiment of the present application. As shown in fig. 1e, the method comprises:
11e. The terminal device displays an environment map of the environment where the self-moving device is located, the environment map including the spatial regions in the environment and the objects existing in them.
12e. In response to the user's selection operation on the environment map, the terminal device determines the object or spatial region selected by the user as the object to be monitored.
13e. The terminal device sends a monitoring instruction to the self-moving device, the monitoring instruction instructing the self-moving device to perform video monitoring on the object to be monitored.
14e. After receiving the monitoring instruction, the self-moving device determines the position of the object to be monitored based on a pre-established environment map.
15e. The self-moving device moves to a position adapted to the object to be monitored and performs video monitoring on it.
16e. The self-moving device intercepts a video clip from the surveillance video and sends a notification message to the terminal device bound to it.
17e. The terminal device receives the notification message sent by the self-moving device and, according to the notification message, notifies the user to view the video clip.
In this embodiment, the terminal device may obtain an environment map of the environment where the self-moving device is located. For example, the user may upload or copy the environment map into the terminal device. Alternatively, the self-moving device may transmit the environment map of its environment to the terminal device.
In an optional embodiment, the self-moving device may collect environment information in its environment and construct an environment map from the collected information. Taking a floor-sweeping robot as an example, while performing cleaning work in a home environment, the robot can detect surrounding environment information through sensors such as a radar, an infrared sensor, a bumper, and a camera, and then establish an environment map of its environment from that information. Further, the self-moving device may also autonomously partition its environment by combining image recognition such as deep learning models with other algorithms; for example, the floor-sweeping robot may divide the home environment into different spatial regions such as a master bedroom, a living room, a washroom, a kitchen, and a secondary bedroom, and mark them on the environment map, as shown in fig. 1f. In fig. 1f, spatial regions A, B, C, D, E, F, G, H, I, J, and K are marked on the environment map, which is of course only an example. At the same time, the self-moving device can mark the objects contained in each spatial region at the corresponding positions on the environment map. Objects may differ between spatial regions, and here they mainly include immovable objects. For example, objects within a kitchen include but are not limited to cabinets, a refrigerator, a cooktop, and a microwave oven; objects within a bedroom include but are not limited to a bed, bedside cabinets, a wardrobe, an air conditioner, and bedside lamps. For example, the home environment map shown in fig. 1g includes several spatial regions such as a master bedroom, a secondary bedroom, a living room, a kitchen, a balcony, and a toilet: the master bedroom contains objects such as a bed, a table, and a wardrobe; the secondary bedroom contains a bed, a desk, and a wardrobe; the living room contains a table, a television, a tea table, a sofa, an air conditioner, and a carpet; the balcony contains a washing machine; and the toilet contains a toilet bowl. From the home environment map shown in fig. 1g, the position of each object in its spatial region can be clearly seen.
After generating the environment map of its environment, the self-moving device can send the map to the terminal device. For example, the self-moving device and the terminal device may establish a communication connection through infrared, Bluetooth, USB, WiFi, or a server.
Further optionally, for an environment map autonomously generated by the self-moving device, the terminal device may present the map to the user before use, and the user may modify it. For example, the user may modify a spatial region marked on the map, such as changing "kitchen" to "living room"; for another example, the user can correct the name or position of an object marked on the map. Such corrections can improve the accuracy of the environment map.
The terminal device may display to the user the environment map of the environment where the self-moving device is located. The user can then set the object to be monitored through the map. For example, the user may select one or more spatial regions on the map as objects to be monitored, or select an object within a certain spatial region, such as an air conditioner, a washing machine, or a refrigerator. In response to the user's selection operation on the environment map, the terminal device determines the selected object or spatial region as the object to be monitored. The manner of the selection operation is not limited here; it may be, for example, pointing, long pressing, circling, touching, hovering, sliding, or dragging.
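Resolving such a selection operation can be sketched as follows: a tap at map coordinates is matched to the nearest marked object within a small radius, and otherwise to the spatial region containing the point. The data structures and the radius threshold are illustrative assumptions.

```python
def resolve_selection(tap, objects, regions, radius=0.5):
    """objects: {name: (x, y)}; regions: {name: (x0, y0, x1, y1)}.
    Return ('object', name), ('region', name), or None."""
    tx, ty = tap
    best, best_d = None, radius
    for name, (x, y) in objects.items():
        d = ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
        if d <= best_d:                      # nearest object within radius wins
            best, best_d = name, d
    if best is not None:
        return ("object", best)
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= tx <= x1 and y0 <= ty <= y1:  # otherwise: containing region
            return ("region", name)
    return None

objects = {"washing machine": (1.0, 1.0)}
regions = {"balcony": (0.0, 0.0, 3.0, 2.0)}
sel1 = resolve_selection((1.1, 1.0), objects, regions)  # near the object
sel2 = resolve_selection((2.5, 0.5), objects, regions)  # inside the region
```

Whichever is selected, its identification information then goes into the monitoring instruction of step 13e.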
After the object to be monitored is determined, the terminal device may carry its identification information in the monitoring instruction and send the instruction to the self-moving device through the communication connection between them. After receiving the monitoring instruction, the self-moving device can move to a position adapted to the object to be monitored, perform video monitoring on it, analyze the surveillance video, intercept a video clip from it, and send a notification message to the terminal device to notify the user to view the clip. Optionally, the self-moving device may determine whether a video clip satisfying the set condition appears in the surveillance video; if so, it intercepts that clip and sends a notification message to the terminal device.
After the terminal device sends the monitoring instruction to the self-mobile device, the terminal device can wait for the notification message sent by the self-mobile device. And when receiving the notification message sent by the mobile equipment, notifying the user to view the video clip intercepted from the monitoring video. The reminding mode includes but is not limited to: pop-up messages, notification messages, voice reminders, screen flashes, etc.
Optionally, the user may log in to the self-moving device through the App on the terminal device and then view the video clip satisfying the condition. Alternatively, the self-moving device may carry the clip in the notification message sent to the terminal device, so that the user can view it directly on the terminal device.
It should be noted that regardless of how the user issues the monitoring instruction, the self-moving device can perform video monitoring on the object to be monitored according to that instruction. Besides this, the self-moving device can also perform monitoring tasks on its own, as explained in the following embodiments.
Fig. 2 is a schematic flow chart of another monitoring method according to an exemplary embodiment of the present application. As shown in fig. 2, the method includes:
20. When a set event occurs, the self-moving device determines the position of a set spatial region based on a pre-established environment map.
21. The self-moving device moves to the set spatial region and moves autonomously within it.
22. While moving autonomously within the set spatial region, the self-moving device performs video monitoring on the region.
23. The self-moving device intercepts video clips from the surveillance video and provides them for the user to view.
In this embodiment, the event and the spatial region are preset on the self-moving device. When the set event occurs, the self-moving device performs video monitoring on the set spatial region. Events that can trigger this include, but are not limited to: a set monitoring time arrives, a set monitoring period elapses, a startup or wake-up event occurs, or a job instruction is received, the job instruction instructing the self-moving device to execute a job task in the set spatial region. The following examples illustrate:
For example, some events may occur at a specific time in the set spatial region. The user may set a monitoring time on the self-moving device, for example 3 p.m. on October 21st; when that time arrives, the self-moving device autonomously performs video monitoring on the set spatial region.
For another example, some events may occur periodically in the set spatial region. The user may set a monitoring period on the self-moving device, for example once every two days; when the set period arrives, the self-moving device autonomously performs video monitoring on the set spatial region.
For another example, in some scenarios, the user may configure the self-moving device to autonomously perform video monitoring on the set spatial region each time it is powered on or woken up.
For another example, in some scenarios, the user may configure the self-moving device to autonomously perform video monitoring on the set spatial region upon receiving a job instruction. On receiving a job instruction, the self-moving device generally executes a job task in the relevant spatial region and moves autonomously while doing so, so it can perform video monitoring on the set spatial region in passing. Optionally, in this scenario, the set spatial region may be the work region specified by the job instruction, but this is not limiting.
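The trigger events listed above can be sketched as one predicate. The state fields are illustrative assumptions; the text only names the categories (set time reached, period elapsed, startup or wake-up, job instruction received).

```python
def should_start_monitoring(state):
    """Return True if any of the set events described above has occurred."""
    if state.get("monitor_time_reached"):      # a set monitoring time arrives
        return True
    last, period, now = state.get("last_run"), state.get("period"), state.get("now")
    if None not in (last, period, now) and now - last >= period:
        return True                            # the set monitoring period elapses
    if state.get("started") or state.get("woken"):
        return True                            # startup or wake-up event
    if state.get("job_instruction") is not None:
        return True                            # a job instruction is received
    return False
```

When the predicate fires, the device proceeds with steps 20 to 23: look up the set region on the map, move there, monitor, and intercept clips.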
In this embodiment, the set spatial region may be any spatial region in the environment where the self-moving device is located; this is not limited here and may be flexibly set according to the application scenario and the monitoring requirement. For example, in some scenarios the kitchen may be set as the spatial region requiring video monitoring, and in others the living room.
In this embodiment, when a setting event occurs, the self-moving device may determine the position of a set spatial region based on a pre-established environment map, move into the set spatial region, and autonomously move within the spatial region; in the process of autonomous movement in a set space region, performing video monitoring on the space region, analyzing a monitoring video, and intercepting a video clip from the monitoring video; and providing the video clip for a user to view.
It should be noted that the present embodiment does not limit the way of providing the video clip for the user to view, and the following methods can be adopted, but are not limited to:
mode 1:and outputting prompt information from the mobile device to prompt the user to view the video clip on the mobile device. The method is suitable for the situation that the user is located in the environment where the self-moving device is located, and the user can directly log in the self-moving device and view the video clip on the self-moving device. Wherein, outputting the prompt information from the mobile device may be outputting a prompt sound through its audio module (e.g. a speaker), or outputting a prompt signal through the on/off of a signal lamp.
Mode 2: The self-moving device controls an audio playing device in the same environment to emit a prompt tone, prompting the user to view the video clip on the self-moving device. This mode is suitable when the user is in the environment where the self-moving device is located; prompted by the tone, the user can directly log in to the self-moving device and view the clips satisfying the condition.
Mode 3: The self-moving device sends a notification message to the terminal device bound to it, notifying the user to view the video clip on the self-moving device. This mode is suitable both when the user is in the environment where the self-moving device is located and when the user is not. According to the notification message, the user may log in to the self-moving device remotely or locally and view the clip on it. Alternatively, the notification message may carry the video clip, in which case the user can view the clip directly on the terminal device.
The sending method of the notification message includes but is not limited to: a short message mode, a mail mode, or an in-application message mode, etc.
Mode 4: The self-moving device sends the video clip directly to the terminal device so that the user can view it there. This mode is suitable both when the user is in the environment where the self-moving device is located and when the user is not. The user can view the clip directly on the terminal device.
The sending method of the video clip includes but is not limited to: a short message mode, a mail mode, or an in-application message mode, etc.
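The four modes above can be sketched as a simple dispatcher. Which channels exist and whether the user is in the device's environment would be configuration in practice; the priority order chosen here is an illustrative assumption.

```python
def choose_notification(user_in_environment, has_speaker,
                        has_bound_terminal, clip_small_enough):
    """Pick one of the four notification modes described above."""
    if has_bound_terminal and clip_small_enough:
        return "mode4: send clip to terminal"        # view locally on the terminal
    if has_bound_terminal:
        return "mode3: notification message"         # log in and view on the device
    if user_in_environment and has_speaker:
        return "mode2: speaker prompt tone"
    if user_in_environment:
        return "mode1: prompt on self-moving device"
    return "no channel available"
```

As the text notes, the modes may also be combined rather than used exclusively; the dispatcher only illustrates when each one applies.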
It should be noted that the above notification methods can be used alternatively or in combination, and they are not only applicable to the embodiment shown in fig. 2, but also applicable to the foregoing or following embodiments.
Fig. 3a is a schematic structural diagram of a monitoring system according to an exemplary embodiment of the present application. With reference to the monitoring system shown in fig. 3a, a flow of a monitoring method applicable to the monitoring system provided in the embodiment of the present application is shown in fig. 3b, where the method includes the following steps:
30b. The self-moving device receives a monitoring instruction, the monitoring instruction instructing it to perform video monitoring on an object to be monitored.
31b. The self-moving device determines the position of the object to be monitored based on a pre-established environment map.
32b. The self-moving device moves to a position adapted to the object to be monitored and performs video monitoring on it.
33b. The self-moving device uploads the surveillance video to the server, so that the server intercepts video clips from it and provides them for the user to view.
34b. The server receives the surveillance video sent by the self-moving device, the surveillance video being obtained by the self-moving device performing video monitoring on the object to be monitored.
35b. The server analyzes the surveillance video, intercepts video clips from it, and provides them for the user to view.
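Steps 30b to 35b can be sketched end to end: the self-moving device only records and uploads, while the server, which has the computing advantage the text mentions, runs the analysis and intercepts clips. All classes and helpers are hypothetical placeholders.

```python
class Server:
    """Stand-in for the server side of steps 34b-35b."""
    def __init__(self, detector):
        self.detector = detector        # stands in for the server's analysis model
        self.notifications = []

    def receive_video(self, video):     # step 34b: receive the uploaded video
        clips = [seg for seg in video if self.detector(seg)]  # step 35b: intercept
        for clip in clips:
            self.notifications.append(clip)   # provide clips for the user to view
        return clips

def device_monitor_and_upload(server, video):
    """Steps 30b-33b: the device records and uploads; no local analysis."""
    return server.receive_video(video)

server = Server(lambda seg: seg == "fire")
clips = device_monitor_and_upload(server, ["idle", "fire", "idle"])
```

The division of labour is the point of this embodiment: compared with the flow of fig. 1a, the clip interception moves from the device to the server.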
In this embodiment, when a user needs to monitor an object or a spatial region in the environment where the self-moving device is located, the user may send a monitoring instruction to the self-moving device, instructing it to perform video monitoring on the object to be monitored. For the manner in which the user sends the monitoring instruction, refer to the descriptions of the embodiments shown in figs. 1b to 1d and fig. 2, which are not repeated here.
After receiving the monitoring instruction, the self-moving device determines from it that video monitoring needs to be performed on the object to be monitored, determines the position of the object based on a pre-established environment map, moves to a position adapted to the object, and performs video monitoring on the object to be monitored. For a description of the "position adapted to the object to be monitored", refer to the embodiment shown in fig. 1a, which is not repeated here.
In this embodiment, the self-moving device has a video capture function in addition to its autonomous movement function; for example, it is provided with a visual sensor such as a camera. On this basis, after moving to the position adapted to the object to be monitored, the self-moving device can use its video capture function to perform video monitoring on the object, upload the surveillance video to the server, and let the server, with its computing advantage, analyze and judge the video.
The server can receive the surveillance video uploaded by the self-moving device, analyze it, intercept video clips from it, and provide the clips for the user to view.
In this embodiment, the server may send a notification message to the user's terminal device so that the user can view the video clip on the server. Optionally, after receiving the notification message, the terminal device may send a login request to the server; the server authenticates the terminal device according to the request; once authentication succeeds, the terminal device can log in to the server to view the clips stored there. Alternatively, the server may carry the video clip in the notification message sent to the terminal device; the terminal device can then parse the clip from the message, store it locally, and remind the user to view it, for example by issuing an alert tone or displaying a notification.
Alternatively, the server may also send the video clip directly to the terminal device, which can store it locally for the user to view. Further, the terminal device may remind the user to view it, for example by issuing an alert tone or displaying a notification message.
Fig. 3c is a schematic flowchart of another monitoring method suitable for a monitoring system according to an exemplary embodiment of the present application. As shown in fig. 3c, the method comprises the steps of:
30c, when a setting event occurs, the self-moving device determines the position of the set spatial region based on a pre-established environment map.
31c, the self-moving device moves to the set spatial region and moves autonomously within the set spatial region.
32c, during the autonomous movement within the set spatial region, the self-moving device performs video monitoring on the set spatial region.
33c, the self-moving device uploads the monitoring video to the server, so that the server intercepts video segments from the monitoring video and provides them for the user to view.
34c, the server receives the monitoring video uploaded by the self-moving device, where the monitoring video is obtained by the self-moving device performing video monitoring on the object to be monitored.
35c, the server analyzes the monitoring video, intercepts video clips from it, and provides the video clips for the user to view.
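The steps above (30c through 35c) can be sketched as one orchestration function; every callable here is a hypothetical stand-in for behavior the text leaves unspecified.

```python
def monitor_on_event(event, trigger, locate, move_to, record, upload, extract_clips):
    """Hedged sketch of steps 30c-35c; all callables are hypothetical."""
    if event != trigger:         # 30c: only the configured setting event starts monitoring
        return None
    position = locate()          # 30c: look up the set region in the environment map
    move_to(position)            # 31c: move into the set spatial region
    video = record()             # 32c: capture video while moving autonomously
    upload(video)                # 33c: upload the monitoring video to the server
    return extract_clips(video)  # 34c-35c: server intercepts clips for the user
```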
In this embodiment, the setting event and the set spatial region are preconfigured on the self-moving device. When the setting event occurs, the self-moving device performs video monitoring on the set spatial region. For examples of events that can trigger the self-moving device to perform video monitoring on a set spatial region, and of the set spatial region itself, refer to the description in the embodiment shown in fig. 2, which is not repeated here.
In this embodiment, when the setting event occurs, the self-moving device may determine the position of the set spatial region based on a pre-established environment map, move into the set spatial region, and move autonomously within it; during this movement, the self-moving device performs video monitoring on the spatial region and uploads the monitoring video to the server, so that the monitoring video is analyzed and judged by means of the computing advantages of the server.
For the server, the monitoring video uploaded by the self-moving device can be received; the monitoring video can then be analyzed, and video segments can be intercepted from it and provided for the user to view. For the ways in which the server provides the video clip for the user to view, reference may be made to the description in the embodiment shown in fig. 3b, which is not repeated here.
In the foregoing embodiments of the present application, the self-moving device needs to perform video monitoring on an object to be monitored. Depending on the object to be monitored, the way the self-moving device performs video monitoring differs. Each case is explained below:
In the first case: the environment in which the self-moving device is located includes movable objects. A movable object may be a person, a pet, or another autonomously movable device in the environment other than the self-moving device itself. For example, an autonomously movable air purifier may be video monitored by a sweeping robot. For another example, the user may perform video monitoring of a child, an elderly person, or a pet (e.g., a cat or a dog) in the home via a sweeping robot or an autonomously movable air purifier.
When video monitoring needs to be performed on a movable object in the environment where the self-moving device is located, the self-moving device first locks onto the object to be monitored according to the monitoring instruction, for example, identifying the child, elderly person, or pet to be monitored; it then performs video monitoring on the object, which includes: the self-moving device determines the position of the object to be monitored based on a pre-established environment map, moves to the vicinity of the object to be monitored (i.e., a position adapted to the object to be monitored), and captures video containing the object to be monitored with its camera from that position; when the object to be monitored moves, the self-moving device follows it and continues to capture video containing the object during the movement.
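The follow-and-film behavior for a movable object can be sketched as a small loop; `device` is a hypothetical interface with `position`, `move_to()`, and `capture_frame()`, and `target_positions` stands in for the stream of observed positions of the monitored object.

```python
def follow_and_film(target_positions, device):
    # Sketch: keep a movable object in frame. For each observed target
    # position, follow the object if it has moved, then capture a frame.
    frames = []
    for pos in target_positions:
        if device.position != pos:  # the object moved: follow it
            device.move_to(pos)
        frames.append(device.capture_frame())  # keep filming the object
    return frames
```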
Optionally, in the embodiments shown in fig. 1a to 1d and fig. 2, the self-moving device further analyzes the surveillance video to intercept a video segment from it. In the embodiments shown in fig. 3a to 3c, the self-moving device uploads the surveillance video to the server, and the server analyzes the surveillance video to intercept a video clip from it.
For example, in a scene of monitoring children or the elderly, the self-moving device or the server may analyze whether a picture of a child or elderly person falling down appears in the monitoring video. In addition, for children, it can be analyzed whether pictures of the child crying or screaming appear; for the elderly, it can be analyzed whether pictures such as taking medicine on time appear. When such pictures are found, the self-moving device or the server intercepts the video clips containing them and provides the clips for the user to view in time, so that the user can discover abnormal situations promptly and take corresponding measures. Since the self-moving device or the server intercepts and provides picture content only when it finds content the user needs to see, the user does not have to watch the monitoring video in real time or at intervals, which reduces the interference of monitoring with the user and improves the user experience. In addition, the user does not need to look through a large amount of monitoring video and can directly view the corresponding video clips, which saves the user's time, improves viewing efficiency, and makes it easier for the user to take corresponding measures in time.
In the second case:from the environment of the mobile deviceIncluding immovable objects. Immovable objects may be stationary or relatively stationary objects in the environment, such as air conditioners, refrigerators, televisions, washing machines, sofas, lights, cooktops, and the like. For example, a user may perform video monitoring on home devices such as air conditioners, refrigerators, televisions, lamps, cooktops, and the like in a home through a sweeping robot or an autonomously movable air purifier.
When video monitoring needs to be performed on an immovable object in the environment where the self-moving device is located, the self-moving device first locks onto the object to be monitored according to the monitoring instruction, such as the refrigerator, washing machine, television, or lamp to be monitored; it then performs video monitoring on the object, which includes: the self-moving device determines the position of the object to be monitored based on a pre-established environment map, moves to the vicinity of the object to be monitored (i.e., a position adapted to the object to be monitored), and captures video containing the object with its camera from that position.
Optionally, in the embodiments shown in fig. 1a to 1d and fig. 2, the self-moving device further analyzes the surveillance video to intercept a video segment from it. In the embodiments shown in fig. 3a to 3c, the self-moving device uploads the surveillance video to the server, and the server analyzes the surveillance video to intercept a video clip from it.
For example, in a scene of monitoring the working state of a washing machine, the self-moving device or the server may analyze whether the monitoring video contains a picture of the washing machine sounding an alert after washing is finished, or a picture of the washing machine raising an alarm during washing. When such pictures are found, the self-moving device or the server intercepts the video clips containing them and provides the clips for the user to view in time, so that the user can discover the relevant situation promptly and take corresponding measures. For example, if a picture of the washing machine sounding an alert after washing is finished appears, the user who sees it can turn off the washing machine in time or ask a family member to hang the clothes out to dry.
For another example, in a scene of monitoring the state of a night light in a child's room, the self-moving device or the server may analyze whether the monitoring video contains a picture of the night light flickering (a fault) or suddenly going out. When such pictures are found, the self-moving device or the server intercepts the video clips containing them and provides the clips for the user to view in time, so that the user can discover the relevant situation promptly and take corresponding measures. For example, if a picture of the night light flickering or suddenly going out appears, the user can check the child's room in time and inspect the circuit or replace the night light, so as to ensure the child's safety.
In the third case: the environment in which the self-moving device is located is divided into different spatial regions. Taking a home environment as an example, it includes spatial regions such as a kitchen, bedroom, balcony, living room, and bathroom. Taking a supermarket environment as an example, it includes spatial regions such as a fresh-produce area, shelf area, delicatessen area, imported-goods area, and wine area.
When video monitoring needs to be performed on a spatial region in the environment where the self-moving device is located, the self-moving device first locks onto the spatial region to be monitored, such as the kitchen or living room, according to the monitoring instruction; it then performs video monitoring on the region, which includes: the self-moving device determines the position of the spatial region to be monitored based on a pre-established environment map, moves into the spatial region (i.e., a position adapted to the object to be monitored), moves within the region, and captures video of the region with its camera during the movement.
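The third case, moving within the locked region while recording, can be sketched as a simple patrol over waypoints; `move_to` and `capture_frame` are hypothetical stand-ins for the device's motion and camera interfaces, and the waypoint list is assumed to come from the environment map.

```python
def patrol_region(waypoints, move_to, capture_frame):
    # Sketch of region monitoring: visit each waypoint inside the
    # locked spatial region and record along the way.
    frames = []
    for point in waypoints:
        move_to(point)                  # autonomous movement within the region
        frames.append(capture_frame())  # capture video during the movement
    return frames
```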
Optionally, in the embodiments shown in fig. 1a to 1d and fig. 2, the self-moving device further analyzes the surveillance video to intercept a video segment from it. In the embodiments shown in fig. 3a to 3c, the self-moving device uploads the surveillance video to the server, and the server analyzes the surveillance video to intercept a video clip from it.
For example, in a kitchen monitoring scene, the self-moving device or the server can analyze whether pictures such as smoke, fire, or overflowing water appear in the monitoring video. When such pictures are found, the self-moving device or the server intercepts the video clips containing them and provides the clips for the user to view in time, so that the user can discover dangerous situations promptly and take corresponding measures. For example, if a picture of smoke or fire appears in the kitchen monitoring video, the user who sees it can raise an alarm in time and start a fire extinguisher stored in the kitchen, so as to suppress the fire promptly.
For another example, in a monitoring scene of the fresh-produce area of a supermarket, the self-moving device or the server can analyze whether pictures such as a freezer losing power or a fish tank running short of water appear in the monitoring video. When such pictures are found, the self-moving device or the server intercepts the video clips containing them and provides the clips for the user to view in time, so that the user can discover abnormal situations promptly and take corresponding measures. For example, if the freezer loses power or the fish tank runs short of water, the user who sees the picture can restore power to the freezer or add water to the fish tank in time, so as to keep the fresh produce fresh.
Further, in any of the cases above, the monitoring instruction may also carry a monitoring time period, which limits the period during which the object to be monitored is monitored. Based on this, the self-moving device performs video monitoring on the object to be monitored within the monitoring period. For example, in a washing machine monitoring scene where the washing machine works from 2 pm to 3 pm, the user may set the monitoring period to 2 pm to 3 pm, and accordingly the self-moving device captures a monitoring video containing the washing machine with its camera during that period. For another example, in a night light monitoring scene in a child's room where the night light works from 8 pm to 4 am the next morning, the user may set the monitoring period to 8 pm to 4 am the next morning, and accordingly the self-moving device captures a monitoring video containing the night light with its camera during that period.
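A monitoring period that wraps past midnight (such as the 8 pm to 4 am night light example) needs slightly different handling than one that does not; a small helper sketch, with the wrap convention an assumption of this illustration:

```python
from datetime import time

def in_monitoring_period(now: time, start: time, end: time) -> bool:
    # True when `now` falls inside the monitoring period; a period whose
    # end is earlier than its start is taken to wrap past midnight.
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end
```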
Further, in the above embodiments, the self-moving device or the server needs to analyze whether the monitoring video contains a video segment that needs to be intercepted (for example, a segment that satisfies a condition). The embodiments of the present application do not limit the analysis method used by the self-moving device or the server; any method that can analyze whether a video clip that needs to be intercepted exists in the surveillance video is applicable. Several methods are listed below:
Mode 1: feature extraction can be performed on some known video content, and the extracted features are stored as reference features. When the monitoring video is analyzed, feature extraction is performed on each video frame, and the extracted features are matched against the pre-extracted reference features. If the matching degree is greater than a set matching-degree threshold, the matching degree is considered high, and it is determined that a video clip satisfying the condition appears in that frame; otherwise, the matching degree is not high, and it is determined that no video clip satisfying the condition appears in that frame.
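Mode 1 amounts to comparing each frame's features against stored references under a threshold. A toy sketch follows, using cosine similarity as the matching degree; the actual feature extractor and threshold value are unspecified in the text and assumed here.

```python
import math

def cosine_similarity(a, b):
    # Matching degree between two feature vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def frame_matches(frame_features, reference_features, threshold=0.9):
    # Mode 1: a frame belongs to a clip that should be intercepted
    # when it matches any stored reference above the threshold.
    return any(cosine_similarity(frame_features, ref) >= threshold
               for ref in reference_features)
```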
Mode 2: some known video content can be used in advance as training samples for deep learning, so as to build a deep learning model for video recognition. When the monitoring video is analyzed, it can be fed into the deep learning model, which analyzes whether the corresponding action features appear in the monitoring video and thereby judges whether a video segment needs to be intercepted. If the corresponding action features appear, a video clip that needs to be intercepted exists in the monitoring video, and the video clip corresponding to those action features is intercepted from the monitoring video; otherwise, no video clip needs to be intercepted.
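Once a model has flagged individual frames, its output still has to be turned into clip boundaries. The sketch below assumes a hypothetical per-frame `model` callable that returns True when the action of interest appears, and groups consecutive positive frames into (start, end) index pairs:

```python
def intercept_clips(frames, model):
    # Group consecutive frames flagged by the (hypothetical) action
    # recognition model into clips, returned as (start, end) indices.
    clips, start = [], None
    for i, frame in enumerate(frames):
        if model(frame):                  # model saw the action of interest
            if start is None:
                start = i                 # a new clip begins here
        elif start is not None:
            clips.append((start, i - 1))  # the running clip just ended
            start = None
    if start is not None:
        clips.append((start, len(frames) - 1))  # clip ran to the last frame
    return clips
```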
It should be noted that in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel; the sequence numbers of the operations, such as 11a, 11b, etc., are merely used for distinguishing different operations and do not by themselves represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should also be noted that the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc.; they do not represent a sequential order, nor do they require that a "first" and a "second" be of different types.
Fig. 4 is a schematic structural diagram of a self-moving device according to an exemplary embodiment of the present application. As shown in fig. 4, the self-moving device includes: a device body 40, on which are provided one or more processors 401, one or more memories 402 for storing computer instructions, a communication component 403, and a camera 407.
Further, as shown in fig. 4, the self-moving device may further include: a display 404, a power component 405, an audio component 406, and other components. Only some components are shown schematically in this embodiment, which does not mean that the self-moving device includes only these components. It should be noted that the components shown in the dashed boxes in fig. 4 are optional, not mandatory.
In a first alternative embodiment, the one or more processors 401 are configured to execute computer instructions stored in the one or more memories 402 to: receive a monitoring instruction through the communication component 403, where the monitoring instruction indicates that video monitoring is to be performed on an object to be monitored; determine the position of the object to be monitored based on a pre-established environment map; control the self-moving device to move to a position adapted to the object to be monitored and perform video monitoring on the object; and intercept a video clip from the monitoring video and provide it for the user to view.
Or, in a second alternative embodiment, the one or more processors 401 are configured to execute computer instructions stored in the one or more memories 402 to: receive a monitoring instruction through the communication component 403, where the monitoring instruction indicates that video monitoring is to be performed on an object to be monitored; determine the position of the object to be monitored based on a pre-established environment map; control the self-moving device to move to a position adapted to the object to be monitored and perform video monitoring on the object; and upload the surveillance video to the server through the communication component 403, so that the server can intercept a video clip from the surveillance video for the user to view.
Or
In a third alternative embodiment, the one or more processors 401 are configured to execute computer instructions stored in the one or more memories 402 to: when a setting event occurs, determine the position of the set spatial region based on a pre-established environment map; control the self-moving device to move to the set spatial region and move autonomously within it; perform video monitoring on the set spatial region while the self-moving device moves autonomously within it; and intercept a video clip from the monitoring video and provide it for the user to view.
Or
In a fourth alternative embodiment, the one or more processors 401 are configured to execute computer instructions stored in the one or more memories 402 to: when a setting event occurs, determine the position of the set spatial region based on a pre-established environment map; control the self-moving device to move to the set spatial region and move autonomously within it; perform video monitoring on the set spatial region while the self-moving device moves within it; and upload the surveillance video to the server through the communication component 403, so that the server can intercept video segments from the surveillance video for the user to view.
In any of the above alternative embodiments, the object to be monitored may be a movable object in the environment where the self-moving device is located, and when performing video monitoring on the object to be monitored, the one or more processors 401 are specifically configured to: control the self-moving device to capture, with its camera, video containing the object to be monitored from the position adapted to the object; and when the object to be monitored moves, control the self-moving device to follow it and continue capturing video containing the object with the camera.
In any of the above alternative embodiments, the object to be monitored may be an immovable object in the environment where the self-moving device is located, and when performing video monitoring on the object to be monitored, the one or more processors 401 are specifically configured to: control the self-moving device to capture, with its camera, video containing the object to be monitored from the position adapted to the object.
In any of the above alternative embodiments, the object to be monitored may be a spatial area in the environment where the self-moving device is located, and when performing video monitoring on the object to be monitored, the one or more processors 401 are specifically configured to: control the self-moving device to move within the spatial area and capture video of the area with its camera during the movement.
In any of the above alternative embodiments, the monitoring instruction may further include a monitoring time period, and when performing video monitoring on the object to be monitored, the one or more processors 401 are specifically configured to: perform video monitoring on the object to be monitored within the monitoring time period.
In any of the above alternative embodiments, the one or more processors 401 may specifically intercept video segments that satisfy a condition from the surveillance video. The condition can be set flexibly and is not limited here. For example, the one or more processors 401 are specifically configured to: intercept a video clip containing set picture content from the monitoring video; or intercept a video segment in which the corresponding action features appear from the monitoring video based on a deep learning algorithm.
In the first and second alternative embodiments, the receiving of the monitoring instruction by the communication component 403 includes at least one of the following manners:
receiving a monitoring instruction sent by a user in a voice mode;
receiving a monitoring instruction sent by terminal equipment bound with the mobile equipment;
and receiving a monitoring instruction sent by an audio playing device in the same environment as the self-moving device, where the monitoring instruction is preset by the user.
In the first and third optional embodiments, when notifying the user to view the video clip, the one or more processors 401 may specifically adopt at least one of the following manners:
outputting prompt information to prompt a user to view the video clip on the self-moving device;
sending a notification message to the terminal device bound to the self-moving device through the communication component 403 to notify the user to view the video clip on the self-moving device;
controlling an audio playing device in the same environment as the self-moving device to play an alert tone, prompting the user to view the video clip on the self-moving device;
sending the video segments that satisfy the condition, through the communication component 403, to the terminal device bound to the self-moving device, so that the user can view the video segments on the terminal device.
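The four notification manners above can be sketched as one dispatch function; the device, terminal, and speaker interfaces here are hypothetical stand-ins, not APIs defined by the text.

```python
def notify_user(clip_path, mode, device=None, terminal=None, speaker=None):
    # Dispatch over the four notification manners listed above.
    if mode == "device_prompt":
        device.show_prompt(f"New clip: {clip_path}")       # prompt on the device itself
    elif mode == "terminal_message":
        terminal.send(f"View new clip on the device: {clip_path}")  # notify bound terminal
    elif mode == "speaker_tone":
        speaker.play_tone()                                # alert via co-located audio device
    elif mode == "push_clip":
        terminal.send_file(clip_path)                      # push the clip itself to the terminal
    else:
        raise ValueError(f"unknown notification mode: {mode}")
```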
In any of the above alternative embodiments, the one or more processors 401 are further configured to: acquire environmental information about the environment where the self-moving device is located and construct an environment map from it; and output the environment map to the user through the communication component 403, so that the user can determine the object to be monitored based on the environment map. The environment map includes the spatial areas in the environment where the self-moving device is located and the objects present in those areas.
Alternatively, the self-moving device of the present embodiment may be a robot, a cleaner, or the like.
In an alternative embodiment, the self-moving device is implemented as a robot. As shown in fig. 5, the robot 500 of this embodiment includes: a machine body 501, on which are provided one or more processors 502, one or more memories 503 for storing computer instructions, and a communication component 504. The communication component 504 may be a Wi-Fi module, an infrared module, a Bluetooth module, or the like.
In addition to the one or more processors 502, the communication component 504, and the one or more memories 503, some basic components of the robot 500 are provided on the machine body 501, such as a vision sensor 506, a power supply component 507, and a driving component 508. The vision sensor may be a camera or the like. Optionally, the driving component 508 may include driving wheels, a driving motor, universal wheels, and the like. Optionally, if the robot 500 is a sweeping robot, it may further include a sweeping assembly 505, which may include a sweeping motor, sweeping brushes, a dust-suction fan, and the like. The basic components included in different robots 500, and their configurations, differ; the embodiments of the present application give only some examples. It should be noted that the components shown in the dashed boxes in fig. 5 are optional, not mandatory.
It is noted that the one or more processors 502 and the one or more memories 503 may be disposed inside the machine body 501, or may be disposed on the surface of the machine body 501.
The machine body 501 is an execution mechanism by which the robot 500 performs a task, and can execute an operation designated by the processor 502 in a certain environment. The machine body 501 represents the appearance of the robot 500 to some extent. In the present embodiment, the external appearance of the robot 500 is not limited, and may be, for example, a circle, an ellipse, a triangle, a convex polygon, or the like.
The one or more memories 503 are used primarily to store computer instructions that are executable by the one or more processors 502 to cause the one or more processors 502 to control the robot 500 to perform corresponding tasks. In addition to storing computer instructions, the one or more memories 503 may also be configured to store other various data to support operations on the robot 500. Examples of such data include instructions for any application or method operating on the robot 500, an environment map of the environment/scene in which the robot 500 is located, a signal strength map, and so forth.
The one or more memories 503 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
One or more processors 502, which may be considered a control system for the robot 500, may be used to execute computer instructions stored in one or more memories 503 to control the robot 500 to perform corresponding tasks.
In this embodiment, the one or more processors 502 may execute computer instructions stored in the one or more memories 503 to:
receiving a monitoring instruction through the communication component 504, wherein the monitoring instruction indicates that video monitoring is performed on an object to be monitored; determining the position of an object to be monitored based on a pre-established environment map; controlling the robot 500 to move to a position adapted to the object to be monitored, and performing video monitoring on the object to be monitored; intercepting a video clip from the monitoring video and providing the video clip for a user to view;
or
Receiving a monitoring instruction through the communication component 504, wherein the monitoring instruction indicates that video monitoring is performed on an object to be monitored; determining the position of an object to be monitored based on a pre-established environment map; controlling the robot 500 to move to a position adapted to the object to be monitored, and performing video monitoring on the object to be monitored; and uploading the surveillance video to the server through the communication component 504 for the server to intercept the video clip from the surveillance video for viewing by the user;
or
When a setting event occurs, determining the position of a set spatial region based on a pre-established environment map; controlling the robot 500 to move to a set space region and move autonomously in the set space region; in the autonomous movement process of the robot 500 in the set spatial region, performing video monitoring on the set spatial region; intercepting a video clip from a monitoring video, and providing the video clip for a user to view;
or
When a setting event occurs, determining the position of a set spatial region based on a pre-established environment map; controlling the robot 500 to move to a set space region and move autonomously in the set space region; in the autonomous movement process of the robot 500 in the set spatial region, performing video monitoring on the set spatial region; the surveillance video is uploaded to the server via the communication component 504 for viewing by the user as a video clip intercepted by the server from the surveillance video.
In addition to the self-moving device or robot described above, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform actions including: receiving a monitoring instruction, where the monitoring instruction indicates that video monitoring is to be performed on an object to be monitored; determining the position of the object to be monitored based on a pre-established environment map; controlling the self-moving device to move to a position adapted to the object to be monitored and performing video monitoring on the object; and intercepting a video clip from the monitoring video and providing it for the user to view;
or
The computer instructions, when executed by the one or more processors, cause the one or more processors to perform actions including: receiving a monitoring instruction, where the monitoring instruction indicates that video monitoring is to be performed on an object to be monitored; determining the position of the object to be monitored based on a pre-established environment map; controlling the self-moving device to move to a position adapted to the object to be monitored and performing video monitoring on the object; and uploading the monitoring video to a server, so that the server can intercept video clips from the monitoring video and provide them for the user to view;
or
The computer instructions, when executed by the one or more processors, cause the one or more processors to perform acts comprising: when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map; controlling the mobile equipment to move to a set space region and autonomously move in the set space region; performing video monitoring on a set space region in the autonomous moving process of the autonomous moving equipment in the set space region; intercepting a video clip from a monitoring video, and providing the video clip for a user to view;
or
The computer instructions, when executed by the one or more processors, cause the one or more processors to perform acts comprising: when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map; controlling the mobile equipment to move to a set space region and autonomously move in the set space region; performing video monitoring on a set space region in the autonomous moving process of the autonomous moving equipment in the set space region; and uploading the monitoring video to a server, so that the server intercepts video segments from the monitoring video and provides the video segments for a user to view.
In addition to the above actions, the one or more processors may also perform other actions when executing the computer instructions in the computer-readable storage medium, and the other actions may refer to the description in the foregoing embodiments and are not described herein again.
It is noted that the one or more processors executing the computer instructions may be processors in the self-moving device described above. When the self-moving device is the robot, the one or more processors executing the computer instructions are specifically processors in the robot.
Fig. 6 is a schematic structural diagram of a server according to an exemplary embodiment of the present application. As shown in fig. 6, the server includes: one or more processors 601, one or more memories 602 storing computer instructions, a communication component 603, and a power component 605. Only some components are shown schematically in this embodiment, which does not mean that the server includes only these components.
Among other things, the one or more processors 601 are configured to execute the computer instructions stored in the one or more memories 602 to: receive, through the communication component 603, a monitoring video sent by the self-moving device, wherein the monitoring video is obtained by the self-moving device performing video monitoring on an object to be monitored; and analyze the monitoring video, intercept a video segment from the monitoring video, and provide the video segment for a user to view.
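The server-side analysis step described above (receive the surveillance video, find the frames of interest, intercept a contiguous clip) can be illustrated with a small sketch. The motion-score heuristic, threshold, and function names are assumptions for illustration only; the patent does not prescribe a particular detection method:

```python
def extract_clip(frames, is_interesting, padding=1):
    """Return the smallest contiguous clip covering all interesting frames,
    padded by `padding` frames on each side; None if nothing is interesting."""
    hits = [i for i, f in enumerate(frames) if is_interesting(f)]
    if not hits:
        return None
    start = max(0, hits[0] - padding)
    end = min(len(frames), hits[-1] + 1 + padding)
    return frames[start:end]


# Usage: frames stand in for decoded video frames; a frame is "interesting"
# here when its (assumed) motion score exceeds a threshold.
frames = [{"t": t, "motion": m} for t, m in enumerate([0, 0, 5, 7, 0, 0, 6, 0])]
clip = extract_clip(frames, lambda f: f["motion"] > 4)
```

In practice the predicate could equally be the set-picture-content match or deep-learning action detection mentioned elsewhere in this application; only the clip-cutting mechanics are shown here.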
Optionally, when providing the video segment for viewing by the user, the one or more processors 601 are specifically configured to: and sending a notification message to the terminal equipment of the user so that the user can view the video clip on the server. Optionally, the one or more processors 601 may carry the video clip in a notification message and send the notification message to the terminal device.
Alternatively, the one or more processors 601 may also directly send the video clip to the terminal device for the user to view the video clip on the terminal device.
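The delivery options above (notify the user to view the clip on the server, carry the clip inside the notification message, or send the clip directly to the terminal device) amount to three message shapes. A minimal sketch, with field names that are illustrative assumptions:

```python
def deliver_clip(clip_id, clip_bytes, mode):
    """Build the message the server sends to the user's terminal device."""
    if mode == "notify":            # user views the clip on the server
        return {"type": "notification", "clip_id": clip_id}
    if mode == "notify_with_clip":  # clip carried inside the notification
        return {"type": "notification", "clip_id": clip_id, "clip": clip_bytes}
    if mode == "direct":            # clip sent directly for local viewing
        return {"type": "clip", "clip_id": clip_id, "clip": clip_bytes}
    raise ValueError(f"unknown delivery mode: {mode}")


msg = deliver_clip("c-001", b"\x00\x01", "notify")
```

The first mode keeps the payload on the server and sends only a pointer; the other two trade bandwidth for letting the user view the clip on the terminal device itself.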
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: receiving a monitoring video sent by a self-moving device, wherein the monitoring video is obtained by the self-moving device performing video monitoring on an object to be monitored; and analyzing the monitoring video, intercepting a video segment from the monitoring video, and providing the video segment for a user to view.
Fig. 7 is a schematic structural diagram of a terminal device according to an exemplary embodiment of the present application. As shown in fig. 7, the terminal device includes: one or more processors 701, one or more memories 702 storing computer instructions, a communication component 703, and a display 704.
Further, as shown in fig. 7, the terminal device may further include: a power component 705, an audio component 706, and other components. Only some components are shown schematically in this embodiment, which does not mean that the terminal device includes only these components. It is to be noted that the components shown in dashed-line boxes in fig. 7 are optional components, not essential components.
Among other things, the one or more processors 701 are configured to execute the computer instructions stored in the one or more memories 702 to: display, on the display 704, an environment map of the environment in which the self-moving device is located, the environment map including a spatial region in the environment and objects present in the spatial region; respond to a selection operation of a user on the environment map, and determine the object or spatial region selected by the user as an object to be monitored; send a monitoring instruction to the self-moving device through the communication component 703, wherein the monitoring instruction instructs the self-moving device to perform video monitoring on the object to be monitored; and receive a notification message through the communication component 703 and notify the user to view the video clip intercepted from the surveillance video according to the notification message.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: displaying an environment map of the environment in which the self-moving device is located, wherein the environment map comprises a spatial region in the environment and objects existing in the spatial region; responding to a selection operation of a user on the environment map, and determining the object or spatial region selected by the user as an object to be monitored; sending a monitoring instruction to the self-moving device, wherein the monitoring instruction instructs the self-moving device to perform video monitoring on the object to be monitored; and receiving a notification message, and notifying the user to view the video clip intercepted from the monitoring video according to the notification message.
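The terminal-device flow described above (display the environment map, let the user select an object to be monitored, send a monitoring instruction, and later surface the notification) can be sketched as follows; all names and message shapes are illustrative assumptions, not the patent's protocol:

```python
class TerminalApp:
    def __init__(self, env_map, send):
        self.env_map = env_map      # {object_name: region} shown to the user
        self.send = send            # callable that delivers a message
        self.notifications = []

    def select_target(self, name):
        """User taps an object on the displayed environment map."""
        if name not in self.env_map:
            raise KeyError(f"{name} not on environment map")
        # Send a monitoring instruction naming the object to be monitored.
        self.send({"type": "monitor", "target": name})
        return name

    def on_notification(self, message):
        # Tell the user a clip intercepted from the surveillance video is ready.
        self.notifications.append(f"Clip ready: {message['clip_id']}")


sent = []
app = TerminalApp({"crib": "bedroom", "stove": "kitchen"}, sent.append)
app.select_target("crib")
app.on_notification({"clip_id": "c-001"})
```

The `send` callback stands in for the communication component; a real app would render the map on the display and bind `on_notification` to the push channel.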
The communication component in the above embodiments is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may further include a near field communication (NFC) module, which may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and the like.
The display in the above embodiments includes a screen, which may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
The power component in the above embodiments provides power to the various components of the device in which it is located. The power component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component in the above embodiments may be configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (27)

1. A monitoring method adapted for use with a self-moving device, the method comprising:
receiving a monitoring instruction, wherein the monitoring instruction indicates that video monitoring is performed on an object to be monitored;
determining the position of the object to be monitored based on a pre-established environment map;
moving to a position matched with the object to be monitored, and carrying out video monitoring on the object to be monitored; and
intercepting a video clip from the monitoring video and providing the video clip for a user to view.
2. The method according to claim 1, wherein if the object to be monitored is an object that is movable in an environment where the self-moving device is located, performing video monitoring on the object to be monitored comprises:
starting from a position matched with the object to be monitored, acquiring a video containing the object to be monitored by using a camera; and
when the object to be monitored moves, following the object to be monitored, and continuing to use the camera to collect video containing the object to be monitored.
3. The method according to claim 1, wherein if the object to be monitored is an object that is not movable in the environment where the self-moving device is located, performing video monitoring on the object to be monitored comprises:
acquiring a video containing the object to be monitored by using a camera at the position matched with the object to be monitored.
4. The method according to claim 1, wherein if the object to be monitored is a spatial region in an environment where the self-moving device is located, performing video monitoring on the object to be monitored comprises:
moving in the spatial region, and acquiring video in the spatial region by using a camera during the movement.
5. The method of claim 1, wherein intercepting a video segment from a surveillance video comprises:
intercepting a video clip containing set picture content from the monitoring video; or
intercepting, based on a deep learning algorithm, video clips with corresponding action characteristics from the monitoring video.
6. The method of any of claims 1-5, wherein the monitoring instructions further comprise a monitoring period; carrying out video monitoring on the object to be monitored, comprising:
performing video monitoring on the object to be monitored within the monitoring time period.
7. The method according to any one of claims 1-5, wherein the receiving of the monitoring instruction comprises at least one of:
receiving a monitoring instruction issued by a user by voice;
receiving a monitoring instruction sent by a terminal device bound to the self-moving device; and
receiving a monitoring instruction, preset by the user, sent by an audio playing device in the same environment as the self-moving device.
8. The method of any of claims 1-5, wherein providing the video segments for viewing by a user comprises at least one of:
outputting, by the self-moving device, prompt information to prompt the user to view the video clip on the self-moving device;
sending, by the self-moving device, a notification message to a terminal device bound to the self-moving device, so as to notify the user to view the video clip on the self-moving device;
controlling an audio playing device in the same environment as the self-moving device to emit a prompt tone prompting the user to view the video clip on the self-moving device;
sending, by the self-moving device, the video clip to the terminal device bound to the self-moving device, so that the user can view the video clip on the terminal device.
9. The method of any one of claims 1-5, further comprising:
acquiring environment information of the environment where the self-moving device is located, and constructing the environment map according to the environment information; and
outputting the environment map to a user so that the user can determine an object to be monitored based on the environment map; wherein the environment map comprises a spatial region in the environment where the self-moving device is located and objects existing in the spatial region.
10. A monitoring method adapted for use with a self-moving device, the method comprising:
receiving a monitoring instruction, wherein the monitoring instruction indicates that video monitoring is performed on an object to be monitored;
determining the position of the object to be monitored based on a pre-established environment map;
moving to a position matched with the object to be monitored, and carrying out video monitoring on the object to be monitored; and
uploading the monitoring video to a server, so that the server intercepts video clips from the monitoring video and provides the video clips for a user to view.
11. A monitoring method adapted for use with a self-moving device, the method comprising:
when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map;
moving to a set space region and autonomously moving in the set space region;
in the process of autonomous movement in a set space region, carrying out video monitoring on the set space region;
and intercepting a video clip from the monitoring video and providing the video clip for a user to view.
12. The method of claim 11, wherein the setting event comprises at least one of: arrival of a set monitoring time, arrival of a set monitoring period, a start-up event, a wake-up event, and receipt of a job instruction;
wherein the job instruction instructs the self-moving device to execute a job task in the set spatial region.
13. A monitoring method adapted for use with a self-moving device, the method comprising:
when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map;
moving to a set space region and autonomously moving in the set space region;
in the process of autonomous movement in a set space region, carrying out video monitoring on the set space region;
and uploading the monitoring video to a server, so that the server intercepts video clips from the monitoring video and provides the video clips for a user to view.
14. A monitoring method adapted for use with a server, the method comprising:
receiving a monitoring video sent by a self-moving device, wherein the monitoring video is obtained by the self-moving device performing video monitoring on an object to be monitored;
analyzing the monitoring video, intercepting a video segment from the monitoring video, and providing the video segment for a user to view.
15. A monitoring method adapted for use with a terminal device, the method comprising:
displaying an environment map of the environment in which a self-moving device is located, the environment map including a spatial region in the environment and objects present in the spatial region;
responding to a selection operation of a user on the environment map, and determining the object or spatial region selected by the user as an object to be monitored;
sending a monitoring instruction to the self-moving device, wherein the monitoring instruction instructs the self-moving device to perform video monitoring on the object to be monitored; and
receiving a notification message, and notifying the user to view a video clip intercepted from the monitoring video according to the notification message.
16. A self-moving device, comprising: a device body, wherein one or more processors, a communication component, and one or more memories for storing computer instructions are arranged on the device body;
the one or more processors to execute the computer instructions to:
receiving a monitoring instruction through the communication component, wherein the monitoring instruction indicates that video monitoring is to be performed on an object to be monitored;
determining the position of the object to be monitored based on a pre-established environment map;
controlling the self-moving device to move to a position matched with the object to be monitored, and performing video monitoring on the object to be monitored; and
intercepting a video clip from the monitoring video and providing the video clip for a user to view.
17. A computer-readable storage medium having stored thereon computer instructions, which when executed by one or more processors, cause the one or more processors to perform acts comprising:
receiving a monitoring instruction, wherein the monitoring instruction instructs a self-moving device to perform video monitoring on an object to be monitored;
determining the position of the object to be monitored based on a pre-established environment map;
controlling the self-moving device to move to a position matched with the object to be monitored, and performing video monitoring on the object to be monitored; and
intercepting a video clip from the monitoring video and providing the video clip for a user to view.
18. A self-moving device, comprising: a device body, wherein one or more processors, a communication component, and one or more memories for storing computer instructions are arranged on the device body;
the one or more processors to execute the computer instructions to:
receiving a monitoring instruction through the communication component, wherein the monitoring instruction indicates that video monitoring is to be performed on an object to be monitored;
determining the position of the object to be monitored based on a pre-established environment map;
controlling the self-moving device to move to a position matched with the object to be monitored, and performing video monitoring on the object to be monitored; and
uploading the monitoring video to a server through the communication component, so that the server intercepts a video segment from the monitoring video and provides the video segment for a user to view.
19. A computer-readable storage medium having stored thereon computer instructions, which when executed by one or more processors, cause the one or more processors to perform acts comprising:
receiving a monitoring instruction, wherein the monitoring instruction instructs a self-moving device to perform video monitoring on an object to be monitored;
determining the position of the object to be monitored based on a pre-established environment map;
controlling the self-moving device to move to a position matched with the object to be monitored, and performing video monitoring on the object to be monitored; and
uploading the monitoring video to a server, so that the server intercepts video clips from the monitoring video and provides the video clips for a user to view.
20. A self-moving device, comprising: a device body, wherein one or more processors and one or more memories for storing computer instructions are arranged on the device body;
the one or more processors to execute the computer instructions to:
when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map;
controlling the self-moving device to move to the set spatial region and move autonomously in the set spatial region;
in the process that the self-moving device autonomously moves in the set spatial region, carrying out video monitoring on the set spatial region;
and intercepting a video clip from the monitoring video and providing the video clip for a user to view.
21. A computer-readable storage medium having stored thereon computer instructions, which when executed by one or more processors, cause the one or more processors to perform acts comprising:
when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map;
controlling the self-moving device to move to the set spatial region and autonomously move in the set spatial region;
in the process that the self-moving device autonomously moves in the set spatial region, carrying out video monitoring on the set spatial region;
and intercepting a video clip from the monitoring video and providing the video clip for a user to view.
22. A self-moving device, comprising: a device body, wherein one or more processors, a communication component, and one or more memories for storing computer instructions are arranged on the device body;
the one or more processors to execute the computer instructions to:
when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map;
controlling the self-moving device to move to the set spatial region and move autonomously in the set spatial region;
in the process that the self-moving device autonomously moves in the set spatial region, carrying out video monitoring on the set spatial region;
and uploading the monitoring video to a server through the communication component so that the server intercepts a video segment from the monitoring video and provides the video segment for a user to view.
23. A computer-readable storage medium having stored thereon computer instructions, which when executed by one or more processors, cause the one or more processors to perform acts comprising:
when a setting event occurs, determining the position of a set spatial region based on a pre-established environment map;
controlling the self-moving device to move to the set spatial region and autonomously move in the set spatial region;
in the process that the self-moving device autonomously moves in the set spatial region, carrying out video monitoring on the set spatial region;
and uploading the monitoring video to a server, so that the server intercepts video clips from the monitoring video and provides the video clips for a user to view.
24. A server, comprising: one or more processors, a communication component, and one or more memories storing computer instructions;
the one or more processors to execute the computer instructions to:
receiving, through the communication component, a monitoring video sent by a self-moving device, wherein the monitoring video is obtained by the self-moving device performing video monitoring on an object to be monitored;
analyzing the monitoring video, intercepting a video segment from the monitoring video, and providing the video segment for a user to view.
25. A computer-readable storage medium having stored thereon computer instructions, which when executed by one or more processors, cause the one or more processors to perform acts comprising:
receiving a monitoring video sent by a self-moving device, wherein the monitoring video is obtained by the self-moving device performing video monitoring on an object to be monitored;
analyzing the monitoring video, intercepting a video segment from the monitoring video, and providing the video segment for a user to view.
26. A terminal device, comprising: one or more processors, a display, a communication component, and one or more memories storing computer instructions;
the one or more processors to execute the computer instructions to:
displaying, on the display, an environment map of the environment in which a self-moving device is located, the environment map including a spatial region in the environment and objects present in the spatial region;
responding to a selection operation of a user on the environment map, and determining the object or spatial region selected by the user as an object to be monitored;
sending a monitoring instruction to the self-moving device through the communication component, wherein the monitoring instruction instructs the self-moving device to perform video monitoring on the object to be monitored; and
receiving a notification message through the communication component, and notifying the user to view the video clip intercepted from the monitoring video according to the notification message.
27. A computer-readable storage medium having stored thereon computer instructions, which when executed by one or more processors, cause the one or more processors to perform acts comprising:
displaying an environment map of the environment in which a self-moving device is located, the environment map including a spatial region in the environment and objects present in the spatial region;
responding to a selection operation of a user on the environment map, and determining the object or spatial region selected by the user as an object to be monitored;
sending a monitoring instruction to the self-moving device, wherein the monitoring instruction instructs the self-moving device to perform video monitoring on the object to be monitored; and
receiving a notification message, and notifying the user to view the video clip intercepted from the monitoring video according to the notification message.
CN201910101030.4A 2019-01-31 2019-01-31 Monitoring method, device and storage medium Pending CN111510667A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910101030.4A CN111510667A (en) 2019-01-31 2019-01-31 Monitoring method, device and storage medium


Publications (1)

Publication Number Publication Date
CN111510667A true CN111510667A (en) 2020-08-07

Family

ID=71864590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910101030.4A Pending CN111510667A (en) 2019-01-31 2019-01-31 Monitoring method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111510667A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112437267A (en) * 2020-11-13 2021-03-02 珠海大横琴科技发展有限公司 Monitoring map regulating and controlling method and device of video monitoring system and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication | Priority date | Publication date | Assignee | Title
KR20150033443A * | 2013-09-24 | 2015-04-01 | Samsung Techwin Co., Ltd. | Surveillance system controlling cleaning robot
CN105100710A * | 2015-07-07 | 2015-11-25 | Xiaomi Inc. | Indoor monitoring method and device
CN105828035A * | 2016-03-28 | 2016-08-03 | LeEco Holdings (Beijing) Co., Ltd. | Monitoring method and device
CN106411927A * | 2016-10-28 | 2017-02-15 | Beijing Qihoo Technology Co., Ltd. | Monitoring video recording method and device
CN108269571A * | 2018-03-07 | 2018-07-10 | Foshan Viomi Electrical Technology Co., Ltd. | Voice control terminal with camera function
CN208117857U * | 2017-07-31 | 2018-11-20 | Wang Jiahao | Nursing robot



Similar Documents

Publication | Title
JP6878494B2 (en) Devices, methods, and related information processing for homes equipped with smart sensors
US10755546B2 (en) Wireless device and methods for use in determining classroom attendance
JP6490675B2 (en) Smart home hazard detector that gives a non-alarm status signal at the right moment
US11583997B2 (en) Autonomous robot
CN112166350B (en) System and method for ultrasonic sensing in smart devices
US10706699B1 (en) Projector assisted monitoring system
CN114342357B (en) Event-based recording
US20160028828A1 (en) Hub and cloud based control and automation
EP3752999B1 (en) Systems and methods of power-management on smart devices
CN106415509A (en) Hub-to-hub peripheral discovery
CN112784664A (en) Semantic map construction and operation method, autonomous mobile device and storage medium
US20230333075A1 (en) Air quality sensors
CN114158980A (en) Job method, job mode configuration method, device, and storage medium
US11412157B1 (en) Continuous target recording
CN111510667A (en) Monitoring method, device and storage medium
CA3104823C (en) Network activity validation
JP6897696B2 (en) Servers, methods, and programs
CN108600062B (en) Control method, device and system of household appliance
CA3004002A1 (en) Video surveillance with context recognition
US11908308B2 (en) Reduction of false detections in a property monitoring system using ultrasound emitter
US11830332B2 (en) Vibration triangulation network
WO2023219649A1 (en) Context-based user interface
CN117041476A (en) Control method and device of intelligent equipment, storage medium and electronic device
CN115167162A (en) Visiting information reminding method and system

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication (application publication date: 2020-08-07)