CN112514374A - Monitoring system, monitoring method, mobile platform and remote equipment - Google Patents


Info

Publication number
CN112514374A
CN112514374A (application CN202080003992.7A)
Authority
CN
China
Prior art keywords: monitoring, mobile platform, monitoring target, target, image
Prior art date
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202080003992.7A
Other languages
Chinese (zh)
Inventor
郭晓东
Current Assignee (the listed assignee may be inaccurate)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN112514374A

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 — Television systems
    • H04N7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

A monitoring system (1000) comprising a remote device (100) and a mobile platform (200) in wireless communication with the remote device (100). The remote device (100) comprises a first processor (101), and the mobile platform (200) comprises a second processor (201). The first processor (101) is configured to acquire a monitoring image, identify a monitoring target in the monitoring image according to a preset identifier, and send the monitoring target to the mobile platform (200); the second processor (201) is configured to control the mobile platform (200) to track the monitoring target. A monitoring method, a mobile platform (200) and a remote device (100) are also disclosed.

Description

Monitoring system, monitoring method, mobile platform and remote equipment
Technical Field
The present disclosure relates to the field of monitoring technologies, and in particular, to a monitoring system, a monitoring method, a mobile platform, and a remote device.
Background
At present, intelligent following by unmanned aerial vehicles is mainly applied in the field of aerial photography. In use, the drone detects a person in the image and controls its own flight according to the person's movements. However, while the intelligent following function of the drone can detect people in the image, it cannot recognize a specific person. Furthermore, the tracking target in the image must be selected manually, and tracking cannot start automatically once a specific target is recognized. This does not satisfy monitoring requirements.
Disclosure of Invention
The embodiment of the application provides a monitoring system, a monitoring method, a mobile platform and remote equipment.
The monitoring system of the embodiment of the application comprises:
a remote device comprising a first processor;
a mobile platform in wireless communication with the remote device, the mobile platform comprising a second processor;
the first processor is used for acquiring a monitoring image, identifying a monitoring target in the monitoring image according to a preset identifier and sending the monitoring target to the mobile platform;
the second processor is used for controlling the mobile platform to track the monitoring target.
The monitoring method is used for the mobile platform. The monitoring method comprises the following steps:
acquiring a monitoring target sent by remote equipment, wherein the remote equipment is in wireless communication with the mobile platform, and the monitoring target is identified by the remote equipment from a monitoring image according to a preset identifier;
and controlling the mobile platform to track the monitoring target.
The mobile platform of the embodiment of the application comprises a processor configured to acquire a monitoring target sent by a remote device, the remote device being in wireless communication with the mobile platform; and to control the mobile platform to track the monitoring target, wherein the monitoring target is identified by the remote device from a monitoring image according to a preset identifier.
The monitoring method is used for the remote equipment. The monitoring method comprises the following steps:
acquiring a monitoring image;
identifying a monitoring target in the monitoring image according to a preset identifier;
and sending the monitoring target to the mobile platform so that the mobile platform tracks the monitoring target.
The remote device of the embodiment of the application comprises a processor configured to acquire a monitoring image; to identify a monitoring target in the monitoring image according to a preset identifier; and to send the monitoring target to the mobile platform so that the mobile platform tracks the monitoring target.
According to the monitoring system, the monitoring method, the mobile platform and the remote equipment, the monitoring target can be identified through the preset identification, and then the mobile platform is controlled to track the monitoring target, so that monitoring and tracking of the specific target are achieved. Meanwhile, the monitoring target is identified by the remote equipment, so that the identification efficiency can be improved, and the resources of the mobile platform can be saved.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic block diagram of a monitoring system according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a monitoring method for a mobile platform according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a monitoring method for a remote device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of further modules of a monitoring system according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of a monitoring method for a remote device according to another embodiment of the present application;
FIG. 6 is a schematic flow chart of a monitoring method for a remote device according to yet another embodiment of the present application;
FIG. 7 is a schematic flow chart of a monitoring method for a mobile platform according to another embodiment of the present application;
FIG. 8 is a schematic flow chart of a monitoring method for a mobile platform according to yet another embodiment of the present application;
FIG. 9 is a schematic flow chart of a monitoring method for a remote device according to yet another embodiment of the present application;
FIG. 10 is a schematic flow chart of a monitoring method for a mobile platform according to yet another embodiment of the present application;
FIG. 11 is a schematic flow chart of a monitoring method for a mobile platform according to another embodiment of the present application;
FIG. 12 is a schematic flow chart of a monitoring method for a remote device according to another embodiment of the present application;
FIG. 13 is a schematic flow chart of a monitoring method for a mobile platform according to yet another embodiment of the present application;
FIG. 14 is a schematic flow chart of a monitoring method for a remote device according to yet another embodiment of the present application;
FIG. 15 is a schematic flow chart of a monitoring method for a mobile platform according to still another embodiment of the present application;
FIG. 16 is a schematic flow chart of a monitoring method for a remote device according to yet another embodiment of the present application;
FIG. 17 is a schematic flow chart of a monitoring method for a remote device according to another embodiment of the present application.
Description of the main element symbols:
monitoring system 1000, remote device 100, first processor 101, mobile platform 200, second processor 201.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative and are only for the purpose of explaining the present application and are not to be construed as limiting the present application.
In the description of the present application, it is to be understood that the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; may be mechanically connected, may be electrically connected or may be in communication with each other; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
The following disclosure provides many different embodiments or examples for implementing different features of the application. In order to simplify the disclosure of the present application, specific example components and arrangements are described below. Of course, they are merely examples and are not intended to limit the present application. Moreover, the present application may repeat reference numerals and/or letters in the various examples, such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. In addition, examples of various specific processes and materials are provided herein, but one of ordinary skill in the art may recognize applications of other processes and/or use of other materials.
Referring to fig. 1 to fig. 3, embodiments of the present application provide a monitoring system 1000, a monitoring method, a remote device 100 and a mobile platform 200.
The monitoring system 1000 of the embodiment of the present application includes a remote device 100 and a mobile platform 200 in wireless communication with the remote device 100. The remote device 100 comprises a first processor 101. The mobile platform 200 includes a second processor 201. The first processor 101 is configured to obtain a monitoring image, identify a monitoring target in the monitoring image according to a preset identifier, and send the monitoring target to the mobile platform 200; the second processor 201 is used for controlling the mobile platform 200 to track the monitoring target.
Referring to fig. 2, a monitoring method according to an embodiment of the present application is applied to the mobile platform 200. The monitoring method for the mobile platform 200 includes:
step S23: acquiring a monitoring target sent by the remote equipment 100, wherein the remote equipment 100 is in wireless communication with the mobile platform 200, and the monitoring target is identified by the remote equipment 100 from a monitoring image according to a preset identifier;
step S25: and controlling the mobile platform 200 to track the monitoring target.
The mobile platform 200 of the embodiment of the application includes a second processor 201, where the second processor 201 is configured to acquire a monitoring target sent by the remote device 100, and the remote device 100 and the mobile platform 200 perform wireless communication; and for controlling the mobile platform 200 to track a monitored target, which is recognized from the monitored image by the remote device 100 according to the preset identifier.
Referring to fig. 3, the monitoring method according to the embodiment of the present application is applied to the remote device 100. The monitoring method for the remote device 100 includes:
step S13: acquiring a monitoring image;
step S16: identifying a monitoring target in the monitoring image according to a preset identifier;
step S19: the monitoring target is transmitted to the mobile platform 200 so that the mobile platform 200 tracks the monitoring target.
The remote device 100 of the embodiment of the present application includes a first processor 101 configured to obtain a monitoring image; to identify a monitoring target in the monitoring image according to a preset identifier; and to send the monitoring target to the mobile platform 200, so that the mobile platform 200 tracks the monitoring target.
The monitoring system 1000, the monitoring method, the mobile platform 200 and the remote device 100 according to the embodiment of the application can identify the monitoring target through the preset identifier, and then control the mobile platform 200 to track the monitoring target, thereby realizing monitoring and tracking of the specific target. Meanwhile, the monitoring target is identified by the remote device 100, which can improve the identification efficiency and save the resources of the mobile platform 200.
Specifically, the monitoring image may be collected by the mobile platform 200 and transmitted to the remote device 100. For example, the mobile platform 200 includes a camera, and the mobile platform 200 transmits the monitoring image captured by the camera to the remote device 100.
The monitoring image may also be acquired by the remote device 100 from other sources. For example, the remote device 100 includes a camera, and the camera of the remote device 100 captures a monitoring image, identifies a monitoring target in the monitoring image according to a preset identifier, and sends the monitoring target to the mobile platform 200, so that the mobile platform 200 tracks the monitoring target.
The monitoring image may also be transmitted to the remote device 100 by devices other than the mobile platform 200 and the remote device 100. For example, the camera of the other terminal takes a monitoring image and transmits the monitoring image to the remote apparatus 100. Other terminals include, but are not limited to, cell phones, tablets, wearable smart devices, and the like. The specific source of the monitoring image is not limited herein.
The preset identifier comprises at least one of a human face, numbers, and characters. Specifically, the preset identifier may include a face, numbers, characters, or a combination of two or three of the face, the numbers, and the characters. The characters may include various recognizable characters in Chinese, English, and other languages. The specific form of the preset identifier is not limited herein. In one example, the preset identifier may include a license plate composed of numbers and/or words.
In step S19, sending the monitoring target to the mobile platform 200, which may include intercepting an image of the monitoring target from the monitoring image according to the monitoring target, and sending the image of the monitoring target to the mobile platform 200; or may include determining the location of the monitored target based on the monitored image and transmitting the location of the monitored target to the mobile platform 200. The specific manner of transmitting the monitoring target to the mobile platform 200 is not limited herein.
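The two options for step S19 described above (intercepting a sub-image of the target versus sending only its position) can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the `Detection` structure and function names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Recognition result: the frame (bounding box) of the monitoring
    target within the monitoring image, in pixel coordinates."""
    x: int       # top-left column of the target's frame
    y: int       # top-left row of the target's frame
    width: int   # frame width (size information)
    height: int  # frame height (size information)

def crop_target(image, det):
    """Option 1: intercept the image of the monitoring target from the
    monitoring image and send that sub-image to the mobile platform."""
    return [row[det.x:det.x + det.width]
            for row in image[det.y:det.y + det.height]]

def target_center(det):
    """Option 2: reduce the detection to a position (the pixel center of
    the target's frame) and send only that position."""
    return (det.x + det.width // 2, det.y + det.height // 2)
```

Either payload suffices for tracking; sending only the position is far cheaper over the wireless link, while sending the sub-image lets the mobile platform re-verify the target locally.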
In the present embodiment, the mobile platform 200 includes at least one of a drone, a robot, and a vehicle. The remote device 100 includes at least one of a server, a cell phone, a tablet, a remote control, and a wearable smart device.
In one example, the mobile platform 200 includes a drone and a robot, the remote device 100 includes a server; in another example, the mobile platform 200 comprises a drone, the remote device 100 comprises a server and a cell phone; in yet another example, the mobile platform 200 includes a drone, a robot, and a vehicle, the remote device 100 includes a wearable smart device; in yet another example, the mobile platform 200 comprises a drone and the remote device 100 comprises a remote control.
Further, the number of mobile platforms 200 may be one or more, such as 2, 3, 4, or 5. When the number of the mobile platforms 200 is plural, the plural mobile platforms 200 may be the same or different in kind.
Similarly, the number of remote devices 100 may be one or more, such as 2, 3, 4, or 5. When the number of the remote apparatuses 100 is plural, the kinds of the plural remote apparatuses 100 may be the same or different.
The specific form and number of mobile platforms 200 and remote devices 100 are not limited herein. For ease of explanation and illustration, the following description takes as an example a mobile platform 200 comprising a drone and a remote device 100 comprising a server.
In this embodiment, remote device 100 establishes a cellular network connection with mobile platform 200. As such, wireless communication of remote device 100 with mobile platform 200 may be enabled via a cellular network. Moreover, the cellular network has a strong self-configuration, self-organization and self-healing capability, which can improve the reliability of wireless communication between the remote device 100 and the mobile platform 200.
Further, the remote device 100 establishes the cellular network connection with the mobile platform 200 through a 5th-generation (5G) mobile communication base station. In this way, the delay between the remote device 100 and the mobile platform 200 can be reduced, so that the communication effect is better.
Specifically, compared with 4G communication, the average delay of 5G communication is reduced from 40 ms to 1-2 ms, and the data transmission speed is increased from 0.02-0.03 Gbps to 0.1-5.0 Gbps, so that low-latency, high-definition backhaul of the monitoring image can be realized.
Referring to fig. 4, in one example, the camera 202 of the mobile platform 200 is used to capture a monitoring image. The 5G module 2011 of the second processor 201 sends the monitor image to the identification module 1011 of the first processor 101. The recognition module 1011 includes, but is not limited to, a face recognition module.
The identification module 1011 of the first processor 101 performs identification according to the preset identifier, determines that the identification is successful when the monitored target in the monitored image is identified, and sends the identification result to the mobile platform 200. The preset identifier includes, but is not limited to, a preset face identifier. The recognition results include, but are not limited to, the monitored target and related information. The related information includes, but is not limited to, position information and size information of a frame of the identified monitoring target in the monitored image. The size information is, for example, the length and width of the frame where the monitoring target is located.
The target estimation module 2012 of the second processor 201 calculates target data of the monitoring target according to the monitoring target, the related information, and the current data of the mobile platform 200, and sends the target data to the planning control module 2013 of the second processor 201. The current data of the mobile platform 200 includes, but is not limited to, the current position and posture of the mobile platform 200, the cradle head posture, and the data of the binocular camera. Target data includes, but is not limited to, the position and velocity of the monitored target.
The planning control module 2013 of the second processor 201 determines a control instruction of the mobile platform 200 according to the target data and the obstacle information, and sends the control instruction to the motion control module 2014 of the second processor 201 for execution, so that the mobile platform 200 follows the monitored target to move, and avoids the obstacle encountered in the tracking process. Among other things, the obstacle information may be based on map information established from data of a sensing device of the mobile platform 200, such as a binocular camera, a monocular camera, and the like. Motion control module 2014 includes, but is not limited to, a flight control module.
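The module chain just described (camera 202 → 5G module 2011 → recognition module 1011 → target estimation module 2012 → planning control module 2013 → motion control module 2014) can be sketched as below. This is a hypothetical skeleton of the data flow only; all class and method names, and the trivial estimation and planning logic, are illustrative assumptions rather than the patent's implementation.

```python
class RecognitionModule:
    """Runs on the remote device (first processor 101). Matches the preset
    identifier against detections in a frame and, on success, returns the
    monitoring target plus related information (frame position and size)."""

    def __init__(self, preset_identifier):
        self.preset = preset_identifier

    def recognize(self, frame):
        for box in frame["detections"]:  # assume an upstream detector filled these in
            if box["identifier"] == self.preset:
                return {"target": box["identifier"],
                        "info": {"pos": box["pos"], "size": box["size"]}}
        return None  # recognition failed: no result sent to the mobile platform


class TargetEstimationModule:
    """Runs on the mobile platform (second processor 201). Fuses the
    recognition result with the platform's current data to produce target
    data (position and velocity); here a trivial placeholder fusion."""

    def estimate(self, result, platform_state):
        px, py = result["info"]["pos"]
        ox, oy = platform_state["position"]
        return {"position": (ox + px, oy + py), "velocity": (0.0, 0.0)}


class PlanningControlModule:
    """Turns target data plus obstacle information into a control
    instruction for the motion (e.g. flight) control module."""

    def plan(self, target_data, obstacles):
        if target_data["position"] in obstacles:
            return {"cmd": "hold"}  # avoid the obstacle rather than fly into it
        return {"cmd": "goto", "position": target_data["position"]}
```

In the real system the estimation step would use the platform pose, gimbal attitude, and binocular-camera depth, and planning would search a map built from the sensing devices; the sketch only shows which module consumes which data.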
In addition, during tracking, the mobile platform 200 may send information such as the position of the monitoring target, the position of the mobile platform 200, the speed of the monitoring target, the speed of the mobile platform 200, and a map of the surroundings of the mobile platform 200 to the remote device 100 through the 5G module 2011, so as to close the loop of the monitoring system 1000. In this way, the remote device 100 can further determine from this information whether the mobile platform 200 has actually tracked the monitoring target.
The above example, utilizing a 5G communications network, enables wireless communication of mobile platform 200 with remote device 100. The mobile platform 200 transmits the monitoring image back to the remote device 100, and performs planning and control by using the recognition result of the remote device 100, thereby implementing online tracking of the monitoring target.
In the above example, the total time from exposure of the camera 202 of the mobile platform 200 to computation of the control instructions by the planning control module 2013 of the second processor 201, including the time taken by machine-learning image recognition, link delay, and so on, is about 100 ms. Therefore, the link delay of 5G has substantially no influence on the control of the mobile platform 200, which facilitates following control of the mobile platform 200 using the recognition result of the remote device 100.
In the present embodiment, the 5G networking mode is standalone (SA) networking. In SA networking, a dedicated end-to-end 5G network is deployed, so the communication delay is low.
It is understood that in other embodiments, the 5G networking mode may be non-standalone (NSA) networking. In NSA networking, some services and functions of the 5G network continue to depend on the 4G network, so no new 5G core network needs to be established and the cost is lower.
In some embodiments, the first processor 101 is configured to acquire an initial image, and to recognize the identifier in the initial image as the preset identifier.
Referring to fig. 5, the monitoring method for the remote device 100 further includes:
step S11: acquiring an initial image;
step S12: identifying the identifier in the initial image as the preset identifier.
Therefore, the acquisition of the preset identification is realized by identifying the initial image, so that the monitoring target in the monitoring image can be identified according to the preset identification subsequently.
Similarly, in step S11, the initial image may be captured by the mobile platform 200 and transmitted to the remote device 100. For example, the user takes a self-portrait through the mobile platform 200 so that the mobile platform 200 collects an initial image and sends it to the remote device 100; the remote device 100 then recognizes the user's face in the initial image, so that the mobile platform 200 can subsequently follow the user.
It will be appreciated that the initial image may also be acquired by the remote device 100 itself. The initial image may also be transmitted to the remote device 100 by devices other than the mobile platform 200 and the remote device 100. The specific source of the initial image is not limited herein.
In step S12, in the case where the identifier is recognized from the initial image, the recognized identifier may be taken as a preset identifier; in case the identity is not recognized from the initial image, the remote device 100 or the mobile platform 200 is controlled to output a prompt message. Therefore, the user can be prompted to replace or adjust the initial image in time, and the situation that the preset identification cannot be obtained due to the fact that the identification cannot be recognized in the initial image is avoided.
Specifically, the prompt message includes, but is not limited to, text message, voice message, vibration message, and light message. The specific form of the prompt message is not limited herein.
In one example, the prompt message includes voice information, and in the event that the identification is not recognized from the initial image, the remote device 100 is controlled to broadcast: "no identification is recognized from the image, please replace or adjust the image".
Note that, in the example of fig. 5, steps S11-S12 are performed before step S13. It is understood that, in other examples, steps S11-S12 may be performed after step S13, and steps S11-S12 may be performed simultaneously with step S13. The specific execution order is not limited herein.
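The enrollment flow of steps S11-S12, including the prompt-message fallback when no identifier is recognized, can be sketched as follows. The callback names `detect_identifiers` and `notify` are illustrative assumptions standing in for the recognition module and the prompt output (text, voice, vibration, or light).

```python
def enroll_preset_identifier(initial_image, detect_identifiers, notify):
    """Steps S11-S12: acquire an initial image; if an identifier is
    recognized in it, take it as the preset identifier; otherwise output
    a prompt message so the user replaces or adjusts the image."""
    found = detect_identifiers(initial_image)
    if found:
        return found[0]  # the recognized identifier becomes the preset identifier
    notify("no identification is recognized from the image, "
           "please replace or adjust the image")
    return None  # no preset identifier obtained; caller retries with a new image
```

Returning `None` rather than raising keeps the retry loop on the caller's side, matching the behavior of prompting the user and waiting for a replacement image.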
In some embodiments, the first processor 101 is configured to identify a target identifier from the monitoring image according to the preset identifier, and to determine the target corresponding to the target identifier as the monitoring target.
Referring to fig. 6, step S16 includes:
step S161: identifying a target identifier from the monitoring image according to a preset identifier;
step S162: and determining the target corresponding to the target identification as the monitoring target.
Therefore, the monitoring target in the monitoring image is identified according to the preset identification, and the accuracy is high. Specifically, step S161 includes: identifying a mark in the monitoring image; and matching the preset identifier with the identifier in the monitoring image to determine the target identifier. Therefore, the target identification is recognized from the monitored image according to the preset identification in a matching mode, and the accuracy of recognizing the target identification can be guaranteed.
Further, matching the preset identifier with the identifier in the monitoring image to determine the target identifier includes: under the condition that the number of the identifiers in the monitored image is 1, determining the matching degree of the identifiers in the monitored image and a preset identifier; and taking the mark in the monitored image as a target mark under the condition that the matching degree is greater than a preset matching degree threshold value.
Therefore, whether the mark in the monitoring image is matched with the preset mark or not is determined through the preset matching degree threshold, the standard is clear, and the target mark recognition efficiency is high.
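The single-identifier case above reduces to one threshold comparison. A minimal sketch, assuming a `similarity` function that returns a matching degree in [0, 1] (the patent does not specify how the matching degree is computed):

```python
def match_single(preset, candidate, similarity, threshold=0.8):
    """Single-identifier case: when the monitoring image contains exactly
    one identifier, accept it as the target identifier only if its matching
    degree with the preset identifier exceeds the preset threshold."""
    degree = similarity(preset, candidate)
    return candidate if degree > threshold else None
```

The 0.8 default mirrors the 80% threshold used in the worked examples later in this section; in practice the threshold is a tunable preset.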
Further, matching the preset identifier with the identifier in the monitoring image to determine the target identifier includes: under the condition that the number of the identifiers in the monitoring image is multiple, matching the preset identifiers with each identifier in the monitoring image one by one to obtain the matching degree of the preset identifiers and each identifier in the monitoring image; and taking the mark with the highest matching degree in the monitored image as a target mark.
Therefore, the target identification is the identification with the highest matching degree with the preset identification in the monitored image, the identification with the non-highest matching degree can be avoided being used as the target identification, and the accuracy of identifying the target identification is improved.
In one example, the monitoring image includes 4 identifiers: the matching degree between the preset identifier and the first identifier in the monitoring image is 10%, with the second identifier 40%, with the third identifier 50%, and with the fourth identifier 70%. The identifier with the highest matching degree in the monitoring image is therefore the fourth identifier, which can be used as the target identifier.
Further, matching the preset identifier with the identifier in the monitoring image to determine the target identifier includes: and under the condition that the matching degree corresponding to the identifier with the highest matching degree in the monitored image is greater than a preset matching degree threshold value, taking the identifier with the highest matching degree in the monitored image as a target identifier.
Therefore, the matching degree of the target identification and the preset identification is greater than the preset matching degree threshold value, and the accuracy of identifying the target identification is guaranteed.
In one example, the monitoring image includes 4 markers, and the preset matching degree threshold is 80%. The matching degree of the preset identifier and the first identifier in the monitored image is 10%, the matching degree of the preset identifier and the second identifier in the monitored image is 40%, the matching degree of the preset identifier and the third identifier in the monitored image is 50%, and the matching degree of the preset identifier and the fourth identifier in the monitored image is 90%. And if the identifier with the highest matching degree in the monitored image is the fourth identifier, and the matching degree corresponding to the fourth identifier is greater than the matching degree threshold value, the fourth identifier can be used as the target identifier.
In another example, the monitoring image includes 4 markers, and the preset matching degree threshold is 80%. The matching degree of the preset mark and the first mark in the monitored image is 10%, the matching degree of the preset mark and the second mark in the monitored image is 40%, the matching degree of the preset mark and the third mark in the monitored image is 50%, and the matching degree of the preset mark and the fourth mark in the monitored image is 70%. And if the identifier with the highest matching degree in the monitored image is the fourth identifier, and the matching degree corresponding to the fourth identifier is smaller than the matching degree threshold value, not taking the fourth identifier as the target identifier.
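The threshold-based selection described in the examples above can be sketched as follows. This is an illustrative sketch only: it assumes the per-identifier matching degrees have already been computed by some matcher, and the function name is not from the original text.

```python
# Select the target identifier: the best-matching identifier qualifies only
# if its matching degree exceeds the preset matching-degree threshold.
def select_target_identifier(match_scores, threshold=0.8):
    """Return the index of the best-matching identifier, or None if the
    best score does not exceed the preset matching-degree threshold."""
    if not match_scores:
        return None
    best_index = max(range(len(match_scores)), key=lambda i: match_scores[i])
    if match_scores[best_index] > threshold:
        return best_index
    return None

# Scores 10%, 40%, 50%, 90% with an 80% threshold: the fourth qualifies.
print(select_target_identifier([0.1, 0.4, 0.5, 0.9]))  # -> 3
# With a best score of 70%, no identifier qualifies.
print(select_target_identifier([0.1, 0.4, 0.5, 0.7]))  # -> None
```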
Of course, it is understood that in some embodiments there may be more than one preset identifier. Optionally, when a plurality of preset identifiers are provided, if one of them matches an identifier in the monitoring image, that identifier may be used as the target identifier, and the target corresponding to it may be used as the monitoring target.
In some embodiments, the second processor 201 is configured to control the mobile platform 200 to approach the monitoring target when the distance between the monitoring target and the mobile platform 200 is greater than or equal to a preset distance, so that the distance between them becomes smaller than the preset distance.
Referring to fig. 7, step S25 includes:
step S250: when the distance between the monitoring target and the mobile platform 200 is greater than or equal to the preset distance, controlling the mobile platform 200 to approach the monitoring target so that the distance becomes smaller than the preset distance.
In this way, keeping the distance between the monitoring target and the mobile platform 200 below the preset distance prevents the monitoring target from being lost because the distance has grown too large, and guarantees continuous tracking of the monitoring target. It also avoids the monitoring target occupying too small a region of the monitoring image captured by the mobile platform 200, and the monitoring picture becoming blurred, because the distance is too large, thereby improving the monitoring effect.
In addition, step S25 may further include: when the distance between the monitoring target and the mobile platform 200 is smaller than a preset safety distance, controlling the mobile platform 200 to move away from the monitoring target so that the distance becomes larger than the safety distance, where the safety distance is smaller than the preset distance.
This prevents the mobile platform 200 from being discovered because it is too close to the monitoring target. Moreover, if the monitoring target switches from fast motion to a sudden stop, the safety distance gives the mobile platform 200 room to brake, avoiding the safety hazard of the mobile platform 200 colliding with the monitoring target.
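A minimal sketch of this distance-band logic: approach when at or beyond the preset distance, retreat when inside the safety distance, otherwise hold. The command names and the concrete distances are illustrative assumptions, not part of the original method.

```python
# Keep the platform inside the band (safe_distance, preset_distance).
def distance_command(distance, safe_distance, preset_distance):
    """Return a motion command keeping safe_distance < distance < preset_distance."""
    assert safe_distance < preset_distance
    if distance >= preset_distance:
        return "approach"   # close the gap to avoid losing the target
    if distance < safe_distance:
        return "retreat"    # back off to stay covert and avoid collision
    return "hold"           # distance is inside the allowed band

print(distance_command(120.0, safe_distance=10.0, preset_distance=100.0))  # -> approach
print(distance_command(5.0, safe_distance=10.0, preset_distance=100.0))    # -> retreat
print(distance_command(50.0, safe_distance=10.0, preset_distance=100.0))   # -> hold
```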
In some embodiments, the first processor 101 is configured to generate obstacle information according to the monitoring image, and send the obstacle information to the mobile platform 200; the second processor 201 is configured to control the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target.
Referring to fig. 8, the monitoring method for the mobile platform 200 further includes:
step S241: acquiring obstacle information generated by the remote device 100 according to the monitoring image;
step S25 includes:
step S251: and controlling the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target.
In some embodiments, the second processor 201 is configured to obtain obstacle information generated by the remote device 100 according to the monitoring image; and for controlling the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target.
Referring to fig. 9, the monitoring method for the remote device 100 further includes:
step S171: generating obstacle information according to the monitoring image and sending the obstacle information to the mobile platform 200, so that the mobile platform 200 tracks the monitoring target according to the obstacle information and the position of the monitoring target.
In some embodiments, the first processor 101 is configured to generate obstacle information according to the monitoring image, and send the obstacle information to the mobile platform 200, so that the mobile platform 200 tracks the monitoring target according to the obstacle information and the position of the monitoring target.
In this way, the mobile platform 200 tracks the monitoring target according to the obstacle information that the remote device 100 generates from the monitoring image, achieving obstacle avoidance during tracking; this prevents the mobile platform 200 from colliding with an obstacle while tracking, which could damage the platform or lose the tracked target, and helps ensure smooth tracking. Also, since the obstacle information is generated by the remote device 100, the efficiency of generating it can be improved and the resources of the mobile platform 200 saved.
In step S171, generating obstacle information from the monitoring image includes: modeling according to the monitoring image to generate a map, and determining the obstacle information from the map. The modeling may include 3D modeling, and the map may include a 3D map.
In step S251, controlling the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target includes: determining a monitoring route according to the obstacle information and the position of the monitoring target, and controlling the mobile platform 200 to track the monitoring target along the monitoring route. This makes the monitoring route of the mobile platform 200 reliable, which helps ensure the safety of the mobile platform.
Specifically, the obstacle information includes at least one of the position of the obstacle and the range of the obstacle. This makes the obstacle information rich and helps improve the obstacle avoidance effect of the mobile platform 200 during tracking.
Further, the position of the obstacle may include the direction and distance of the obstacle relative to the mobile platform 200, which together determine the obstacle's position relative to the mobile platform 200 accurately.
Further, the range of the obstacle may include its length range, width range, and height range. Describing the obstacle's range in three dimensions is more accurate and helps improve the obstacle avoidance effect of the mobile platform 200 during tracking.
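The obstacle information described above (a direction and distance relative to the platform, plus a three-dimensional extent) could be represented as follows. The field names and the crude clearance estimate are assumptions made for this sketch, not part of the original method.

```python
# One obstacle-information record: position (bearing + distance) and 3D range.
from dataclasses import dataclass

@dataclass
class ObstacleInfo:
    bearing_deg: float      # direction of the obstacle relative to the platform
    distance_m: float       # distance from the platform to the obstacle
    length_m: float         # extent of the obstacle along its length
    width_m: float          # extent along its width
    height_m: float         # extent along its height

    def clearance(self, margin_m: float) -> float:
        """Distance remaining once the obstacle's largest half-extent and a
        safety margin are subtracted (a crude spherical approximation)."""
        half_extent = max(self.length_m, self.width_m, self.height_m) / 2.0
        return self.distance_m - half_extent - margin_m

tree = ObstacleInfo(bearing_deg=30.0, distance_m=12.0,
                    length_m=2.0, width_m=2.0, height_m=8.0)
print(tree.clearance(margin_m=1.0))  # -> 7.0
```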
Note that, in the example of fig. 8, step S241 is executed after step S23. It is understood that in other examples, step S241 may be performed before step S23 or simultaneously with step S23. The specific execution order is not limited herein.
Note that, in the example of fig. 9, step S171 is performed after step S16. It is understood that in other examples, step S171 may be performed before step S16, and step S171 may also be performed simultaneously with step S16. The specific execution order is not limited herein.
In some embodiments, the second processor 201 is configured to obtain a monitoring image and generate obstacle information according to the monitoring image; and for controlling the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target.
Referring to fig. 10, the monitoring method for the mobile platform 200 further includes:
step S242: acquiring a monitoring image and generating obstacle information according to the monitoring image;
step S25 includes:
step S252: and controlling the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target.
In this way, the mobile platform 200 generates the obstacle information from the monitoring image and tracks the monitoring target accordingly, achieving obstacle avoidance during tracking; this prevents the mobile platform 200 from colliding with an obstacle while tracking, which could damage the platform or lose the tracked target, and helps ensure smooth tracking.
Moreover, since the obstacle information is generated by the mobile platform 200 itself, it does not need to be acquired from outside over wireless communication; this improves the security of the obstacle information and avoids obstacle avoidance failures caused by the obstacle information being tampered with in transit.
Note that, in the example of fig. 10, step S242 is performed after step S23. It is understood that in other examples, step S242 may be performed before step S23, and step S242 may be performed simultaneously with step S23. The specific execution order is not limited herein.
Please note that, the explanation and description of this part can refer to the part of the obstacle information generated by the remote device 100 according to the monitoring image in the foregoing, and the details are not repeated here to avoid redundancy.
In some embodiments, the mobile platform 200 includes a binocular camera, and the monitoring images are captured by the binocular camera. The monitoring image then contains depth information, so the obstacle information generated from it is richer, which helps improve the obstacle avoidance effect.
Additionally, in some other embodiments, the mobile platform 200 may include a structured-light camera.
In still other embodiments, the mobile platform 200 may include a time-of-flight (ToF) camera.
In any of these ways, depth information can be acquired, making the obstacle information generated from the monitoring image richer and helping improve the obstacle avoidance effect. The specific manner in which the mobile platform 200 collects depth information is not limited herein.
In some embodiments, the first processor 101 is configured to obtain a geographic location of the mobile platform 200; the map corresponding to the geographic position is obtained according to the geographic position; and is configured to generate obstacle information according to the map, and send the obstacle information to the mobile platform 200; the second processor 201 is configured to control the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target.
Referring to fig. 11, the monitoring method for the mobile platform 200 further includes:
step S243: collecting the geographic position of the mobile platform 200 and sending the geographic position to the remote device 100, so that the remote device 100 acquires a map corresponding to the geographic position according to the geographic position and generates obstacle information according to the map;
step S244: acquiring obstacle information transmitted by the remote device 100;
step S25 includes:
step S254: and controlling the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target.
In some embodiments, the second processor 201 is configured to acquire a geographic position of the mobile platform 200 and transmit the geographic position to the remote device 100, so that the remote device 100 acquires a map corresponding to the geographic position according to the geographic position and generates obstacle information according to the map; and for obtaining obstacle information sent by the remote device 100; and for controlling the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target.
Referring to fig. 12, the monitoring method for the remote device 100 further includes:
step S172: obtaining a geographic location of the mobile platform 200;
step S173: acquiring a map corresponding to the geographic position according to the geographic position;
step S174: and generating obstacle information according to the map, and sending the obstacle information to the mobile platform 200, so that the mobile platform 200 tracks the monitoring target according to the obstacle information and the position of the monitoring target.
In some embodiments, the first processor 101 is configured to obtain a geographic location of the mobile platform 200; the map corresponding to the geographic position is obtained according to the geographic position; and is configured to generate obstacle information according to the map, and send the obstacle information to the mobile platform 200, so that the mobile platform 200 tracks the monitoring target according to the obstacle information and the position of the monitoring target.
Thus, the mobile platform 200 tracks the monitored target according to the obstacle information generated by the remote device 100 based on the map, and realizes obstacle avoidance in the tracking process, so that damage or loss of the tracked target due to collision with the obstacle in the tracking process of the mobile platform 200 can be avoided, and smooth tracking is ensured. Moreover, since the obstacle information is generated by the remote device 100 based on the map, the accuracy is high, and the obstacle avoidance effect can be improved.
Specifically, in step S172, the mobile platform 200 may be positioned using 5G to obtain its geographic location. Further, the mobile platform 200 may be positioned using 5G base stations. In this way, the geographic location of the mobile platform 200 can be obtained accurately. Moreover, positioning via 5G base stations requires no separate positioning hardware, and the positioning result is difficult to tamper with or disrupt, which reduces monitoring cost and improves the reliability of the positioning result.
It is understood that in other embodiments, the mobile platform 200 may also include a Global Positioning System (GPS) module or a BeiDou Navigation Satellite System (BDS) module, and in step S172, the GPS module may be utilized to position the mobile platform 200 to obtain the geographic position of the mobile platform 200. Thus, the geographic position of the mobile platform 200 can be accurately obtained.
The specific manner in which the geographic location of the mobile platform 200 is obtained is not limited herein.
In step S173, a map within a circular region centered on the geographic location of the mobile platform 200 with a radius of a first preset length may be acquired. The geographic location of the mobile platform 200 then lies at the center of the acquired map, so the obstacle information generated from the map can fully reflect the actual environment around the mobile platform 200, which helps improve the obstacle avoidance effect.
Further, the first preset length may be equal to or greater than the distance between the mobile platform 200 and the monitoring target. The obstacle information can then fully reflect the actual environment along the path by which the mobile platform 200 tracks the monitoring target, which helps improve the obstacle avoidance effect.
In step S173, a map within a rectangular region whose length is the distance between the geographic location and the monitoring target and whose width is a second preset length may also be acquired. The specific manner of acquiring the map corresponding to the geographic location is not limited herein.
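The circular map-region variant of step S173 can be sketched as below. For simplicity the sketch uses planar coordinates in metres rather than geographic coordinates, and the function name is an assumption.

```python
# Map region for step S173: centered on the platform, with a radius at least
# the first preset length and at least the platform-to-target distance.
import math

def circular_map_region(platform_xy, target_xy, min_radius=0.0):
    """Return (center, radius) of the map region to fetch: centered on the
    platform, radius no smaller than the distance to the target."""
    dx = target_xy[0] - platform_xy[0]
    dy = target_xy[1] - platform_xy[1]
    distance = math.hypot(dx, dy)
    return platform_xy, max(distance, min_radius)

# Target 50 m away: the radius grows to cover the whole tracking path.
center, radius = circular_map_region((0.0, 0.0), (30.0, 40.0), min_radius=20.0)
print(center, radius)  # -> (0.0, 0.0) 50.0
```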
Note that, in the example of fig. 11, steps S243 to S244 are performed after step S23. It is understood that, in other examples, the steps S243 to S244 may be performed before the step S23, and the steps S243 to S244 may be performed simultaneously with the step S23. The specific execution order is not limited herein.
Note that, in the example of fig. 12, steps S172 to S174 are performed after step S16. It is understood that, in other examples, the steps S172 to S174 may be performed before the step S16, and the steps S172 to S174 may be performed simultaneously with the step S16. The specific execution order is not limited herein.
Referring to fig. 13, the monitoring method for the mobile platform 200 further includes:
step S245: collecting the geographic location of the mobile platform 200;
step S246: acquiring a map corresponding to the geographic position according to the geographic position;
step S247: generating obstacle information according to the map;
step S25 includes:
step S257: and controlling the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target.
Correspondingly, the second processor 201 is configured to acquire the geographic location of the mobile platform 200; the map corresponding to the geographic position is obtained according to the geographic position; and for generating obstacle information from the map; and for controlling the mobile platform 200 to track the monitoring target according to the obstacle information and the position of the monitoring target.
In this way, the mobile platform 200 tracks the monitoring target according to obstacle information generated from the map, achieving obstacle avoidance during tracking; this prevents the mobile platform 200 from colliding with an obstacle while tracking, which could damage the platform or lose the tracked target, and helps ensure smooth tracking. Moreover, because the obstacle information is generated from a map, it is more accurate, which can improve the obstacle avoidance effect.
In addition, since the obstacle information is generated by the mobile platform 200, the obstacle information does not need to be acquired from the outside through wireless communication, the safety of the obstacle information can be improved, and the obstacle avoidance failure caused by tampering of the obstacle information in the transmission process is avoided.
Note that, in the example of fig. 13, steps S245 to S247 are performed after step S23. It is understood that, in other examples, steps S245 to S247 may be performed before step S23, and steps S245 to S247 may be performed simultaneously with step S23. The specific execution order is not limited herein.
Please note that, the explanation and description of this part can refer to the part of the obstacle information generated by the remote device 100 according to the map in the foregoing, and the details are not repeated here to avoid redundancy.
In some embodiments, the monitoring image includes a first monitoring image and a second monitoring image;
the first processor 101 is configured to identify a feature of the monitoring target from the first monitoring image, where the feature of the monitoring target is different from the preset identifier; the correlation between the characteristics of the monitoring target and the preset identification is formed; and the monitoring target is identified from the second monitoring image according to the characteristics and the relevance of the monitoring target.
Referring to fig. 14, the monitoring image includes a first monitoring image and a second monitoring image, and the monitoring method for the remote device 100 further includes:
step S14: identifying the characteristics of the monitoring target from the first monitoring image, wherein the characteristics of the monitoring target are different from the preset identification;
step S15: forming the relevance between the characteristics of the monitoring target and the preset identification;
step S16 includes:
step S161: and identifying the monitoring target from the second monitoring image according to the characteristics and the relevance of the monitoring target.
In this way, even when the monitoring target cannot be recognized according to the preset identifier, it can still be recognized through its features and the association, which avoids losing or mis-monitoring the target due to reliance on a single recognition method and helps improve the monitoring effect.
Specifically, the preset identifier includes at least one of a human face, a number, and a character, and the features of the monitoring target include at least one of a color, a joint, and a physical position. The specific forms of the preset identifier and of the features of the monitoring target are not limited herein.
It can be understood that, in the initial stage of monitoring, the monitoring target can be recognized from the initial monitoring image according to the preset identifier. However, as the monitoring target moves, the preset identifier may no longer be recognizable in subsequent monitoring images, or the wrong target may be tracked, so the monitoring target may be lost.
In one example, the monitoring image is captured by the mobile platform 200, and the preset identifier is a preset human face. The monitoring target and the accompanying person walk in parallel, the monitoring target wears red clothes, and the accompanying person wears black clothes.
In the initial stage of monitoring, the camera of the mobile platform 200 is facing the face of the monitoring target, so that the first monitoring image obtained by shooting includes the face of the monitoring target. In this way, the remote device 100 can recognize the monitoring target from the first monitoring image captured by the mobile platform according to the preset human face.
However, if the monitoring target and the accompanying person suddenly turn around, the camera of the mobile platform 200 faces their backs, and the captured second monitoring image includes the backs of the monitoring target and the accompanying person but not the face of the monitoring target. In this case, if the remote device 100 still tries to recognize the second monitoring image captured by the mobile platform according to the preset face identifier, the monitoring target cannot be recognized.
However, in the present embodiment, the above problem can be solved by forming the association between the feature of the monitoring target and the preset identifier and recognizing the monitoring target from the second monitoring image according to the feature and the association of the monitoring target.
In the above example, in the initial stage of monitoring, the camera of the mobile platform 200 directly faces the face of the monitoring target, so the captured first monitoring image includes that face. The remote device 100 can recognize the monitoring target from the first monitoring image according to the preset face, and also recognize a feature of the target distinct from the face identifier, namely the red clothes. The remote device 100 may then form an association between the red clothes and the preset face identifier.
If the monitoring target and the accompanying person suddenly turn around, the camera of the mobile platform 200 faces the back of the monitoring target, and the captured second monitoring image includes the target's back but not its face. The remote device 100 can then recognize the second monitoring image captured by the mobile platform 200 according to the association between the red clothes and the preset face identifier, taking the target in red clothes as the monitoring target. In this way the monitoring target can be tracked continuously, and monitoring is not interrupted because the target's face has moved out of the camera's field of view.
The features of the monitoring target may also include physical position. In the above example, where the monitoring target walks along the right side of the road, the remote device 100 may form an association between the right side of the road and the preset face identifier. If a passerby wearing red clothes on the left side of the road enters the shooting range, so that red clothes appear on both sides of the road in another second monitoring image, that image can be recognized according to the association between the right side of the road and the preset face identifier, taking the target on the right side of the road as the monitoring target. Because their physical positions differ, the monitoring target and the passerby can be distinguished and the monitoring target recognized.
The case of the features including joints is similar to the above case and will not be described in detail.
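The fallback recognition described in this example can be sketched as follows. The detector outputs, attribute names, and the rule that a fallback candidate must match all associated features are assumptions made for this sketch.

```python
# Recognize the monitoring target: try the preset identifier (a face) first,
# then fall back to the features previously associated with it.
def recognize_target(detections, preset_id, associations):
    """detections: one dict of detected attributes per candidate in the frame,
    e.g. {"face": "alice", "color": "red", "side": "right"}.
    Returns the index of the monitoring target, or None."""
    # First try the preset identifier (e.g. the preset face) directly.
    for i, det in enumerate(detections):
        if det.get("face") == preset_id:
            return i
    # Fallback: a candidate matching all features associated with the
    # preset identifier is taken as the monitoring target.
    for i, det in enumerate(detections):
        if associations and all(det.get(k) == v for k, v in associations.items()):
            return i
    return None

# Association learned from the first image: red clothes, right side of road.
assoc = {"color": "red", "side": "right"}
# Second image: backs only, no faces; the accompanying person wears black.
frame = [{"color": "black", "side": "right"}, {"color": "red", "side": "right"}]
print(recognize_target(frame, "alice", assoc))  # -> 1
```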
In some embodiments, the mobile platform 200 includes a zoom camera, and the second processor 201 is configured to control the zoom camera to zoom if the distance between the monitoring target and the mobile platform 200 is greater than a preset distance, so that the size of the monitoring target in the monitoring image acquired by the mobile platform 200 is greater than a preset size; and for transmitting the monitoring image to the remote device 100.
Referring to fig. 15, the mobile platform 200 includes a zoom camera, and the method for controlling the mobile platform 200 further includes:
step S21: when the distance between the monitoring target and the mobile platform 200 is greater than the preset distance, controlling the zoom camera to zoom so that the size of the monitoring target in the monitoring image acquired by the mobile platform 200 is greater than a preset size;
step S22: the monitoring image is transmitted to the remote device 100.
In this way, the zoom camera of the mobile platform 200 is zoomed to adjust the size of the monitoring target in the monitoring image acquired by the mobile platform 200, which is simple, convenient, and fast.
Specifically, in step S21, the current size of the monitoring target in the current monitoring image may be determined; a zoom parameter of the zoom camera may be determined according to the current size and the preset size; and the zoom camera may be controlled to zoom according to the zoom parameter. Controlling the zoom through the zoom parameter quickly makes the size of the monitoring target in the monitoring image acquired by the mobile platform 200 larger than the preset size.
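One way the zoom parameter of step S21 might be derived is sketched below, under the assumption that the target's linear on-image size scales roughly with the zoom factor (so its area scales with the square); the exact relation depends on the camera and is not specified in the original text.

```python
# Zoom factor that brings the target's on-image size up to the preset size.
import math

def zoom_factor(current_area_px, preset_area_px):
    """Zoom factor making the target's on-image area reach the preset area
    (area scales with the square of the linear zoom factor)."""
    if current_area_px >= preset_area_px:
        return 1.0  # already large enough; no zoom needed
    return math.sqrt(preset_area_px / current_area_px)

print(zoom_factor(2500.0, 10000.0))   # -> 2.0 (target area must quadruple)
print(zoom_factor(12000.0, 10000.0))  # -> 1.0
```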
In this embodiment, the size of the monitoring target is determined from the size of the frame in which the monitoring target lies in the monitoring image; the related information of that frame is obtained by the first processor 101 by identifying the monitoring target in the monitoring image, and the related information includes size information.
Determining the size of the monitoring target through the size of its frame avoids the difficulty of measuring an irregularly shaped target directly, so the size can be determined simply, and the mobile platform 200 can be controlled to track the monitoring target accordingly.
Specifically, the frame in which the monitoring target lies may be a rectangular frame, a square frame, a circular frame, an elliptical frame, or the like. Further, in each case the size of the frame is its area: the area of the rectangle, square, circle, or ellipse, respectively.
The specific form of the frame in which the monitoring target lies and the specific form of its size are not limited herein.
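The frame-size (area) computation for the frame shapes listed above could look like the following; the shape names and parameter names are assumptions made for this sketch.

```python
# Area of the bounding frame, used as the monitoring target's size.
import math

def frame_area(shape, **dims):
    """Area of the frame in which the monitoring target lies."""
    if shape == "rectangle":
        return dims["width"] * dims["height"]
    if shape == "square":
        return dims["side"] ** 2
    if shape == "circle":
        return math.pi * dims["radius"] ** 2
    if shape == "ellipse":
        return math.pi * dims["semi_major"] * dims["semi_minor"]
    raise ValueError(f"unknown frame shape: {shape}")

print(frame_area("rectangle", width=40, height=80))  # -> 3200
```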
In addition, the related information may further include position information, and the mobile platform 200 may be controlled to track the monitoring target according to the position information so that the frame in which the monitoring target lies is at a preset position in the monitoring image acquired by the mobile platform 200. This keeps the position of the frame stable in the monitoring image, which is convenient for monitoring personnel to view.
The specific content of the related information is not limited herein.
In other embodiments, the preset size includes, but is not limited to, a preset area of the monitoring target in the monitoring image, a preset length of the monitoring target in the monitoring image, a preset width of the monitoring target in the monitoring image, or a preset proportion of the monitoring target in the monitoring image. The specific form of the preset size is not limited herein.
Referring to fig. 16, in some embodiments, the monitoring method for the remote device 100 further includes:
step S181: identifying a monitoring target in the monitoring image through a frame, wherein at least part of the monitoring target is positioned in the frame;
step S182: sending the relevant information of the frame where the monitoring target is located in the monitoring image to the mobile platform 200, so that the mobile platform 200 controls a zoom camera on the mobile platform 200 to perform zoom shooting according to the relevant information and/or controls the mobile platform 200 to perform tracking of a preset composition on the monitoring target;
wherein the related information comprises size information and/or position information.
In some embodiments, the first processor 101 is configured to identify the monitoring target in the monitoring image with a frame, at least part of the monitoring target being located within the frame; and to send the related information of the frame in which the monitoring target lies to the mobile platform 200, so that the mobile platform 200 controls a zoom camera on the mobile platform 200 to perform zoom shooting according to the related information and/or tracks the monitoring target with a preset composition; the related information includes size information and/or position information.
In this way, identifying the monitoring target by the frame makes it more conspicuous and allows it to be marked continuously, which is convenient for monitoring personnel to view. Even if the monitoring target cannot be identified according to the preset identifier in subsequent monitoring images, it can still be determined according to the frame.
For example, the monitoring target is determined to be target A according to the preset identifier, and target A is identified in the monitoring image by the frame. If a passer-by B appears in a subsequent monitoring image, target A is still identified by the frame while passer-by B is not, so the mobile platform 200 will neither mistakenly follow passer-by B nor lose the target, and always follows target A identified by the frame.
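The frame-continuity behavior in this example can be sketched with a simple overlap test: among newly detected boxes, follow only the one that best overlaps the frame already locked on target A. This IoU-based association is an assumed, minimal illustration; the patent does not specify the matching algorithm.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))   # overlap width
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def follow_frame(tracked_box, detections, min_iou=0.3):
    """Pick the detection that best overlaps the tracked frame, so a
    passer-by whose box does not overlap the frame is never followed."""
    best = max(detections, key=lambda d: iou(tracked_box, d), default=None)
    return best if best is not None and iou(tracked_box, best) >= min_iou else None
```

A detection near the tracked frame is accepted; a distant passer-by, or an empty detection list, yields no match rather than a mistaken follow.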
Moreover, the zoom camera on the mobile platform 200 is controlled to perform zoom shooting according to the size information of the frame where the monitoring target is located, so that the size of the monitoring target in the monitoring image captured by the zoom camera is larger than the preset size and the image is clearer, which facilitates recognition by the remote device 100 and viewing by monitoring personnel.
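One possible zoom rule follows from the size information: if the frame's area scales roughly with the square of the zoom factor (an assumption made here purely for illustration; the actual control law is not specified in the disclosure), the required zoom can be estimated directly.

```python
def zoom_command(box_area, preset_area, current_zoom, max_zoom=30.0):
    """Suggest a new zoom factor so the target frame's area reaches the
    preset area, clamped to the camera's assumed zoom range [1, max_zoom]."""
    if box_area <= 0:
        return current_zoom          # nothing to zoom on
    factor = (preset_area / box_area) ** 0.5   # linear-size ratio
    return min(max_zoom, max(1.0, current_zoom * factor))
```

For example, when the frame covers a quarter of the preset area, the linear size must double, so a current zoom of 2.0 becomes 4.0; an oversized frame is zoomed back out symmetrically.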
In addition, the mobile platform 200 is controlled to track the monitoring target with a preset composition according to the position information of the frame where the monitoring target is located, so that the monitoring target is at the preset position in the monitoring image captured by the zoom camera and the monitoring image conforms to the preset composition. This keeps the position of the frame where the monitoring target is located stable in the monitoring image, which is convenient for monitoring personnel to view.
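Preset-composition tracking can be driven by the normalized offset between the frame's position and the preset position; a controller would then steer the platform or gimbal to null this offset. The helper below is a hypothetical sketch, with all names chosen for illustration.

```python
def position_offset(frame_center, preset_position, image_size):
    """Normalized (dx, dy) offset of the target frame's center from the
    preset position in the image; zero means the composition is met."""
    dx = (frame_center[0] - preset_position[0]) / image_size[0]
    dy = (frame_center[1] - preset_position[1]) / image_size[1]
    return dx, dy
```

A target centered at the preset position yields (0.0, 0.0) and needs no correction; a nonzero offset tells the controller which way, and proportionally how far, to move.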
In step S181, the monitoring target may be identified in the monitoring image by a dashed frame. Rendering the frame as a dashed line distinguishes it from real objects in the monitoring image, making the frame and the monitoring target more conspicuous and easier for monitoring personnel to view.
In step S181, the monitoring target may also be identified in the monitoring image by a frame of a preset color. Rendering the frame in a distinct color likewise distinguishes it from real objects in the monitoring image, making the frame and the monitoring target more conspicuous and easier for monitoring personnel to view.
Note that "at least part of the monitoring target is located in the frame" may mean that part of the monitoring target is located in the frame, that is, the frame does not completely frame the monitoring target, and part of the monitoring target is located outside the frame; it can also mean that all the monitored targets are located in the frame, i.e. the frame completely frames the monitored targets.
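The partial/full containment distinction can be made explicit with a small check over (x, y, width, height) boxes — an illustrative helper, not part of the disclosure:

```python
def containment(target, frame):
    """Classify how a frame covers a target box: 'full' if the frame
    completely frames the target, 'partial' if only part of the target
    lies inside the frame, 'none' if they do not overlap at all."""
    tx1, ty1 = target[0], target[1]
    tx2, ty2 = target[0] + target[2], target[1] + target[3]
    fx1, fy1 = frame[0], frame[1]
    fx2, fy2 = frame[0] + frame[2], frame[1] + frame[3]
    ix = max(0, min(tx2, fx2) - max(tx1, fx1))   # overlap width
    iy = max(0, min(ty2, fy2) - max(ty1, fy1))   # overlap height
    if ix * iy == 0:
        return "none"
    if fx1 <= tx1 and fy1 <= ty1 and fx2 >= tx2 and fy2 >= ty2:
        return "full"
    return "partial"
```

Both "full" and "partial" satisfy the condition that at least part of the monitoring target is located in the frame.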
It is understood that steps S181-S182 may be performed before step S19, after step S19, or simultaneously with step S19, and are not limited herein.
In some embodiments, the first processor 101 is configured to select a monitoring target in the monitoring image according to a selection instruction.
Referring to fig. 17, correspondingly, the monitoring method for the remote device 100 further includes:
step S163: and selecting a monitoring target in the monitoring image according to the selection instruction.
Therefore, the monitoring target is determined based on the selection instruction, and the user can manually select the monitoring target, so that the improvement of user experience is facilitated. Specifically, the selection instruction includes, but is not limited to, a click instruction, a touch instruction, a key instruction, and a voice instruction.
In one example, the selection instruction is a click instruction, and the user may click a target desired to be selected on the remote device 100 displaying the monitoring image, and the remote device 100 may use the clicked target as the monitoring target.
In another example, the selection instruction is a touch instruction, and the user may touch a desired target on the remote device 100 displaying the monitoring image, and the remote device 100 may take the touched target as the monitoring target.
In yet another example, the selection instruction is a key instruction. The user may press a key on the remote device 100 displaying the monitoring image to move the selection frame to the target desired to be selected, and when the user presses the confirmation key, the remote device 100 takes the target in the selection frame as the monitoring target.
In yet another example, the selection instruction is a voice instruction. The user may speak to the remote device 100 displaying the monitoring image: "Take the person wearing red clothes as the monitoring target." The remote device 100 then takes the target wearing red clothes as the monitoring target according to the input voice.
The specific form of the selection instruction and the specific manner of inputting the selection instruction are not limited herein.
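A minimal dispatcher for the four selection-instruction types described above can look as follows. The instruction and target field names ('type', 'point', 'index', 'phrase', 'label') are hypothetical, chosen only for this sketch.

```python
def select_target(instruction, targets):
    """Resolve a selection instruction to a target id.
    targets: list of {'id': str, 'box': (x, y, w, h), 'label': str}."""
    kind = instruction["type"]
    if kind in ("click", "touch"):            # point-based selection
        px, py = instruction["point"]
        for t in targets:
            x, y, w, h = t["box"]
            if x <= px <= x + w and y <= py <= y + h:
                return t["id"]
    elif kind == "key":                       # jump selection frame by index
        if targets:
            return targets[instruction["index"] % len(targets)]["id"]
    elif kind == "voice":                     # match spoken phrase to label
        for t in targets:
            if instruction["phrase"] in t["label"]:
                return t["id"]
    return None                               # no target selected
```

A click inside a target's box, a key press cycling the selection frame, or a spoken phrase matching a target's label each resolve to one monitoring target; anything else selects nothing.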
Further, after the monitoring target is determined according to the selection instruction, the preset identifier and the related features corresponding to the monitoring target may be stored to establish an association between the preset identifier and those features, so that the mobile platform 200 can track the monitoring target more accurately.
It is understood that the selection instruction may be a preliminary selection instruction or a modification instruction. In other words, when the monitoring target in the monitoring image has not yet been determined, the remote device 100 may determine the monitoring target according to the selection instruction; when the monitoring target in the monitoring image has already been determined, the remote device 100 may modify the monitoring target according to the selection instruction. This supports both initial selection and later modification of the monitoring target, so that the monitoring target meets the user's requirements.
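The preliminary-selection vs. modification behavior reduces to a tiny piece of state, sketched here under assumed names:

```python
class TargetSelection:
    """Holds the current monitoring target; apply() reports whether a
    selection instruction acted as a preliminary selection (no target
    set yet) or as a modification (a target was already set)."""

    def __init__(self):
        self.target = None

    def apply(self, new_target):
        action = "preliminary" if self.target is None else "modification"
        self.target = new_target
        return action
```

The first instruction establishes the target; any later instruction replaces it, so the user can correct a wrong selection at any time.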
In summary, the monitoring system 1000, the monitoring method, the mobile platform 200, and the remote device 100 according to the embodiments of the present application can lock a monitoring target and, by exploiting the mobility of the mobile platform 200, track and monitor it online: the mobile platform 200 moves along with the target, which extends the monitoring time and enables continuous, long-duration monitoring of the target. Moreover, the remote device 100 recognizes the monitoring target in the monitoring image according to the preset identifier and the mobile platform 200 tracks it, so that a specific target is recognized and tracked to meet the monitoring requirement without manual intervention; once recognition succeeds, the monitoring task proceeds automatically, improving the efficiency of executing the monitoring task.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be performed by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for performing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried out in the method of implementing the above embodiments may be implemented by hardware associated with instructions of a program, which may be stored in a computer-readable storage medium, and which, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (70)

1. A monitoring system, comprising:
a remote device comprising a first processor;
a mobile platform in wireless communication with the remote device, the mobile platform comprising a second processor;
the first processor is used for acquiring a monitoring image, identifying a monitoring target in the monitoring image according to a preset identifier and sending the monitoring target to the mobile platform;
the second processor is used for controlling the mobile platform to track the monitoring target.
2. The monitoring system of claim 1, wherein the first processor is configured to acquire an initial image; and to recognize an identifier in the initial image as the preset identifier.
3. The monitoring system according to claim 1, wherein the first processor is configured to identify a target identifier from the monitoring image according to the preset identifier; and to determine the target corresponding to the target identifier as the monitoring target.
4. The monitoring system of claim 1, wherein the second processor is configured to control the mobile platform to approach the monitoring target such that the distance between the monitoring target and the mobile platform is smaller than a preset distance if the distance between the monitoring target and the mobile platform is greater than or equal to the preset distance.
5. The monitoring system of claim 1, wherein the first processor is configured to generate obstacle information from the monitoring image and send the obstacle information to the mobile platform;
and the second processor is used for controlling the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
6. The monitoring system of claim 1, wherein the second processor is configured to obtain the monitoring image and generate obstacle information from the monitoring image; and to control the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
7. The monitoring system of claim 6, wherein the mobile platform includes a camera including at least one of a binocular camera, a structured light based camera, and a time of flight based camera, the monitored images being captured by the camera.
8. The monitoring system of claim 1, wherein the first processor is configured to obtain a geographic location of the mobile platform; to obtain a map corresponding to the geographic location according to the geographic location; and to generate obstacle information according to the map and send the obstacle information to the mobile platform;
and the second processor is used for controlling the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
9. The monitoring system of claim 1, wherein the second processor is configured to collect a geographic location of the mobile platform; to obtain a map corresponding to the geographic location according to the geographic location; to generate obstacle information according to the map; and to control the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
10. The monitoring system of claim 1, wherein the monitoring image comprises a first monitoring image and a second monitoring image;
the first processor is configured to identify characteristics of the monitoring target from the first monitoring image, wherein the characteristics of the monitoring target are different from the preset identifier; to form a correlation between the characteristics of the monitoring target and the preset identifier; and to identify the monitoring target from the second monitoring image according to the characteristics of the monitoring target and the correlation.
11. The monitoring system of claim 10, wherein the predetermined identifier comprises at least one of a human face, a number, and a word, and the characteristic of the monitoring target comprises at least one of a color, a joint, and a physical position.
12. The monitoring system according to claim 1, wherein the mobile platform comprises a zoom camera, and the second processor is configured to control the zoom camera to zoom when the distance between the monitoring target and the mobile platform is greater than a preset distance, so that the size of the monitoring target in the monitoring image acquired by the mobile platform is greater than a preset size; and to send the monitoring image to the remote device.
13. The monitoring system according to claim 12, wherein the size of the monitoring target is determined based on the size of a frame in the monitoring image in which the monitoring target is located;
the relevant information of the frame where the monitoring target is located in the monitoring image is obtained by the first processor identifying the monitoring target in the monitoring image, and the relevant information comprises size information.
14. The monitoring system of claim 1, wherein the first processor is configured to select the monitoring target in the monitoring image according to a selection instruction.
15. The monitoring system of claim 1, wherein the mobile platform comprises at least one of a drone, a robot, and a vehicle.
16. The monitoring system of claim 1, wherein the remote device comprises at least one of a server, a cell phone, a tablet, a remote control, and a wearable smart device.
17. The monitoring system of claim 1, wherein the remote device establishes a cellular network connection with the mobile platform.
18. The monitoring system of claim 17, wherein the remote device establishes a cellular network connection with the mobile platform via a 5th generation mobile communication technology (5G) base station.
19. A monitoring method for a mobile platform, the monitoring method comprising:
acquiring a monitoring target sent by a remote device, wherein the remote device is in wireless communication with the mobile platform, and the monitoring target is identified by the remote device from a monitoring image according to a preset identifier;
and controlling the mobile platform to track the monitoring target.
20. The monitoring method of claim 19, wherein the controlling the mobile platform to track the monitoring target comprises:
and under the condition that the distance between the monitoring target and the mobile platform is greater than or equal to a preset distance, controlling the mobile platform to approach the monitoring target so that the distance between the monitoring target and the mobile platform is smaller than the preset distance.
21. The monitoring method of claim 19, further comprising:
acquiring obstacle information generated by the remote device according to the monitoring image;
the controlling the mobile platform to track the monitoring target includes:
and controlling the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
22. The monitoring method of claim 19, further comprising:
acquiring the monitoring image and generating obstacle information according to the monitoring image;
the controlling the mobile platform to track the monitoring target includes:
and controlling the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
23. The method of monitoring of claim 22, wherein the mobile platform comprises a camera comprising at least one of a binocular camera, a structured light based camera, and a time of flight based camera, the monitored images being captured by the camera.
24. The monitoring method of claim 19, further comprising:
acquiring the geographic location of the mobile platform and sending the geographic location to the remote device, so that the remote device obtains a map corresponding to the geographic location and generates obstacle information according to the map;
acquiring the obstacle information sent by the remote device;
the controlling the mobile platform to track the monitoring target includes:
and controlling the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
25. The monitoring method of claim 19, further comprising:
collecting a geographic location of the mobile platform;
acquiring a map corresponding to the geographic position according to the geographic position;
generating obstacle information according to the map;
the controlling the mobile platform to track the monitoring target includes:
and controlling the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
26. The monitoring method of claim 19, wherein the mobile platform comprises a zoom camera, the monitoring method further comprising:
under the condition that the distance between the monitoring target and the mobile platform is greater than a preset distance, the mobile platform controls the zooming camera to zoom so that the size of the monitoring target in the monitoring image acquired by the mobile platform is greater than a preset size;
and sending the monitoring image to the remote equipment.
27. The monitoring method according to claim 26, wherein the size of the monitoring target is determined based on the size of a frame in the monitoring image in which the monitoring target is located;
the relevant information of the frame where the monitoring target is located in the monitoring image is obtained by the remote device identifying the monitoring target in the monitoring image, and the relevant information comprises size information.
28. The monitoring method of claim 19, wherein the mobile platform comprises at least one of a drone, a robot, and a vehicle.
29. The monitoring method of claim 19, wherein the remote device comprises at least one of a server, a cell phone, a tablet, a remote control, and a wearable smart device.
30. The monitoring method of claim 19, wherein the remote device establishes a cellular network connection with the mobile platform.
31. The monitoring method of claim 30, wherein the remote device establishes a cellular network connection with the mobile platform via a 5th generation mobile communication technology (5G) base station.
32. A mobile platform, comprising a processor configured to obtain a monitoring target transmitted by a remote device, wherein the remote device is in wireless communication with the mobile platform; and to control the mobile platform to track the monitoring target, wherein the monitoring target is identified by the remote device from a monitoring image according to a preset identifier.
33. The mobile platform of claim 32, wherein the processor is configured to control the mobile platform to approach the monitoring target such that the distance between the monitoring target and the mobile platform is less than a preset distance if the distance between the monitoring target and the mobile platform is greater than or equal to the preset distance.
34. The mobile platform of claim 32, wherein the processor is configured to obtain obstacle information generated by the remote device from the monitoring image; and to control the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
35. The mobile platform of claim 32, wherein the processor is configured to obtain the monitoring image and generate obstacle information from the monitoring image; and to control the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
36. The mobile platform of claim 35, wherein the mobile platform comprises a camera comprising at least one of a binocular camera, a structured light based camera, and a time of flight based camera, the surveillance images being captured by the camera.
37. The mobile platform of claim 32, wherein the processor is configured to collect a geographic location of the mobile platform and send the geographic location to the remote device, so that the remote device obtains a map corresponding to the geographic location and generates obstacle information according to the map; to acquire the obstacle information sent by the remote device; and to control the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
38. The mobile platform of claim 32, wherein the processor is configured to collect a geographic location of the mobile platform; to obtain a map corresponding to the geographic location according to the geographic location; to generate obstacle information according to the map; and to control the mobile platform to track the monitoring target according to the obstacle information and the position of the monitoring target.
39. The mobile platform of claim 32, wherein the mobile platform comprises a zoom camera, and the processor is configured to control the zoom camera to zoom when the distance between the monitoring target and the mobile platform is greater than a preset distance, so that the size of the monitoring target in the monitoring image acquired by the mobile platform is greater than a preset size; and to send the monitoring image to the remote device.
40. The mobile platform of claim 39, wherein the size of the monitoring object is determined based on the size of a frame in the monitoring image in which the monitoring object is located;
the relevant information of the frame where the monitoring target is located in the monitoring image is obtained by the remote device identifying the monitoring target in the monitoring image, and the relevant information comprises size information.
41. The mobile platform of claim 32, wherein the mobile platform comprises at least one of a drone, a robot, and a vehicle.
42. The mobile platform of claim 32, wherein the remote device comprises at least one of a server, a cell phone, a tablet, a remote control, and a wearable smart device.
43. The mobile platform of claim 32, wherein the remote device establishes a cellular network connection with the mobile platform.
44. The mobile platform of claim 43, wherein the remote device establishes a cellular network connection with the mobile platform via a 5th generation mobile communication technology (5G) base station.
45. A monitoring method for a remote device, the monitoring method comprising:
acquiring a monitoring image;
identifying a monitoring target in the monitoring image according to a preset identifier;
and sending the monitoring target to the mobile platform so that the mobile platform tracks the monitoring target.
46. The monitoring method of claim 45, further comprising:
acquiring an initial image;
and identifying the mark in the initial image as the preset mark.
47. The monitoring method according to claim 45, wherein the identifying a monitoring target in the monitoring image according to a preset identifier comprises:
identifying a target identifier from the monitoring image according to the preset identifier;
and determining the target corresponding to the target identification as the monitoring target.
48. The monitoring method of claim 45, further comprising:
and generating obstacle information according to the monitoring image, and sending the obstacle information to the mobile platform so that the mobile platform tracks the monitoring target according to the obstacle information and the position of the monitoring target.
49. The monitoring method of claim 45, further comprising:
acquiring the geographic position of the mobile platform;
acquiring a map corresponding to the geographic position according to the geographic position;
and generating obstacle information according to the map, and sending the obstacle information to the mobile platform so that the mobile platform tracks the monitoring target according to the obstacle information and the position of the monitoring target.
50. The monitoring method of claim 45, wherein the monitored image comprises a first monitored image and a second monitored image, the monitoring method further comprising:
identifying the characteristics of the monitoring target from the first monitoring image, wherein the characteristics of the monitoring target are different from the preset identification;
forming the relevance between the characteristics of the monitoring target and the preset identification;
the identifying of the monitoring target in the monitoring image according to the preset identification comprises:
and identifying the monitoring target from the second monitoring image according to the characteristics and the relevance of the monitoring target.
51. The monitoring method of claim 50, wherein the preset identification comprises at least one of a human face, a number and a character, and the characteristic of the monitoring target comprises at least one of a color, a joint and a physical position.
52. The monitoring method of claim 45, further comprising:
and selecting the monitoring target in the monitoring image according to the selection instruction.
53. The monitoring method of claim 45, further comprising:
identifying the monitoring target in the monitoring image through a frame, wherein at least part of the monitoring target is positioned in the frame;
sending the relevant information of the frame where the monitoring target is located in the monitoring image to the mobile platform, so that the mobile platform controls a zoom camera on the mobile platform to carry out zoom shooting according to the relevant information and/or controls the mobile platform to track a preset composition of the monitoring target;
wherein the related information comprises size information and/or position information.
54. The monitoring method of claim 45, wherein the mobile platform comprises at least one of a drone, a robot, and a vehicle.
55. The monitoring method of claim 45, wherein the remote device comprises at least one of a server, a cell phone, a tablet, a remote control, and a wearable smart device.
56. The monitoring method of claim 45, wherein the remote device establishes a cellular network connection with the mobile platform.
57. The monitoring method of claim 56, wherein the remote device establishes a cellular network connection with the mobile platform via a 5th generation mobile communication technology (5G) base station.
58. A remote device, comprising a processor configured to acquire a monitoring image; to identify a monitoring target in the monitoring image according to a preset identifier; and to send the monitoring target to a mobile platform so that the mobile platform tracks the monitoring target.
59. The remote device of claim 58, wherein the processor is configured to acquire an initial image; and to recognize an identifier in the initial image as the preset identifier.
60. The remote device according to claim 58, wherein the processor is configured to identify a target identifier from the monitoring image according to the preset identifier; and to determine the target corresponding to the target identifier as the monitoring target.
61. The remote device according to claim 58, wherein the processor is configured to generate obstacle information from the monitoring image and send the obstacle information to the mobile platform, so that the mobile platform tracks the monitoring target according to the obstacle information and the position of the monitoring target.
62. The remote device of claim 58, wherein the processor is configured to: obtain a geographic location of the mobile platform; obtain a map corresponding to the geographic location; and generate obstacle information according to the map and send the obstacle information to the mobile platform, so that the mobile platform tracks the monitoring target according to the obstacle information and the position of the monitoring target.
63. The remote device of claim 58, wherein the monitor image comprises a first monitor image and a second monitor image, and wherein the processor is configured to identify a characteristic of the monitor object from the first monitor image, the characteristic of the monitor object being different from the preset identifier; the correlation between the characteristics of the monitoring target and the preset identification is formed; and the second monitoring image is used for identifying the monitoring target according to the characteristics of the monitoring target and the relevance.
64. The remote device of claim 63, wherein the predetermined identifier comprises at least one of a human face, a number, and a word, and wherein the characteristic of the monitoring target comprises at least one of a color, a joint, and a physical location.
65. The remote device of claim 58, wherein the processor is configured to select the monitoring target in the monitoring image according to a selection instruction.
66. The remote device of claim 58, wherein the processor is configured to: mark the monitoring target in the monitoring image with a frame, at least a portion of the monitoring target being located in the frame; and send information about the frame in which the monitoring target is located in the monitoring image to the mobile platform, so that the mobile platform controls a zoom camera on the mobile platform to perform zoom shooting according to the information and/or tracks the monitoring target with a preset composition; wherein the information comprises size information and/or position information.
67. The remote device of claim 58, wherein the mobile platform comprises at least one of a drone, a robot, and a vehicle.
68. The remote device of claim 58, wherein the remote device comprises at least one of a server, a cell phone, a tablet, a remote control, and a wearable smart device.
69. The remote device of claim 58, wherein the remote device establishes a cellular network connection with the mobile platform.
70. The remote device of claim 69, wherein the remote device establishes the cellular network connection with the mobile platform via a 5th-generation mobile communication technology (5G) base station.
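The claims above describe a pipeline in which the remote device identifies the monitoring target by a preset identifier, marks it with a frame, and sends the frame's size and position to the mobile platform, which then drives zoom shooting and preset-composition tracking. The following is a minimal Python sketch of that flow; the names (`Detection`, `find_monitoring_target`, `frame_info`, `zoom_command`) and the 0.3 composition fraction are illustrative assumptions, not anything specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    identifier: str  # label decoded from the image, e.g. a number worn by a person
    box: tuple       # (x, y, w, h) of the frame, in pixels

def find_monitoring_target(detections, preset_identifier):
    """Return the detection whose identifier matches the preset identifier, if any."""
    for det in detections:
        if det.identifier == preset_identifier:
            return det
    return None

def frame_info(det, image_size):
    """Size and position of the target frame, normalised to the image dimensions."""
    img_w, img_h = image_size
    x, y, w, h = det.box
    return {
        "size": (w / img_w, h / img_h),
        "center": ((x + w / 2) / img_w, (y + h / 2) / img_h),
    }

def zoom_command(info, target_fraction=0.3):
    """Zoom toward a preset composition: the target should fill ~target_fraction of the frame width."""
    width_frac = info["size"][0]
    if width_frac < target_fraction:
        return "zoom_in"
    if width_frac > target_fraction:
        return "zoom_out"
    return "hold"
```

In this sketch the remote device would run the first two functions and transmit the resulting `frame_info` dictionary over the cellular link; the mobile platform would apply `zoom_command` locally to its zoom camera.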
CN202080003992.7A 2020-03-09 2020-03-09 Monitoring system, monitoring method, mobile platform and remote equipment Pending CN112514374A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/078437 WO2021179125A1 (en) 2020-03-09 2020-03-09 Monitoring system, monitoring method, mobile platform and remote device

Publications (1)

Publication Number Publication Date
CN112514374A true CN112514374A (en) 2021-03-16

Family

ID=74952798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080003992.7A Pending CN112514374A (en) 2020-03-09 2020-03-09 Monitoring system, monitoring method, mobile platform and remote equipment

Country Status (2)

Country Link
CN (1) CN112514374A (en)
WO (1) WO2021179125A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113691774A (en) * 2021-08-02 2021-11-23 深圳市航顺芯片技术研发有限公司 Movable monitoring equipment, platform, system and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080144884A1 (en) * 2006-07-20 2008-06-19 Babak Habibi System and method of aerial surveillance
CN106096573A (en) * 2016-06-23 2016-11-09 乐视控股(北京)有限公司 Method for tracking target, device, system and long distance control system
CN109753076A (en) * 2017-11-03 2019-05-14 南京奇蛙智能科技有限公司 A kind of unmanned plane vision tracing implementing method
CN110162102A (en) * 2019-05-17 2019-08-23 广东技术师范大学 Unmanned plane automatic identification tracking and system based on cloud platform and machine vision

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN106325290A (en) * 2016-09-30 2017-01-11 北京奇虎科技有限公司 Monitoring system and device based on unmanned aerial vehicle
US10725472B2 (en) * 2017-08-10 2020-07-28 Beijing Airlango Technology Co., Ltd. Object tracking using depth information
CN110109482A (en) * 2019-06-14 2019-08-09 上海应用技术大学 Target Tracking System based on SSD neural network
CN110658852A (en) * 2019-09-16 2020-01-07 苏州米龙信息科技有限公司 Intelligent target searching method and system for unmanned aerial vehicle

Also Published As

Publication number Publication date
WO2021179125A1 (en) 2021-09-16

Similar Documents

Publication Publication Date Title
US10979625B2 (en) Method for editing image based on artificial intelligent and artificial device
US11479894B2 (en) Method and apparatus for compensating vibration of deep-learning based washing machine
US11082610B2 (en) Artificial device and method of collecting image of the same
KR20190104486A (en) Service Requester Identification Method Based on Behavior Direction Recognition
CN107301377B (en) Face and pedestrian sensing system based on depth camera
US11250869B2 (en) Audio zoom based on speaker detection using lip reading
US11467604B2 (en) Control device and method for a plurality of robots
CN110969644B (en) Personnel track tracking method, device and system
KR20190103079A (en) Vehicle external information output method using augmented reality and apparatus therefor
CN111988524A (en) Unmanned aerial vehicle and camera collaborative obstacle avoidance method, server and storage medium
KR20190098108A (en) Control system to control intelligent robot device
KR102327872B1 (en) Apparatus for Extracting GPS Coordinate of Image-based Tracking Object and Driving Method Thereof
CN109544870A (en) Alarm decision method and intelligent monitor system for intelligent monitor system
KR20190099170A (en) A method for providing notification according to the surrounding situation of an intelligent terminal and a device therefor
KR20190106946A (en) Artificial device and method for controlling the same
CN112508865A (en) Unmanned aerial vehicle inspection obstacle avoidance method and device, computer equipment and storage medium
KR20190107613A (en) User profiling method using captured image
CN105493086A (en) Monitoring installation and method for presenting a monitored area
CN113965733A (en) Binocular video monitoring method, system, computer equipment and storage medium
CN112514374A (en) Monitoring system, monitoring method, mobile platform and remote equipment
US20210125478A1 (en) Intelligent security device
JP7282186B2 (en) situational awareness surveillance
US11423881B2 (en) Method and apparatus for updating real-time voice recognition model using moving agent
KR20210061115A (en) Speech Recognition Method of Artificial Intelligence Robot Device
CN112911151B (en) Target following method, device, equipment, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210316