CN110633612A - Monitoring method and system for inspection robot

Monitoring method and system for inspection robot

Info

Publication number
CN110633612A
CN110633612A
Authority
CN
China
Prior art keywords
monitoring target
monitoring
determining
current frame
frame image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910463607.6A
Other languages
Chinese (zh)
Other versions
CN110633612B (en)
Inventor
杨宇
赵涛
赵旭亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Comservice Enrising Information Technology Co Ltd
Original Assignee
China Comservice Enrising Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Comservice Enrising Information Technology Co Ltd filed Critical China Comservice Enrising Information Technology Co Ltd
Priority to CN201910463607.6A
Publication of CN110633612A
Application granted
Publication of CN110633612B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/168 - Feature extraction; Face representation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The embodiment of the invention discloses a monitoring method for an inspection robot, which comprises the following steps: determining a monitoring target; extracting a current frame image of the monitoring target, and determining a reference area of the current frame image according to a preset rule; identifying the position of the monitoring target in the current frame image, and judging the positional relationship between the monitoring target and the reference area; and adjusting a terminal so that the monitoring target is located in the reference area. In this way, by reading the position of the monitoring target in the current image frame, the inspection robot is driven to move along the track, so that the monitoring target is monitored continuously after entering the plant, which improves the effectiveness of the inspection robot in use.

Description

Monitoring method and system for inspection robot
Technical Field
The invention relates to the field of rail type inspection robots, in particular to a monitoring method and a monitoring system of an inspection robot.
Background
With the growing demand for unattended operation, rail-mounted inspection robots are gradually replacing traditional manual monitoring, operation, and maintenance, and are increasingly applied in many industries. At present, when monitoring and maintaining a plant, a rail-mounted mobile robot generally uses face recognition on an operator waiting to enter the plant to judge whether the operator has the corresponding rights. When the operator has the corresponding rights, the operator is allowed to enter, and the rail-mounted inspection robot moves to the operating position covered by the operator's rights to monitor the operator. However, in use, not all of the operator's actions after entering the plant can be monitored, which affects the monitoring effect.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the present invention provide a monitoring method and a monitoring system for an inspection robot, which can track and shoot an operator entering a plant, thereby monitoring all actions of the operator after entering the plant and ensuring the monitoring effect of the rail-mounted inspection robot in use.
To achieve this purpose, the technical solution of the embodiments of the present invention is realized as follows:
the embodiment of the invention provides a monitoring method of an inspection robot, which comprises the following steps:
determining a monitoring target;
extracting a current frame image of the monitoring target, and determining a reference area of the current frame image according to a first preset rule;
determining the position of the monitoring target in the current frame image, and judging the position relation between the monitoring target in the current frame image and the reference area;
and adjusting the terminal to enable the monitoring target to be located in the reference area.
In an embodiment of the present invention, the method for determining a monitoring target includes:
receiving a user instruction;
acquiring a face image of a user at a predetermined photographing point;
preprocessing the face image, and extracting a feature vector of the preprocessed face image;
comparing the extracted feature vector with a feature vector prestored in a database;
and when the comparison results match, starting tracking shooting to determine the user as the monitoring target.
In the embodiment of the present invention, the reference region is located in the middle of the current frame image.
In this embodiment of the present invention, the method for determining the position of the monitoring target in the current frame image and determining the position relationship between the monitoring target in the current frame image and the reference region includes:
extracting the features of the current frame image, and determining the target features of the monitoring target in the current frame image;
and determining the position relation between the monitoring target and the reference area by judging the position relation between the target characteristic and the reference area.
In an embodiment of the present invention, the method for determining the position relationship between the monitoring target in the current frame image and the reference region includes:
determining a position judgment area of the current frame image according to a second preset rule, wherein the position judgment area is respectively positioned at the upper side, the lower side, the left side, the right side, the upper left side, the lower left side, the upper right side and the lower right side of the reference area; and when the monitoring target is positioned in the position judgment area, determining the position relation between the monitoring target and the reference area according to the position relation between the position judgment area and the reference area.
In this embodiment of the present invention, the method for adjusting the terminal to enable the monitoring target to be located in the reference area includes:
when the monitoring target is located in the reference area, the terminal keeps the shooting position of the current lens and the shooting angle of the lens;
and when the monitoring target is positioned outside the reference area, the terminal moves the shooting position of the lens and/or adjusts the shooting angle of the lens.
The embodiment of the invention also provides a monitoring system of the inspection robot, which comprises:
the determining module is used for determining a monitoring target;
the extraction module is used for extracting a current frame image of the monitoring target and determining a reference area of the current frame image according to a first preset rule;
the identification module is used for determining the position of the monitoring target in the current frame image and judging the position relation between the monitoring target in the current frame image and the reference area;
and the adjusting module is used for adjusting the terminal to enable the monitoring target to be located in the reference area.
In an embodiment of the present invention, the monitoring system further includes:
a receiving unit for receiving a user instruction;
an acquisition unit configured to acquire a face image of a user at a predetermined shooting point;
the processing unit is used for preprocessing the face image and extracting a feature vector of the preprocessed face image;
the comparison unit is used for comparing the extracted characteristic vector with a characteristic vector prestored in a database;
and the determining unit is used for starting tracking shooting and determining the user as the monitoring target when the comparison results match.
In an embodiment of the present invention, the monitoring system further includes:
the extraction unit is used for extracting the characteristics of the current frame image and determining the target characteristics of the monitoring target in the current frame image;
and the first judging unit is used for determining the position relation between the monitoring target and the reference area by judging the position relation between the target characteristic and the reference area.
In an embodiment of the present invention, the monitoring system further includes:
and the second judging unit is used for determining the position relation between the monitoring target and the reference area according to the position relation between the position judging area and the reference area when the monitoring target is positioned in the position judging area.
In an embodiment of the present invention, the monitoring system further includes:
the first adjusting unit is used for keeping the shooting position and the shooting angle of the current lens by the terminal when the monitoring target is positioned in the reference area;
and the second adjusting unit is used for moving the shooting position of the lens and/or adjusting the shooting angle of the lens by the terminal when the monitoring target is positioned outside the reference area.
The embodiment of the invention provides a monitoring method for an inspection robot, which comprises the following steps: determining a monitoring target; extracting a current frame image of the monitoring target, and determining a reference area of the current frame image according to a preset rule; identifying the position of the monitoring target in the current frame image, and judging the positional relationship between the monitoring target and the reference area; and adjusting a terminal so that the monitoring target is located in the reference area. The embodiment of the invention also provides a monitoring system for the inspection robot, which comprises: a determining module for determining a monitoring target; an extraction module for extracting the current frame image of the monitoring target and determining the reference area of the current frame image according to a preset rule; an identification module for identifying the position of the monitoring target in the current frame image and judging the positional relationship between the monitoring target and the reference area; and an adjusting module for adjusting the terminal so that the monitoring target is located in the reference area. In this way, by reading the position of the monitoring target in the current image frame, the inspection robot is driven to move along the track, so that the monitoring target is monitored continuously after entering the plant, which improves the effectiveness of the inspection robot in use.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of a track type inspection robot inspection detection method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a position relationship between a reference region and a shot image according to a first embodiment of the present invention;
fig. 3 is a schematic view of an implementation flow of the inspection detection method for the rail-type inspection robot according to the second embodiment of the present invention;
fig. 4 is a schematic view of an implementation flow of another track type inspection robot inspection detection method according to a second embodiment of the present invention;
fig. 5 is a schematic structural diagram of a monitoring system according to a third embodiment of the present invention;
fig. 6 is a flowchart of a method for inspecting a track type inspection robot according to a third embodiment of the present invention;
fig. 7 is a schematic structural diagram of a monitoring system according to a fourth embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Example one
In order to solve the technical problems described in the background art, an embodiment of the present invention provides an inspection method for an inspection robot, which is applied to an inspection and monitoring system device of a rail-mounted inspection robot. In practical applications, the system device for inspection and monitoring of the rail-mounted inspection robot may be a server with large data processing capacity, a computer client, or the like. Fig. 1 is a schematic implementation flow chart of the rail-mounted inspection robot inspection detection method according to an embodiment of the present invention; as shown in fig. 1, the method includes:
step S310: and determining a monitoring target.
Here, step S310 may include: step S311: receiving a user instruction; step S312: acquiring a face image of the user at a predetermined shooting point; step S313: preprocessing the face image, and extracting a feature vector of the preprocessed face image; step S314: comparing the extracted feature vector with a feature vector prestored in a database; and step S315: when the comparison results match, starting tracking shooting to determine the user as the monitoring target.
The user instruction includes, but is not limited to, an instruction indicating that a user is about to enter the plant; it can be input through a remote host or the plant's access control system.
During inspection and monitoring, the system device receives the user instruction, and the rail-mounted inspection robot, acting as the terminal, moves to the predetermined shooting point to acquire the user's face image. The predetermined shooting point is the point on the track best suited for capturing the facial features of a user at the plant door, and it can be set manually. When the inspection robot reaches the predetermined shooting point, the shooting terminal on the inspection robot can capture the user's face with its camera to obtain a clear image containing the user's face; alternatively, the shooting terminal can intercept a clear frame containing the user's face from the video.
The preprocessing of the face image includes operations such as gray-level correction and noise filtering of the captured image. The feature vectors extracted from the preprocessed face image include, but are not limited to, feature data such as the shapes of the facial organs and the distances between them.
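For illustration only, this preprocessing step can be sketched as follows in Python with OpenCV; histogram equalization and a Gaussian blur are stand-ins chosen for this sketch for the gray correction and noise filtering named above, not operators mandated by the embodiment.
```python
import cv2

def preprocess_face(image_bgr):
    """Gray-level correction and noise filtering of the captured image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Histogram equalization stands in for the gray correction step.
    corrected = cv2.equalizeHist(gray)
    # A Gaussian blur stands in for the noise filtering step.
    return cv2.GaussianBlur(corrected, (5, 5), 0)
```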
The extracted feature data (feature vectors) are then compared with the feature vectors prestored in the database. When they match, the rail-mounted inspection robot starts tracking shooting of the user and transmits the captured pictures to a remote host.
The remote host is an intelligent terminal with a visual interface, and includes but is not limited to a smart phone, a tablet computer, a notebook computer, a desktop computer and the like.
Step S320: and extracting the current frame image of the monitoring target, and determining the reference area of the current frame image according to a first preset rule.
When the monitoring target is shot, the current frame image of the monitoring target is captured through the lens of the rail-mounted inspection robot, and the image frame of the current monitoring target is acquired. The reference region of the current frame image is then determined according to the first preset rule, where the first preset rule is that a certain position on the shooting lens is selected during shooting and designated as the reference region; for example, if the upper left of the shooting lens is selected as the reference region, the reference region is located at the upper left of every extracted frame image. Specifically, as shown in fig. 2, the reference region 10 may be the middle position of the shooting lens.
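A minimal sketch of one such first preset rule, assuming the middle of the lens is selected as in fig. 2; the 1/3 scale factor and the function name are assumptions of the example.
```python
def centered_reference_region(frame_w, frame_h, scale=1.0 / 3.0):
    """Example first preset rule: a fixed rectangle in the middle of the
    lens, so the same region applies to every extracted frame image.
    Returns (x, y, w, h) in pixels."""
    w, h = int(frame_w * scale), int(frame_h * scale)
    return ((frame_w - w) // 2, (frame_h - h) // 2, w, h)
```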
Step S330: and determining the position of the monitoring target in the current frame image, and judging the position relation between the monitoring target in the current frame image and the reference region 10.
Here, feature extraction is performed on the current frame image through an image feature detection algorithm, features of the monitoring target in the image are determined, the features of the monitoring target are compared with the reference region 10, and the position relationship between the monitoring target and the reference region 10 is determined.
To judge the positional relationship between the monitoring target and the reference region 10, the reference region 10 may be set as a reference coordinate point: in the horizontal direction, the left side of the reference region 10 corresponds to negative X coordinates and the right side to positive X coordinates; in the vertical direction, the upper side corresponds to positive Y coordinates and the lower side to negative Y coordinates. The position of the monitoring target relative to the reference region 10 is then determined from the position on these coordinate axes of the monitoring target (or of one or more reference points within the monitoring target's image region, where a reference point may be the center point).
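The coordinate judgment described above might be sketched as follows; using the rectangle's edges and a single center reference point is an assumption of this example.
```python
def relative_position(target_center, ref_region):
    """Signed offset of the target's reference point from the reference
    region, with X positive to the right and Y positive upward as
    described above. (0, 0) means the point is inside the region."""
    cx, cy = target_center
    x, y, w, h = ref_region
    dx = 0
    if cx < x:
        dx = cx - x            # negative: target is to the left
    elif cx > x + w:
        dx = cx - (x + w)      # positive: target is to the right
    dy = 0
    if cy < y:                 # image rows grow downward, so smaller cy is higher
        dy = y - cy            # positive: target is above
    elif cy > y + h:
        dy = (y + h) - cy      # negative: target is below
    return dx, dy
```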
Step S340: and adjusting the terminal to enable the monitoring target to be located in the reference area 10. Here, the method of adjusting the terminal so that the monitoring target is located within the reference area 10 includes:
step S340 a: when the monitoring target is located in the reference area 10, the terminal keeps the shooting position of the current lens and the shooting angle of the lens;
step S340 b: and when the monitoring target is positioned outside the reference area 10, the terminal moves the shooting position of the lens and/or adjusts the shooting angle of the lens.
The lens of the terminal (the rail-mounted inspection robot) can rotate through 360°, so adjusting the shooting angle only requires rotating the lens, and adjusting the shooting position only requires moving the terminal along the rail.
Specifically, when part of the monitoring target is located outside the reference area 10, the shooting angle of the terminal's lens may be adjusted preferentially, and when the required angle adjustment exceeds a predetermined adjustment angle range, the shooting position of the lens is adjusted instead; when the monitoring target is completely outside the reference area 10, the shooting position of the terminal's lens is adjusted preferentially; and when the monitoring target is completely within the reference area 10, the terminal performs no adjustment.
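A sketch of this adjustment policy under the stated priorities; the 30° limit stands in for the predetermined adjustment angle range and is an assumption of the example.
```python
MAX_ANGLE_STEP_DEG = 30.0  # hypothetical predetermined adjustment angle range

def adjust_terminal(fraction_inside, required_angle_deg):
    """Adjustment policy of step S340.

    fraction_inside: share of the monitoring target inside the reference
    area 10 (1.0 completely inside, 0.0 completely outside).
    required_angle_deg: lens rotation needed to re-center the target.
    """
    if fraction_inside >= 1.0:
        return "hold position and angle"      # step S340a
    if fraction_inside > 0.0:
        # Partly outside: prefer rotating the lens unless the needed
        # rotation exceeds the predetermined adjustment angle range.
        if abs(required_angle_deg) <= MAX_ANGLE_STEP_DEG:
            return "rotate lens"
        return "move along rail"
    return "move along rail"                  # completely outside
```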
In the monitoring method of the inspection robot provided by the embodiment of the invention, a monitoring target is determined; extracting a current frame image of the monitoring target, and determining a reference area 10 of the current frame image according to a first preset rule; determining the position of the monitoring target in the current frame image, and judging the position relationship between the monitoring target in the current frame image and the reference region 10; adjusting the terminal to enable the monitoring target to be located in the reference area 10; therefore, people entering the plant can be tracked and shot, and workers at the remote host can monitor the operators entering the plant in real time.
Example two
In order to further improve the accuracy of determining the position of the monitoring target in the current frame image, and to make it more convenient for the system to judge the positional relationship between the monitoring target in the current frame image and the reference region, an embodiment of the present invention provides an inspection method for an inspection robot, applied to a system device for inspection and monitoring of a rail-mounted inspection robot. In practical applications, the system device may be a server with large data processing capacity, a computer client, or the like. Fig. 3 is a schematic implementation flow chart of the inspection detection method for the rail-mounted inspection robot according to the second embodiment of the present invention; as shown in fig. 3, the method includes:
step S331 a: and extracting the features of the current frame image, and determining the target features of the monitoring target in the current frame image.
Here, feature extraction is performed on the current frame image through an image feature detection algorithm, and a target feature of the monitoring target in the image is determined, where there may be one or more target features of the monitoring target. The target features of the monitoring target include, but are not limited to, color, shape, contour, and the like.
Step S331 b: and determining the position relation between the monitoring target and the reference area by judging the position relation between the target characteristic and the reference area.
Here, the position relationship between the target feature and the reference region is determined, that is, the position relationship between the monitoring target and the reference region is determined.
Further, fig. 4 is a schematic flow chart illustrating an implementation of another inspection method for a rail-type inspection robot according to a second embodiment of the present invention, and as shown in fig. 4, the method for determining a position relationship between the monitoring target in the current frame image and the reference region includes:
step S332 a: and determining a position judgment area of the current frame image according to a second preset rule, wherein the position judgment area is respectively positioned at the upper side, the lower side, the left side, the right side, the upper left side, the lower left side, the upper right side and the lower right side of the reference area.
Here, the second preset rule may be that a certain position on the photographing lens is selected during photographing and determined as a position determination area, and the selected position is set around the reference area, that is, as shown in fig. 2, the picture is divided into 9 areas, wherein there are one reference area 10 and 8 position determination areas 11, which are respectively located at the upper side, the lower side, the left side, the right side, the upper left side, the lower left side, the upper right side and the lower right side of the reference area 10.
Step S332 b: when the monitoring target is located in the position judgment area 11, determining the position relationship between the monitoring target and the reference area 10 according to the position relationship between the position judgment area 11 and the reference area 10.
Specifically, when the monitoring target is located in the position judgment area 11 on the upper right side of the reference area 10, the system of the rail-mounted inspection robot determines from the target's position that the monitoring target is on the upper right of the reference area 10, and sends the terminal a signal to move toward the upper right. Here, the movement can be achieved either by directly rotating the terminal's lens toward the upper right, or by moving the terminal to the right along the track while rotating its lens upward, so that the monitoring target returns to the reference area 10.
When the monitoring target straddles the position judgment areas 11 on the upper left and left sides of the reference area 10, the system of the rail-mounted inspection robot determines from the target's position that the monitoring target is on the upper left of the reference area 10, and sends the terminal a signal to move toward the upper left. Again, this can be done either by directly rotating the terminal's lens toward the upper left, or by moving the terminal to the left along the track while rotating its lens upward; the system then checks whether the monitoring target is still within a position judgment area 11 and, if so, repeats the above operation.
More specifically, the area of the position determination region 11 is much smaller than the area of the reference region 10, and the area of the reference region 10 is larger than the area of the target feature.
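A sketch of this nine-region division, classifying the target's reference point into the reference region 10 or one of the eight position judgment regions 11; the returned names are illustrative.
```python
def judge_region(target_center, ref_region):
    """Example second preset rule: report which of the nine regions of
    fig. 2 contains the target's reference point."""
    cx, cy = target_center
    x, y, w, h = ref_region
    col = "left" if cx < x else ("right" if cx > x + w else "")
    row = "upper" if cy < y else ("lower" if cy > y + h else "")
    if not row and not col:
        return "reference region"
    # e.g. "upper right position judgment region", "left position judgment region"
    return (row + " " + col).strip() + " position judgment region"
```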
In the monitoring method of the inspection robot provided by the embodiment of the present invention, the target features of the monitoring target in the current frame image are determined by extracting features from the current frame image; the positional relationship between the monitoring target and the reference area 10 is determined by judging the positional relationship between the target features and the reference area 10; the position judgment areas 11 of the current frame image are determined according to a second preset rule, the position judgment areas 11 being located on the upper, lower, left, right, upper-left, lower-left, upper-right, and lower-right sides of the reference area 10; and when the monitoring target is located in a position judgment area 11, the positional relationship between the monitoring target and the reference area 10 is determined from the positional relationship between that position judgment area 11 and the reference area 10. In this way, the pictures obtained by tracking people entering the plant are processed so that interfering candidates can be excluded when the system makes its judgment, and the staff at the remote host obtain more accurate footage.
EXAMPLE III
The embodiment of the invention provides a monitoring method for an inspection robot. During plant inspection, the method tracks and shoots workers entering the plant, transmits the captured video stream over a fourth generation mobile communication (4G) network to a big data platform, recognizes the faces in the video stream using face recognition, real-time big data analysis, and machine learning techniques, then performs image processing and analysis to obtain the target person's features in the image, analyzes the image to judge the target person's position, and accordingly adjusts the shooting position and shooting angle of the inspection robot's lens, achieving the aim of remote tracking shooting. The monitoring method is applied to a rail-mounted inspection robot. Fig. 5 is a schematic diagram of the composition of the monitoring system; as shown in fig. 5, the monitoring system comprises: a rail-mounted inspection robot, a big data platform, and a host.
The rail-mounted inspection robot runs an operating system on which an application program (App) can be installed; the App is opened to exchange data with the platform, and the robot connects through Wireless Fidelity (WiFi), a mobile hotspot, or an operator's 4G+ network. During inspection, the App transmits the live video shot on site to the big data platform via the Real Time Streaming Protocol (RTSP); it can also receive control signal information returned by the platform, so that the rail-mounted inspection robot is controlled and adjusted.
The big data platform is responsible for parsing the RTSP video stream, analyzing the images, detecting the operator's position in the image, judging the positional relationship between the operator and the set reference area, and sending control signals to the rail-mounted inspection robot. The hardware uses a distributed server cluster, and the operating system is CentOS Linux 6.5. The big data platform further comprises: an image acquisition module, a face recognition module, an image coding and decoding module, a position judgment module, and a driving module.
And the image acquisition module is used for analyzing the video to acquire the current frame image with the monitoring target.
In an actual implementation, Apache Kafka can be used for image acquisition; a real-time computing model routes each acquired image containing the monitoring target to the data warehouse tool Hive or the Hadoop Distributed File System (HDFS) through Apache Storm or Apache Spark, and the current frame image of the monitoring target is then extracted.
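As a hedged sketch of the acquisition side only, assuming the kafka-python package, a hypothetical topic name, and JPEG-encoded frame payloads; the routing to Hive/HDFS via Storm or Spark described above is not shown.
```python
import cv2
import numpy as np
from kafka import KafkaConsumer  # assumes the kafka-python package

# "robot-frames" and the JPEG payload format are assumptions of this sketch.
consumer = KafkaConsumer("robot-frames", bootstrap_servers="localhost:9092")
for message in consumer:
    frame = cv2.imdecode(np.frombuffer(message.value, np.uint8),
                         cv2.IMREAD_COLOR)
    if frame is not None:
        pass  # hand the decoded frame to the detection pipeline
```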
And the face recognition module is used for analyzing the monitoring target in the current frame image and determining the authority of the monitoring target.
The image coding and decoding module is used for encoding and decoding the images. The streaming media server is an RTSP server, and FFmpeg is used as the video codec.
And the position judging module is used for judging the position relation of the monitoring target on the current frame image relative to the reference area.
Here, when the position is determined, a machine learning technique may be used to determine the positional relationship of the current frame image relative to the reference region, or a feature-point matching algorithm may be used: the current frame image is compared with the original frame image to determine whether the feature points are consistent; if they are not, the position of the inconsistent region relative to the original frame image is identified, and it is then judged whether that position lies in the reference region or in a position judgment region.
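The feature-point matching alternative could look like the following, using ORB features in OpenCV as one possible choice; the embodiment does not prescribe a particular detector.
```python
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def matched_displacements(original_gray, current_gray):
    """Match key points between the original frame and the current frame
    and return how far each matched point has moved."""
    kp1, des1 = orb.detectAndCompute(original_gray, None)
    kp2, des2 = orb.detectAndCompute(current_gray, None)
    if des1 is None or des2 is None:
        return []
    matches = matcher.match(des1, des2)
    # Large, consistent displacements mark the inconsistent region, whose
    # position is then tested against the reference region or the
    # position judgment regions.
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]
```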
The driving module is used for sending a driving control signal to the rail-mounted inspection robot according to the positional relationship between the monitoring target and the reference area, so that the lens of the rail-mounted inspection robot is adjusted.
And the host is used for receiving the alarm information sent by the data platform and displaying the alarm information on a visual interface of the client for an auditor to check. The host is an intelligent terminal with a visual interface, and comprises but is not limited to a smart phone, a tablet computer, a notebook computer, a desktop computer and the like.
The detection method provided by the embodiment of the invention mainly comprises the following: first, the rail-mounted inspection robot serves as the mobile terminal in a network environment in which the intelligent mobile terminal, a mobile hotspot or an operator's 4G+ network, and a back-end big data platform are interconnected; second, the rail-mounted inspection robot collects the inspection video, encodes the video stream, and transmits it via RTSP to the big data platform at the back end; third, the big data platform analyzes and processes the collected video stream in real time, invokes an image processing algorithm, and judges whether the person's position in the image lies within the reference area, so as to send out a corresponding adjustment signal. Fig. 6 is a schematic flow chart of the inspection method according to the third embodiment of the present invention; as shown in fig. 6, the inspection method includes:
step S201: and starting the track type inspection robot, entering inspection application, and starting to shoot inspection scene videos along the track.
Step S202: and receiving a user instruction, and moving the rail type inspection robot to a preset shooting point and acquiring a face image of the user.
Here, the remote user can send a signal to the rail-mounted inspection robot through a radio frequency device on the big data platform to acquire a face image of the person at the plant doorway, where the radio frequency device includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. The radio frequency device may communicate with a network and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access 2000 (CDMA2000), Wideband Code Division Multiple Access (WCDMA), Time Division Synchronous Code Division Multiple Access (TD-SCDMA), Frequency Division Duplex Long Term Evolution (FDD-LTE), Time Division Duplex Long Term Evolution (TDD-LTE), etc.
The user at the factory building door can also directly send instruction signals to the rail type inspection robot through a wired connection mode through an access control system or a key switch arranged at the factory building door.
Or, the user who needs to enter the factory building can send a signal to the rail type inspection robot through a wireless signal (for example, WiFi).
After receiving the signal, the rail-mounted inspection robot drives its motor to move to the predetermined shooting point on the track, where a position sensor can be provided so that the inspection robot can determine whether it has accurately reached the predetermined position.
When the inspection robot reaches the predetermined position, the user faces the shooting terminal on the inspection robot so that face recognition can be performed.
Step S203: the camera of the rail-mounted inspection robot shoots the face image of a user, the face image of the user is subjected to face recognition through an Adaboost learning algorithm, when the recognition result is matched, the user is allowed to enter a plant, and the rail-mounted inspection robot starts a tracking shooting mode.
Step S204: the track type inspection robot sends the video to the big data platform in a streaming media RTSP form in real time.
Step S205: and the big data platform analyzes the streaming media video through the image acquisition module and processes each frame of image in parallel.
Here, the Apache Kafka component may be used to extract an image in practical applications, and the extracted image may be used as a target image. Spark may be used as a computational model in performing the extraction of the target image.
Step S206: and the big data platform detects the target image through an image acquisition module and extracts image characteristics.
Here, the features of the image include, but are not limited to, the color, shape, and the like of the image. In a specific implementation, the code may use the Open Source Computer Vision Library (OpenCV) open source component, and the target recognition algorithm may be, but is not limited to, Local Feature Analysis (LFA), an eigenface method based on Principal Component Analysis (PCA), neural networks, or Fourier shape descriptors.
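As one illustration of the PCA-based option, the following sketch assumes the opencv-contrib-python package, which provides the cv2.face module; the function name and data layout are assumptions of the example.
```python
import cv2
import numpy as np

def train_and_predict(train_faces, labels, probe_face):
    """train_faces: equally sized grayscale face crops; labels: one
    integer identity per crop; probe_face: a crop to identify.
    Returns the predicted label and a distance score."""
    recognizer = cv2.face.EigenFaceRecognizer_create()
    recognizer.train(train_faces, np.array(labels))
    return recognizer.predict(probe_face)
```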
Step S207: and the big data platform analyzes the position relation between the characteristics of the image and the reference area through a position judgment module.
Step S208: and sending the analysis result to the rail-type inspection robot, and adjusting the rail-type inspection robot according to the analysis result to track and shoot the user.
The embodiments of the present invention provide a monitoring method and a monitoring system for an inspection robot, which fulfill the function of tracking and shooting a user while the rail-mounted inspection robot is monitoring.
Example four
An embodiment of the present invention provides a monitoring system for an inspection robot, and fig. 7 is a schematic structural diagram of a monitoring system provided in a fourth embodiment of the present invention, and as shown in fig. 7, the monitoring system includes: the device comprises a determining module 1, an extracting module 2, an identifying module 3 and an adjusting module 4, wherein:
the determining module 1 is used for determining a monitoring target; the extraction module 2 is used for extracting a current frame image of the monitoring target and determining a reference area of the current frame image according to a first preset rule; the identification module 3 is configured to determine a position of the monitoring target in the current frame image, and determine a position relationship between the monitoring target in the current frame image and the reference region; and the adjusting module 4 is used for adjusting the terminal to enable the monitoring target to be located in the reference area.
Further, the monitoring system further comprises: a receiving unit for receiving a user instruction; an acquisition unit configured to acquire a face image of a user at a predetermined shooting point; the processing unit is used for preprocessing the face image and extracting a feature vector of the preprocessed face image; the comparison unit is used for comparing the extracted characteristic vector with a characteristic vector prestored in a database; and the determining unit is used for starting tracking shooting and determining the monitoring target user when the comparison results are matched.
Further, the monitoring system further comprises: the extraction unit is used for extracting the characteristics of the current frame image and determining the target characteristics of the monitoring target in the current frame image; and the first judging unit is used for determining the position relation between the monitoring target and the reference area by judging the position relation between the target characteristic and the reference area.
Further, the monitoring system further comprises: and the second judging unit is used for determining the position relation between the monitoring target and the reference area according to the position relation between the position judging area and the reference area when the monitoring target is positioned in the position judging area.
Further, the monitoring system further comprises: the first adjusting unit is used for keeping the shooting position and the shooting angle of the current lens by the terminal when the monitoring target is positioned in the reference area; and the second adjusting unit is used for moving the shooting position of the lens and/or adjusting the shooting angle of the lens by the terminal when the monitoring target is positioned outside the reference area.
Here, it should be noted that the above description of the monitoring system embodiment is similar to the description of the method embodiment and has similar beneficial effects, so it is not repeated here. For technical details not disclosed in the monitoring system embodiment of the present invention, please refer to the description of the method embodiment of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It is noted that, in this document, the terms "comprises" and "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. Additionally, the various elements shown or discussed as being coupled or directly coupled or communicatively coupled to each other may be coupled or communicatively coupled indirectly, via some interface, device or element, whether electrically, mechanically, or otherwise.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may stand alone as a unit, or two or more units may be integrated into one unit. The integrated unit can be realized in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be accomplished by hardware related to program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as removable storage devices, read-only memories, and magnetic or optical disks.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present invention, and all such changes or substitutions are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (11)

1. A method for monitoring an inspection robot, the method comprising:
determining a monitoring target;
extracting a current frame image of the monitoring target, and determining a reference region (10) of the current frame image according to a first preset rule;
determining the position of the monitoring target in the current frame image, and judging the position relation between the monitoring target in the current frame image and the reference region (10);
adjusting the terminal so that the monitoring target is located within the reference area (10).
2. The monitoring method for the inspection robot according to claim 1, wherein the method for determining the monitoring target includes:
receiving a user instruction;
acquiring a face image of a user at a predetermined photographing point;
preprocessing the face image, and extracting a feature vector of the preprocessed face image;
comparing the extracted feature vector with a feature vector prestored in a database;
and when the comparison results match, starting tracking shooting to determine the user as the monitoring target.
3. The inspection robot monitoring method according to claim 1, wherein the reference area (10) is located at a middle position of the current frame image.
4. The monitoring method for the inspection robot according to claim 1, wherein the method for determining the position of the monitoring target in the current frame image and judging the position relationship between the monitoring target in the current frame image and the reference area (10) comprises the following steps:
extracting the features of the current frame image, and determining the target features of the monitoring target in the current frame image;
and determining the position relation between the monitoring target and the reference region (10) by judging the position relation between the target characteristic and the reference region (10).
5. The monitoring method for an inspection robot according to claim 1, wherein the method of determining the positional relationship between the monitoring target in the current frame image and the reference area (10) includes:
determining a position judgment region (11) of the current frame image according to a second preset rule, wherein the position judgment region (11) is respectively positioned at the upper side, the lower side, the left side, the right side, the upper left side, the lower left side, the upper right side and the lower right side of the reference region (10); and when the monitoring target is positioned in the position judgment area (11), determining the position relation between the monitoring target and the reference area (10) according to the position relation between the position judgment area (11) and the reference area (10).
6. The monitoring method for the inspection robot according to claim 1, wherein the method of adjusting the terminal so that the monitoring target is located in the reference area (10) comprises:
when the monitoring target is located in the reference area (10), the terminal keeps the shooting position of the current lens and the shooting angle of the lens;
and when the monitoring target is positioned outside the reference area (10), the terminal moves the shooting position of the lens and/or adjusts the shooting angle of the lens.
7. A monitoring system for an inspection robot, characterized in that the system comprises:
a determination module (1) for determining a monitoring target;
the extraction module (2) is used for extracting a current frame image of the monitoring target and determining a reference region (10) of the current frame image according to a first preset rule;
the identification module (3) is used for determining the position of the monitoring target in the current frame image and judging the position relation between the monitoring target in the current frame image and the reference region (10);
and the adjusting module (4) is used for adjusting the terminal so that the monitoring target is positioned in the reference area (10).
8. The monitoring system for an inspection robot according to claim 7, further comprising:
a receiving unit for receiving a user instruction;
an acquisition unit configured to acquire a face image of a user at a predetermined shooting point;
the processing unit is used for preprocessing the face image and extracting a feature vector of the preprocessed face image;
the comparison unit is used for comparing the extracted characteristic vector with a characteristic vector prestored in a database;
and the determining unit is used for starting tracking shooting and determining the user as the monitoring target when the comparison results match.
9. The monitoring system for an inspection robot according to claim 7, further comprising:
the extraction unit is used for extracting the characteristics of the current frame image and determining the target characteristics of the monitoring target in the current frame image;
and a first judging unit for determining the position relation between the monitoring target and the reference area (10) by judging the position relation between the target characteristic and the reference area (10).
10. The monitoring system for an inspection robot according to claim 7, further comprising:
and the second judging unit is used for determining the position relation between the monitoring target and the reference area (10) according to the position relation between the position judging area (11) and the reference area (10) when the monitoring target is positioned in the position judging area (11).
11. The monitoring system for an inspection robot according to claim 7, further comprising:
the first adjusting unit is used for keeping the shooting position of the current lens and the shooting angle of the lens when the monitoring target is positioned in the reference area (10);
and the second adjusting unit is used for moving the shooting position of the lens and/or adjusting the shooting angle of the lens when the monitoring target is positioned outside the reference area (10).
CN201910463607.6A 2019-11-20 2019-11-20 Monitoring method and system for inspection robot Active CN110633612B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910463607.6A CN110633612B (en) 2019-11-20 2019-11-20 Monitoring method and system for inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910463607.6A CN110633612B (en) 2019-11-20 2019-11-20 Monitoring method and system for inspection robot

Publications (2)

Publication Number Publication Date
CN110633612A true CN110633612A (en) 2019-12-31
CN110633612B CN110633612B (en) 2020-09-11

Family

ID=68968459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910463607.6A Active CN110633612B (en) 2019-11-20 2019-11-20 Monitoring method and system for inspection robot

Country Status (1)

Country Link
CN (1) CN110633612B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111324131A (en) * 2020-03-31 2020-06-23 中通服创立信息科技有限责任公司 Following monitoring method of track type inspection robot based on human body radar
CN111371990A (en) * 2020-03-12 2020-07-03 黄成驰 Computer lab environmental monitoring system and camera based on remove thing networking
CN111399517A (en) * 2020-03-31 2020-07-10 中通服创立信息科技有限责任公司 Track type inspection robot following monitoring method based on UWB positioning system
CN111432334A (en) * 2020-03-31 2020-07-17 中通服创立信息科技有限责任公司 Following monitoring method and system for rail-mounted inspection robot
CN111604888A (en) * 2020-05-29 2020-09-01 珠海格力电器股份有限公司 Inspection robot control method, inspection system, storage medium and electronic device
CN111314667B (en) * 2020-03-13 2021-04-16 江苏嘉和天盛信息科技有限公司 Security monitoring method
CN115760989A (en) * 2023-01-10 2023-03-07 西安华创马科智能控制系统有限公司 Hydraulic support robot track alignment method and device

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201732274U (en) * 2010-04-14 2011-02-02 哈尔滨大禹智能自动化有限公司 Track cruising wireless network monitoring system
CN103197679A (en) * 2013-03-22 2013-07-10 长沙理工大学 Accurate positioning method for orbit type routing-inspection robot
CN203276591U (en) * 2013-01-30 2013-11-06 深圳市布博卡科技有限公司 Indoor parking lot vehicle intelligent positioning system
CN203933796U (en) * 2014-06-17 2014-11-05 广州市幸福网络技术有限公司 A kind of self-timer
CN104267731A (en) * 2014-10-21 2015-01-07 山东鲁能智能技术有限公司 Indoor track type intelligent patrolling robot system based on combined track
CN105208349A (en) * 2015-10-10 2015-12-30 上海慧体网络科技有限公司 Method for controlling automatic following shot of cameras according to number of people on game site
CN105718862A (en) * 2016-01-15 2016-06-29 北京市博汇科技股份有限公司 Method, device and recording-broadcasting system for automatically tracking teacher via single camera
CN106331511A (en) * 2016-11-16 2017-01-11 广东欧珀移动通信有限公司 Method and device of tracking shoot by intelligent terminal
CN106506956A (en) * 2016-11-17 2017-03-15 歌尔股份有限公司 Based on the track up method of unmanned plane, track up apparatus and system
CN106791420A (en) * 2016-12-30 2017-05-31 深圳先进技术研究院 A kind of filming control method and device
CN106803880A (en) * 2017-02-14 2017-06-06 扬州奚仲科技有限公司 Orbit camera device people's is autonomous with clapping traveling control method
CN107231546A (en) * 2017-07-05 2017-10-03 哈尔滨理工大学 The household monitoring system of autonomous tracking moving characteristic based on cloud security service device
CN107330917A (en) * 2017-06-23 2017-11-07 歌尔股份有限公司 The track up method and tracking equipment of mobile target
CN108391058A (en) * 2018-05-17 2018-08-10 Oppo广东移动通信有限公司 Image capturing method, device, electronic device and storage medium
CN108445882A (en) * 2018-03-26 2018-08-24 北京智山机器人科技有限责任公司 Automatic guided vehicle with following function
CN109167920A (en) * 2018-10-12 2019-01-08 北京地平线机器人技术研发有限公司 Camera system and method
CN109547600A (en) * 2018-12-25 2019-03-29 罗轶 Thin cloud platform mobile phone
US20190186876A1 (en) * 2017-12-20 2019-06-20 Garmin Switzerland Gmbh Shot tracking and feedback system
CN109977770A (en) * 2019-02-21 2019-07-05 安克创新科技股份有限公司 A kind of auto-tracking shooting method, apparatus, system and storage medium
CN110009914A (en) * 2019-03-29 2019-07-12 杭州晶一智能科技有限公司 Suspended rail formula parking position manages intelligent robot
CN110177285A (en) * 2019-05-29 2019-08-27 王子君 Live broadcasting method, device, system and dollying head
CN110221626A (en) * 2019-06-06 2019-09-10 睿魔智能科技(深圳)有限公司 One kind is with clapping control method, device, computer equipment and storage medium
CN110414381A (en) * 2019-07-10 2019-11-05 武汉联析医疗技术有限公司 Tracing type face identification system

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201732274U (en) * 2010-04-14 2011-02-02 哈尔滨大禹智能自动化有限公司 Track cruising wireless network monitoring system
CN203276591U (en) * 2013-01-30 2013-11-06 深圳市布博卡科技有限公司 Indoor parking lot vehicle intelligent positioning system
CN103197679A * 2013-03-22 2013-07-10 长沙理工大学 Accurate positioning method for a rail-mounted inspection robot
CN203933796U * 2014-06-17 2014-11-05 广州市幸福网络技术有限公司 Self-timer device
CN104267731A * 2014-10-21 2015-01-07 山东鲁能智能技术有限公司 Indoor rail-mounted intelligent inspection robot system based on combined rails
CN105208349A * 2015-10-10 2015-12-30 上海慧体网络科技有限公司 Method for controlling automatic follow shooting by cameras according to the number of people at a game site
CN105718862A * 2016-01-15 2016-06-29 北京市博汇科技股份有限公司 Method, device and recording-and-broadcasting system for automatically tracking a teacher with a single camera
CN106331511A * 2016-11-16 2017-01-11 广东欧珀移动通信有限公司 Tracking shooting method and device for an intelligent terminal
CN106506956A * 2016-11-17 2017-03-15 歌尔股份有限公司 Tracking shooting method, apparatus and system based on an unmanned aerial vehicle
CN106791420A * 2016-12-30 2017-05-31 深圳先进技术研究院 Shooting control method and device
CN106803880A * 2017-02-14 2017-06-06 扬州奚仲科技有限公司 Autonomous follow-shooting travel control method for a rail-mounted camera robot
CN107330917A * 2017-06-23 2017-11-07 歌尔股份有限公司 Tracking shooting method and tracking device for a moving target
CN107231546A * 2017-07-05 2017-10-03 哈尔滨理工大学 Household monitoring system with autonomous tracking of moving features based on a cloud security server
US20190186876A1 (en) * 2017-12-20 2019-06-20 Garmin Switzerland Gmbh Shot tracking and feedback system
CN108445882A (en) * 2018-03-26 2018-08-24 北京智山机器人科技有限责任公司 Automatic guided vehicle with following function
CN108391058A (en) * 2018-05-17 2018-08-10 Oppo广东移动通信有限公司 Image capturing method, device, electronic device and storage medium
CN109167920A (en) * 2018-10-12 2019-01-08 北京地平线机器人技术研发有限公司 Camera system and method
CN109547600A * 2018-12-25 2019-03-29 罗轶 Mobile phone with a thin gimbal
CN109977770A * 2019-02-21 2019-07-05 安克创新科技股份有限公司 Automatic tracking shooting method, apparatus, system and storage medium
CN110009914A * 2019-03-29 2019-07-12 杭州晶一智能科技有限公司 Suspended-rail intelligent robot for parking space management
CN110177285A * 2019-05-29 2019-08-27 王子君 Live broadcasting method, device and system with a mobile camera
CN110221626A * 2019-06-06 2019-09-10 睿魔智能科技(深圳)有限公司 Follow-shooting control method, device, computer equipment and storage medium
CN110414381A * 2019-07-10 2019-11-05 武汉联析医疗技术有限公司 Tracking-type face recognition system

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111371990A * 2020-03-12 2020-07-03 黄成驰 Computer room environment monitoring system and camera based on the mobile Internet of Things
CN111314667B (en) * 2020-03-13 2021-04-16 江苏嘉和天盛信息科技有限公司 Security monitoring method
CN111324131A * 2020-03-31 2020-06-23 中通服创立信息科技有限责任公司 Following monitoring method for a rail-mounted inspection robot based on human-body radar
CN111399517A * 2020-03-31 2020-07-10 中通服创立信息科技有限责任公司 Following monitoring method for a rail-mounted inspection robot based on a UWB positioning system
CN111432334A (en) * 2020-03-31 2020-07-17 中通服创立信息科技有限责任公司 Following monitoring method and system for rail-mounted inspection robot
CN111432334B (en) * 2020-03-31 2022-05-27 中通服创立信息科技有限责任公司 Following monitoring method and system for rail-mounted inspection robot
CN111324131B * 2020-03-31 2023-09-01 中通服创立信息科技有限责任公司 Following monitoring method for a rail-mounted inspection robot based on human-body radar
CN111399517B * 2020-03-31 2023-12-12 中通服创立信息科技有限责任公司 Following monitoring method for a rail-mounted inspection robot based on a UWB positioning system
CN111604888A (en) * 2020-05-29 2020-09-01 珠海格力电器股份有限公司 Inspection robot control method, inspection system, storage medium and electronic device
CN111604888B (en) * 2020-05-29 2021-09-14 珠海格力电器股份有限公司 Inspection robot control method, inspection system, storage medium and electronic device
CN115760989A (en) * 2023-01-10 2023-03-07 西安华创马科智能控制系统有限公司 Hydraulic support robot track alignment method and device

Also Published As

Publication number Publication date
CN110633612B (en) 2020-09-11

Similar Documents

Publication Publication Date Title
CN110633612B (en) Monitoring method and system for inspection robot
CN112396658B (en) Indoor personnel positioning method and system based on video
CN110516522B (en) Inspection method and system
CN110826538A (en) Abnormal off-duty identification system for electric power business hall
CN105844659B (en) Tracking method and device for a moving part
CN109271872B (en) Device and method for judging on-off state and diagnosing fault of high-voltage isolating switch
CN109145708B (en) Pedestrian flow statistical method based on RGB and D information fusion
CN104506819A (en) Multi-camera real-time linkage mutual feedback tracing system and method
CN109905641B (en) Target monitoring method, device, equipment and system
CN111163285A (en) High-altitude falling object monitoring method and system and computer readable storage medium
CN112911156B (en) Patrol robot and security system based on computer vision
CN112235537A (en) Transformer substation field operation safety early warning method
CN104378539A (en) Scene-adaptive video structuring semantic extraction camera and method thereof
CN104902233A (en) Comprehensive security monitoring system
CN112232211A (en) Intelligent video monitoring system based on deep learning
CN111460985A (en) On-site worker track statistical method and system based on cross-camera human body matching
CN110569770A (en) Human body intrusion behavior recognition method and device, storage medium and electronic equipment
CN112183219A (en) Public safety video monitoring method and system based on face recognition
CN106803937B (en) Double-camera video monitoring method, system and monitoring device with text log
CN113044694A (en) Construction site elevator people counting system and method based on deep neural network
CN109799844B (en) Dynamic target tracking system and method for pan-tilt camera
CN111612815A (en) Infrared thermal imaging behavior intention analysis method and system
CN110992629A (en) Method for detecting static human body based on video monitoring
CN115620192A (en) Method and device for detecting wearing of safety rope in aerial work
CN111277789A (en) Video-based community security method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant