CN113743214B - Intelligent pan-tilt camera - Google Patents


Info

Publication number: CN113743214B
Application number: CN202110881270.8A
Authority: CN (China)
Prior art keywords: image, target object, camera, target, image information
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Original language: Chinese (zh)
Other versions: CN113743214A
Inventors: 刘文涛, 朱仲贤, 蔡科伟, 杜瑶, 李世民, 臧春华, 刘鑫, 汪伟伟, 徐蒙福
Current and original assignee: Super High Voltage Branch of State Grid Anhui Electric Power Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Filing history: application filed by the Super High Voltage Branch of State Grid Anhui Electric Power Co., Ltd.; priority to CN202110881270.8A; publication of CN113743214A; application granted; publication of CN113743214B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present application provide an intelligent pan-tilt camera belonging to the field of surveillance imaging. The intelligent pan-tilt camera comprises: a main camera for acquiring image information of a target area; a pan-tilt mechanism on which the main camera is mounted and which drives the main camera to rotate so as to acquire image information from multiple angles; and a controller, connected to the main camera and the pan-tilt mechanism, configured to determine whether a target object has entered the target area and, when it is determined that one has, to mark and track the target object. With this technical scheme, the main camera mounted on the pan-tilt mechanism acquires image information of the target area, the controller processes that image information to determine whether a target object has entered the target area, and, upon determining that one has, the controller marks and tracks the target object and issues warning information.

Description

Intelligent pan-tilt camera
Technical Field
The present application relates to the field of surveillance imaging, and in particular to an intelligent pan-tilt camera.
Background
A substation is a hub of the power transmission network, and substation equipment inspection plays an important role in ensuring its normal operation and safety. The traditional inspection mode is manual: it consumes a great deal of manpower and increases staff workload, and because substations largely consist of high-voltage, high-radiation equipment, manual inspection is also dangerous. Manual substation inspection therefore has many shortcomings.
The prior art generally offers inspection robots, fixed-position cameras, and unmanned aerial vehicles, but each has drawbacks. Setting up an inspection robot is tedious and requires extensive manual work; the placement of inspection points depends heavily on the subjective judgment of on-site personnel, and inconsistent placement standards make the quality of the monitoring points impossible to guarantee. Inspection robots also suffer from insufficient battery capacity and poor endurance. With fixed-position cameras, the camera is easily disturbed by external factors (such as human interference), shifting it from its original fixed position; this creates blind spots across the overlapping viewing angles of multiple cameras, so panoramic monitoring cannot be achieved. As for unmanned aerial vehicle inspection, drones are mainly used for high-voltage line inspection, and the substation environment is too complex for them to fly and perform tasks reliably.
Disclosure of Invention
The embodiments of the present application aim to provide an intelligent pan-tilt camera that can inspect multiple preset points, offers higher stability and longer endurance, and can monitor the behavior of on-site personnel in real time. The camera's on-board artificial-intelligence algorithm assists with image upload, and image information can be uploaded at a lower rate when no personnel or construction activity is present.
To achieve the above object, an embodiment of the present application provides an intelligent pan-tilt camera, comprising:
a main camera for acquiring image information of a target area;
a pan-tilt mechanism on which the main camera is mounted and which drives the main camera to rotate so as to acquire image information from multiple angles; and
a controller, connected to the main camera and the pan-tilt mechanism, configured to:
determine whether a target object has entered the target area; and
mark and track the target object when it is determined that a target object has entered the target area.
Optionally, the controller is configured to calculate an IOU value of the target object according to equation (1),
IOU=C/A, (1)
wherein IOU is the IOU value, C is the intersection area of the target object and the target area, and A is the area of the target object.
Optionally, the target comprises a pedestrian;
the controller is used for sending out warning information to prompt the pedestrian under the condition that the pedestrian enters the target area.
Optionally, the controller is further configured to:
acquiring images of all pedestrians on site;
judging whether each pedestrian is wearing a safety helmet and work clothing; and
sending out warning information to prompt any pedestrian who is found not wearing a safety helmet or work clothing.
Optionally, the controller is configured to:
controlling the cradle head mechanism to rotate the main camera to a plurality of preset angles;
acquiring a meter image shot by the main camera under each preset angle;
performing at least one of meter detection, equipment status monitoring, foreign object detection, and defect detection on the meter image; and
sending out warning information to prompt management personnel when any of the meter detection, equipment status monitoring, foreign object detection, or defect detection fails.
Optionally, the controller is configured to:
acquiring the image information through the main camera;
performing a conversion operation on the image information according to formulas (1) to (3),
R=Y+1.4075*V, (1)
G=Y-0.3455*U-0.7169*V, (2)
B=Y+1.779*U, (3)
wherein R, G and B are the color channels of an RGB-type image, and Y, U and V are the color channels of a YUV-type image;
calculating the mean value and variance of all pixels of the image information after the conversion operation;
traversing all pixels of the converted image information and subtracting the mean from each pixel;
dividing the result of the subtraction by the variance to obtain a first image;
and inputting the first image into a trained neural network to determine whether the target object exists in the image information.
Optionally, the inputting the first image into a trained neural network to determine whether the target object exists in the image information includes:
inputting the first image into an NPU operator;
determining a maximum index corresponding to the first image by adopting a softmax function;
and determining the type of the target object according to the maximum index.
Optionally, the inputting the first image into a trained neural network to determine whether the target object exists in the image information includes:
inputting the first image into an NPU operator;
marking a target object under the condition that the target object exists in the first image;
classifying the target objects in the first image and retaining the indices larger than a preset threshold;
performing a non-maximum suppression operation on the indices;
determining a positioning frame of the target object according to the indices after the non-maximum suppression operation;
and determining the current position of the target object according to the marked target object and the positioning frame.
Optionally, the controller is configured to:
inputting the first image into an NPU operator;
judging, from the pixel output of the first image and a threshold value, whether the first image contains a plurality of detection types;
segmenting the first image according to the region where each detection type is located; and
using the trained neural network to recognize each segmented region respectively.
With this technical scheme, the main camera mounted on the pan-tilt mechanism acquires image information of the target area, the controller processes that image information to determine whether a target object has entered the target area, and, upon determining that one has, the controller marks and tracks the target object and issues warning information. The intelligent pan-tilt camera can inspect multiple preset points, offers higher stability and longer endurance, can search a target area for regions of interest and autonomously perform functions such as focusing and tracking, and can upload information and raise an alarm in real time when a problem is found.
Additional features and advantages of embodiments of the application will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain, without limitation, the embodiments of the application. In the drawings:
FIG. 1 is a block diagram of a smart pan-tilt camera according to one embodiment of the present application;
FIG. 2 is a flow chart of foreign object detection for a smart pan-tilt camera according to one embodiment of the present application;
FIG. 3 is a flow chart of pedestrian crossing detection for a smart pan-tilt camera in accordance with one embodiment of the present application;
FIG. 4 is a flowchart of protective-equipment wear detection for a smart pan-tilt camera according to one embodiment of the present application;
FIG. 5 is a flow chart of a patrol mode of a smart pan-tilt camera according to one embodiment of the present application;
FIG. 6 is a flow chart of image preprocessing for a smart pan-tilt camera according to an embodiment of the present application;
FIG. 7 is a flow chart of object classification for a smart pan-tilt camera according to an embodiment of the present application;
FIG. 8 is a flowchart of object positioning for a smart pan-tilt camera according to one embodiment of the present application;
FIG. 9 is a target zone segmentation flow diagram of a smart pan-tilt camera according to one embodiment of the present application;
FIG. 10 is a target object boundary-crossing model diagram of a smart pan-tilt camera according to an embodiment of the present application.
Description of the reference numerals
1. Controller  2. Pan-tilt mechanism  3. Main camera
Detailed Description
The following describes the detailed implementation of the embodiments of the present application with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the application, are not intended to limit the application.
FIG. 1 is a block diagram of a smart pan-tilt camera according to an embodiment of the present application, and FIG. 2 is a foreign-object detection flowchart of the same camera. The intelligent pan-tilt camera comprises a main camera 3, a pan-tilt mechanism 2, and a controller 1. The main camera 3 acquires image information of a target area; the pan-tilt mechanism 2 carries the main camera 3 and can drive it to rotate so as to acquire image information from multiple angles. In one embodiment of the present application, the pan-tilt mechanism 2 may be a three-way smart pan-tilt so that the mounted main camera 3 can acquire image information from more angles. The controller 1 is connected to the main camera 3 and the pan-tilt mechanism 2; it controls the rotation of the pan-tilt mechanism 2 and processes the image information acquired by the main camera 3. As shown in FIG. 2, after the main camera 3 acquires image information, the controller 1 may perform foreign-object detection on it, comprising:
in step S10, foreign matter detection is started, and images of all objects on site are acquired;
in step S11, it is detected whether a target object has entered the target area;
in step S12, when it is determined that a target object is present in the target area, the target object is given an ID mark and tracked.
If no target object is detected in the target area, the foreign-object detection step continues.
In detecting and ID-marking foreign objects, plastic bags typically need to be marked frequently, because their light weight makes them easy to blow into a substation. When a plastic bag is blown into the target area of the substation, the controller 1 acquires the image information of the target area; if a plastic bag is detected in the target area, it is assigned an ID, continuously tracked, and warning information is sent.
In one example of the present application, FIG. 10 is a model diagram of target-object boundary crossing for a smart pan-tilt camera according to one embodiment of the present application. When detecting a target object in the target area, the IOU value of the target object can be calculated by formula (1),
IOU=C/A, (1)
wherein IOU is the IOU value, C is the intersection area of the target object and the target area, and A is the area of the target object.
When the calculated IOU value of the target object exceeds the preset threshold, the controller 1 sends out a warning signal to remind the staff. Staff are thus alerted only when the target object intrudes into the defined target area, avoiding the problem with existing cameras, which alert whenever a foreign object appears anywhere in the shooting range and thus greatly drain the staff's attention.
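The boundary-crossing check above can be illustrated with a minimal sketch. The function below is an illustrative assumption about the geometry only: it treats both the target object and the target area as axis-aligned rectangles and computes the patent's IOU = C/A, which differs from the usual intersection-over-union.

```python
def iou_value(obj_box, target_box):
    """Compute the patent's IOU = C / A, where C is the intersection
    area between the object box and the target area and A is the area
    of the object box itself. Boxes are (x1, y1, x2, y2), x1 < x2,
    y1 < y2."""
    ox1, oy1, ox2, oy2 = obj_box
    tx1, ty1, tx2, ty2 = target_box
    # Intersection rectangle, clamped to zero if the boxes are disjoint
    iw = max(0.0, min(ox2, tx2) - max(ox1, tx1))
    ih = max(0.0, min(oy2, ty2) - max(oy1, ty1))
    c = iw * ih                       # intersection area C
    a = (ox2 - ox1) * (oy2 - oy1)     # object area A
    return c / a if a > 0 else 0.0

# An object halfway inside the target area yields IOU = 0.5
print(iou_value((0, 0, 2, 2), (1, 0, 3, 2)))  # 0.5
```

Dividing by the object's own area (rather than the union) means the value reaches 1.0 as soon as the object lies entirely inside the target area, which matches the intrusion semantics described above.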
In one example of the present application, FIG. 3 is a pedestrian boundary-crossing detection flowchart of a smart pan-tilt camera according to one embodiment of the present application. Detection of target objects in the target area includes pedestrian detection: the pedestrian-crossing program started by the intelligent pan-tilt camera monitors pedestrians' boundary-crossing behavior.
In step S20, pedestrian detection is started, and images of all pedestrians on the scene are acquired;
in step S21, it is detected whether any pedestrian in the target area exhibits boundary-crossing behavior;
in step S22, when boundary-crossing behavior is found, the controller 1 issues a warning signal to prompt the pedestrian;
in step S23, in the normal state where no pedestrian crossing is detected, no warning signal is issued and pedestrian detection continues.
When detecting abnormal pedestrian behavior, the intelligent pan-tilt camera does not need to transmit data to the background for workers to judge whether a pedestrian is out of bounds. Once it determines that a pedestrian has crossed the boundary, it continuously tracks the pedestrian, transmits the data to the background to alert the workers, and sends a warning signal to the pedestrian. This makes it easy to monitor boundary-crossing behavior and promptly tell pedestrians to stay away from the target area.
In one example of the present application, FIG. 4 is a flowchart of protective-equipment wear detection for a smart pan-tilt camera according to one embodiment of the present application. The intelligent pan-tilt camera can monitor whether pedestrians in the target area are wearing protective equipment.
In step S30, the detection of the protective articles is started, and the images of all pedestrians on the scene are acquired;
in step S31, it is detected whether each pedestrian is wearing a safety helmet and work clothing;
in step S32, if it is determined that any pedestrian is not wearing a safety helmet or work clothing, a warning message is sent to prompt that pedestrian;
in step S33, if it is determined that the pedestrians are wearing their safety helmets and work clothing, no warning signal is issued and protective-equipment detection continues.
In protective-equipment detection, images of all pedestrians in the target area are first acquired and their attire is checked. If a pedestrian is detected without a safety helmet or work clothing, a warning signal is issued and the information is transmitted to the background; if the pedestrian's protective equipment is in order, no warning is issued and monitoring continues.
In one example of the present application, FIG. 5 is a flowchart of a patrol mode of a smart pan-tilt camera according to one embodiment of the present application. The intelligent pan-tilt camera also provides a patrol mode, in which the controller 1 may perform the inspection steps, including:
in step S40, the pan-tilt mechanism 2 is controlled to rotate the main camera 3 to a plurality of preset angles;
in step S41, a meter image captured by the main camera 3 at each preset angle is acquired;
in step S42, performing at least one of meter detection, equipment status monitoring, foreign object detection, and defect detection on the meter image;
in step S43, if any of the meter detection, equipment status monitoring, foreign object detection, or defect detection fails, issuing a warning message to prompt the management personnel.
In patrol mode, an operator can predefine the areas to inspect and the content to check. The pan-tilt mechanism 2 is controlled to rotate so that the main camera 3 captures meter images at several preset angles, taking the preset points and angles one by one according to the operator's patrol task. Each meter image is then subjected to its corresponding detection task, which may be at least one of meter detection, equipment status monitoring, foreign object detection, and defect detection. If any check fails, warning information is sent to remind the management personnel; the next check is then executed until the patrol task is complete, after which the camera returns to its initial point. After the patrol mode ends, the intelligent pan-tilt camera resumes conventional monitoring at the initial point. The patrol task's workflow can also be saved so that the next patrol can be selected directly.
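The patrol loop described above can be sketched as follows. All names here (`rotate_to`, `capture`, the check predicates, `alert`) are hypothetical stand-ins for the camera's actual interfaces, and the (pan, tilt) preset angles are purely illustrative.

```python
# Illustrative preset points: (pan, tilt) pairs in degrees.
PRESET_ANGLES = [(0, 10), (90, 10), (180, 10), (270, 10)]

def patrol(rotate_to, capture, checks, alert):
    """One inspection pass (steps S40-S43): rotate to each preset
    angle, capture a meter image, run the configured checks, and alert
    on any failure. `checks` maps a check name (e.g. "meter detection")
    to a predicate that returns True when the image passes."""
    for pan, tilt in PRESET_ANGLES:
        rotate_to(pan, tilt)            # step S40
        image = capture()               # step S41
        for name, passed in checks.items():   # step S42
            if not passed(image):
                # step S43: warn management personnel on failure
                alert(f"{name} failed at pan={pan}, tilt={tilt}")
    rotate_to(*PRESET_ANGLES[0])        # return to the initial point
```

Passing the hardware operations in as callables keeps the patrol schedule testable without a real pan-tilt head.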
In one example of the present application, FIG. 6 is an image preprocessing flowchart of a smart pan-tilt camera according to one embodiment of the present application. The controller 1 of the intelligent pan-tilt camera can preprocess images, comprising:
in step S50, the user configures the interface;
in step S51, image information is acquired by the main camera 3;
in step S52, a conversion operation is performed on the image information according to formulas (1) to (3),
R=Y+1.4075*V, (1)
G=Y-0.3455*U-0.7169*V, (2)
B=Y+1.779*U, (3)
wherein R, G and B are the color channels of an RGB-type image, and Y, U and V are the color channels of a YUV-type image;
in step S53, the mean and variance of all pixels of the image information after the conversion operation are calculated;
in step S54, all pixels of the converted image information are traversed and the mean is subtracted from each pixel;
in step S55, the result of the subtraction is divided by the variance to obtain a first image;
in step S56, the first image is input into a trained neural network to determine whether a target is present in the image information.
When preprocessing an image, the image acquired from the main camera 3 is generally of YUV type, while typical algorithm models receive RGB or BGR data, so the YUV data must be converted to RGB. The pixel-value distribution of a freshly acquired image is usually dispersed; after steps S53, S54, and S55 the data distribution becomes more concentrated. The processed image information is then fed into a trained neural network to determine whether a target object is present in the image.
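The preprocessing pipeline of steps S51-S55 can be sketched as below. The conversion coefficients come from equations (1)-(3); dividing by the variance follows the patent's wording, although dividing by the standard deviation is the more common normalization choice.

```python
import numpy as np

def preprocess(yuv):
    """YUV -> RGB per equations (1)-(3), then zero-mean scaling.
    `yuv` is an (H, W, 3) float array with channels Y, U, V."""
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    r = y + 1.4075 * v                    # equation (1)
    g = y - 0.3455 * u - 0.7169 * v       # equation (2)
    b = y + 1.779 * u                     # equation (3)
    rgb = np.stack([r, g, b], axis=-1)    # step S52
    mean = rgb.mean()                     # step S53: mean over all pixels
    var = rgb.var()                       # step S53: variance over all pixels
    return (rgb - mean) / var             # steps S54-S55: the "first image"
```

The result has zero mean by construction, which is the "more concentrated" data distribution the description refers to.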
In one example of the present application, fig. 7 is a flowchart of object classification for a smart pan-tilt camera according to one embodiment of the present application. After the image information is acquired, the images may be classified, including:
in step S60, inputting the first image into the NPU operator;
in step S61, determining a maximum index corresponding to the first image using a softmax function;
in step S62, the type of the object is determined from the maximum index.
The preprocessed image information may be fed into a trained neural network to determine whether it contains a target object. When a target object is present in the target area, the NPU operates on the processed information, a softmax function is then applied to the first image of the target area to obtain the maximum index, and comparing that maximum index determines the type of the target object in the first image.
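Steps S60-S62 amount to a softmax followed by an argmax over the network's output. The sketch below assumes the NPU returns a vector of class logits; the class list is purely illustrative and not taken from the patent.

```python
import numpy as np

# Hypothetical class list for illustration only.
CLASS_NAMES = ["background", "pedestrian", "plastic_bag", "vehicle"]

def classify(logits):
    """Apply softmax to the output logits (step S61) and return the
    class at the maximum index together with its score (step S62)."""
    z = np.exp(logits - logits.max())   # subtract max for numerical stability
    probs = z / z.sum()                 # softmax
    idx = int(np.argmax(probs))         # the "maximum index"
    return CLASS_NAMES[idx], float(probs[idx])

label, score = classify(np.array([0.1, 2.5, 0.3, 0.2]))
print(label)  # pedestrian
```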
In one example of the present application, fig. 8 is a flowchart of object positioning of a smart pan-tilt camera according to one embodiment of the present application. After the image information is obtained, the target object in the first image may be located, including:
in step S70, inputting the first image into the NPU operator;
in step S71, in the case where it is determined that the target object exists in the first image, the target object is marked;
in step S72, classifying the target objects in the first image and retaining the index larger than the preset threshold;
in step S73, a non-maximum suppression operation is performed on the index;
in step S74, a positioning frame of the target object is determined according to the index after the non-maximum suppression operation;
in step S75, the current position of the target object is determined according to the marked target object and the positioning frame.
Feeding the preprocessed image information into the trained neural network to determine whether it contains a target object may include locating the target object in the first image. The NPU operates on the input first image and, when target objects are present in the target area, marks all of them. All target objects are classified and those above the threshold are retained; a non-maximum suppression operation on the retained detections yields the positioning frames of the classified target objects. The obtained positioning frames are compared with all marked target objects, and the current position of each target object is determined from its mark and positioning frame. The current position of the target object is then output so that staff can determine it from the output information.
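Steps S72-S74 correspond to score thresholding followed by greedy non-maximum suppression. The sketch below is a standard NMS implementation, not the patent's exact NPU procedure; the box format and both thresholds are assumptions for illustration.

```python
import numpy as np

def non_max_suppression(boxes, scores, score_thr=0.5, iou_thr=0.45):
    """Keep detections scoring at least `score_thr` (step S72), then
    greedily suppress boxes overlapping a higher-scoring box by more
    than `iou_thr` (step S73). Boxes are (x1, y1, x2, y2) rows."""
    keep = scores >= score_thr            # step S72: drop low scores
    boxes, scores = boxes[keep], scores[keep]
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    order = np.argsort(scores)[::-1]      # highest score first
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        # Intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou <= iou_thr]      # step S73: suppress overlaps
    return boxes[kept]                    # step S74: final positioning frames
```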
In one example of the present application, fig. 9 is a target area segmentation flowchart of a smart pan-tilt camera according to one embodiment of the present application. The first image may be segmented after the first image information is acquired, including:
in step S80, inputting the first image into the NPU operator;
in step S81, it is judged, from the pixel output of the first image and a threshold value, whether the first image contains a plurality of detection types;
in step S82, the first image is segmented according to the region where each detection type is located;
in step S83, the first images after segmentation are identified by using a trained neural network.
After the preprocessed image information is input into the NPU for computation, the NPU can judge from the pixel output of the first image and a threshold value whether the first image contains multiple detection types; if it does, the region where each detection type is located can be segmented. After segmentation, the trained neural network can be used to recognize each region, and the recognized information is then transmitted.
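Steps S80-S83 can be sketched as follows, under the assumption (not stated in the patent) that the NPU's pixel output is a per-pixel score map over K detection types; each type found present has its bounding region cropped out for separate recognition.

```python
import numpy as np

def split_by_type(first_image, type_mask, score_thr=0.5):
    """`type_mask` is an (H, W, K) per-pixel score map over K detection
    types, a hypothetical stand-in for the NPU pixel output. A type is
    present if any pixel exceeds `score_thr` (step S81); for each
    present type, crop the bounding region of its pixels (step S82) so
    a per-type network can recognize it separately (step S83)."""
    regions = {}
    for k in range(type_mask.shape[-1]):
        ys, xs = np.where(type_mask[..., k] > score_thr)
        if ys.size == 0:
            continue                      # type k not in the image
        # Bounding region of the pixels assigned to type k
        crop = first_image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        regions[k] = crop
    return regions
```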
With this technical scheme, the main camera mounted on the pan-tilt mechanism acquires image information of the target area, the controller processes that image information to determine whether a target object has entered the target area, and, upon determining that one has, the controller marks and tracks the target object and issues warning information. The intelligent pan-tilt camera can inspect multiple preset points, offers higher stability and longer endurance, can search a target area for regions of interest and autonomously perform functions such as focusing and tracking, and can upload information and raise an alarm in real time when a problem is found.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises an element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (8)

1. An intelligent pan-tilt camera, characterized in that the intelligent pan-tilt camera comprises:
a main camera for acquiring image information of a target area;
a pan-tilt mechanism on which the main camera is mounted and which drives the main camera to rotate so as to acquire image information from multiple angles; and
a controller, connected to the main camera and the pan-tilt mechanism, configured to:
determine whether a target object has entered the target area; and
mark and track the target object when it is determined that a target object has entered the target area;
the controller is used for:
acquiring the image information through the main camera;
performing a conversion operation on the image information according to formulas (1) to (3),
R=Y+1.4075*V,(1)
G=Y-0.3455*U-0.7169*V,(2)
B=Y+1.779*U,(3)
wherein R, G and B are the color channels of an RGB-type image, and Y, U and V are the color channels of a YUV-type image;
calculating the mean value and variance of all pixels of the image information after the conversion operation;
traversing all pixels of the converted image information and subtracting the mean from each pixel;
dividing the result of the subtraction by the variance to obtain a first image;
and inputting the first image into a trained neural network to determine whether the target object exists in the image information.
2. The intelligent pan-tilt camera of claim 1, wherein said controller is configured to calculate an IOU value of said target object according to equation (1),
IOU=C/A,(1)
wherein IOU is the IOU value, C is the intersection area of the target object and the target area, and A is the area of the target object.
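As a reading aid (not part of the claims), the ratio of claim 2 can be sketched for axis-aligned boxes; the function name and box convention are illustrative:

```python
def claim_iou(obj, area):
    # Boxes as (x1, y1, x2, y2). Per claim 2, IOU = C / A, where C is
    # the intersection of the object box with the target area and A is
    # the object's own area -- note this differs from the conventional
    # intersection-over-union, which divides by the union area.
    ix = max(0.0, min(obj[2], area[2]) - max(obj[0], area[0]))
    iy = max(0.0, min(obj[3], area[3]) - max(obj[1], area[1]))
    a = (obj[2] - obj[0]) * (obj[3] - obj[1])
    return (ix * iy) / a if a > 0 else 0.0
```

Under this definition the value reaches 1.0 whenever the object lies entirely inside the target area, which suits an intrusion test better than the symmetric union-based IoU.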
3. The intelligent pan-tilt camera of claim 1, wherein the target object comprises a pedestrian; and
the controller is configured to send out warning information to prompt the pedestrian when the pedestrian enters the target area.
4. The intelligent pan-tilt camera of claim 3, wherein the controller is further configured for:
acquiring images of all pedestrians on site;
judging, for each pedestrian, whether the pedestrian is wearing a safety helmet and work clothing; and
sending out warning information to prompt any pedestrian who is not wearing a safety helmet or work clothing.
5. The intelligent pan-tilt camera of claim 1, wherein the controller is configured for:
controlling the pan-tilt mechanism to rotate the main camera to a plurality of preset angles;
acquiring a meter image captured by the main camera at each preset angle;
performing at least one of meter detection, equipment status monitoring, foreign object detection, and defect detection on the meter image; and
sending out a warning message to alert management personnel when any one of the meter detection, equipment status monitoring, foreign object detection, and defect detection fails.
6. The intelligent pan-tilt camera of claim 1, wherein inputting the first image into the trained neural network to determine whether the target object is present in the image information comprises:
inputting the first image into an NPU operator;
determining a maximum index corresponding to the first image using a softmax function; and
determining the type of the target object according to the maximum index.
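As a reading aid (not part of the claims), the softmax-and-maximum-index step of claim 6 can be sketched as follows; the function name and class labels are illustrative:

```python
import numpy as np

def classify(logits, class_names):
    # Claim 6 sketch: apply softmax to the network's output scores,
    # then take the index of the maximum probability as the type of
    # the target object.
    z = np.asarray(logits, dtype=float)
    probs = np.exp(z - z.max())   # subtract max for numerical stability
    probs /= probs.sum()
    idx = int(np.argmax(probs))
    return idx, class_names[idx]
```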
7. The intelligent pan-tilt camera of claim 1, wherein inputting the first image into the trained neural network to determine whether the target object is present in the image information comprises:
inputting the first image into an NPU operator;
marking the target object when a target object is present in the first image;
classifying the target objects in the first image and retaining the indexes greater than a preset threshold;
performing a non-maximum suppression operation on the indexes;
determining a positioning frame of the target object according to the indexes remaining after the non-maximum suppression operation; and
determining the current position of the target object according to the marked target object and the positioning frame.
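As a reading aid (not part of the claims), the non-maximum suppression step of claim 7 can be sketched with the standard greedy algorithm; box layout and threshold are illustrative assumptions, and here the conventional union-based IoU is used for overlap, as is usual for NMS:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy non-maximum suppression: repeatedly keep the highest-scoring
    # box and discard remaining boxes overlapping it above the threshold.
    boxes = np.asarray(boxes, dtype=float)
    order = np.asarray(scores, dtype=float).argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of the kept box with every remaining candidate.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]
    return keep
```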
8. The intelligent pan-tilt camera of claim 1, wherein the controller is configured for:
inputting the first image into an NPU operator;
judging, according to the pixel outputs of the first image and a threshold value, whether the first image includes a plurality of detection types;
segmenting the first image according to the region in which each detection type is located; and
identifying each segmented sub-image using the trained neural network.
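As a reading aid (not part of the claims), one plausible reading of the segmentation step of claim 8 is sketched below. The claim does not specify the output format, so the sketch assumes the NPU operator yields a per-type score map of shape (num_types, H, W); the function name and threshold are illustrative:

```python
import numpy as np

def split_by_detection_type(score_maps, image, thresh=0.5):
    # Assumed layout: score_maps is (num_types, H, W) of per-pixel
    # network outputs. For each detection type responding above the
    # threshold anywhere, crop the bounding region of its
    # above-threshold pixels so the sub-image can be identified
    # separately by the trained network.
    crops = {}
    for t, smap in enumerate(np.asarray(score_maps, dtype=float)):
        ys, xs = np.where(smap > thresh)
        if ys.size == 0:
            continue
        crops[t] = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return crops
```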
CN202110881270.8A 2021-08-02 2021-08-02 Intelligent cradle head camera Active CN113743214B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110881270.8A CN113743214B (en) 2021-08-02 2021-08-02 Intelligent cradle head camera

Publications (2)

Publication Number Publication Date
CN113743214A CN113743214A (en) 2021-12-03
CN113743214B true CN113743214B (en) 2023-12-12

Family

ID=78729722

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332779A (en) * 2022-03-15 2022-04-12 云丁网络技术(北京)有限公司 Method for monitoring target object and related equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003134382A (en) * 2002-08-30 2003-05-09 Canon Inc Camera controller
CN103747217A (en) * 2014-01-26 2014-04-23 国家电网公司 Video monitoring method and device
CN105894702A (en) * 2016-06-21 2016-08-24 南京工业大学 Invasion detecting alarming system based on multi-camera data combination and detecting method thereof
CN106846284A (en) * 2016-12-28 2017-06-13 武汉理工大学 Active-mode intelligent sensing device and method based on cell
CN111862169A (en) * 2020-06-22 2020-10-30 上海摩象网络科技有限公司 Target follow-shooting method and device, pan-tilt camera and storage medium
CN112381778A (en) * 2020-11-10 2021-02-19 国网浙江嵊州市供电有限公司 Transformer substation safety control platform based on deep learning
CN113052107A (en) * 2021-04-01 2021-06-29 北京华夏启信科技有限公司 Method for detecting wearing condition of safety helmet, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 397, Tongcheng South Road, Baohe District, Hefei City, Anhui Province 230061

Applicant after: Super high voltage branch of State Grid Anhui Electric Power Co.,Ltd.

Address before: No.8, jincui Road, Shuangfeng Industrial Park, Fuyang North Road, Changfeng County, Hefei City, Anhui Province

Applicant before: STATE GRID ANHUI POWER SUPPLY COMPANY OVERHAUL BRANCH

GR01 Patent grant