CN115361504A - Monitoring video processing method and device and electronic equipment - Google Patents

Info

Publication number
CN115361504A
Authority
CN
China
Prior art keywords
terminal
monitoring data
target
confidence
set value
Prior art date
Legal status
Granted
Application number
CN202211298746.6A
Other languages
Chinese (zh)
Other versions
CN115361504B
Inventor
韩丽
何杰
刘凯
蒋琦
郑洪雷
李泽
Current Assignee
Tower Zhilian Technology Co ltd
China Tower Co Ltd
Original Assignee
Tower Zhilian Technology Co ltd
China Tower Co Ltd
Priority date
Filing date
Publication date
Application filed by Tower Zhilian Technology Co ltd and China Tower Co Ltd
Priority to CN202211298746.6A
Publication of CN115361504A
Application granted
Publication of CN115361504B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a monitoring video processing method and device and an electronic device, and relates to the technical field of video processing. The method comprises the following steps: receiving first alarm information sent by a terminal, the first alarm information comprising first monitoring data, a first target confidence and a first position of the terminal when the first monitoring data was shot; confirming a second position of the terminal according to the first monitoring data and the first position in the case that the first target confidence is smaller than a set value; controlling the terminal to move to the second position and perform three-dimensional amplification shooting to obtain amplified second monitoring data; when second alarm information sent by the terminal is received and the second target confidence included in the second alarm information is smaller than the set value, continuing to control the terminal to shoot at the second position; and releasing the control right of the terminal when no second alarm information is received from the terminal or the second target confidence is greater than or equal to the set value. The invention effectively improves recognition accuracy through three-dimensional amplification shooting.

Description

Monitoring video processing method and device and electronic equipment
Technical Field
The invention relates to the technical field of video processing, in particular to a monitoring video processing method and device and electronic equipment.
Background
With the development of video services, intelligent monitoring technology has been widely applied in the monitoring field: a designated area is monitored through surveillance video, and a target object is identified from that video. In the related art, the monitored shots are often not clear, so the algorithm run on the captured picture or video produces large errors and the accuracy of monitoring recognition is low.
The prior art therefore suffers from low accuracy in monitoring recognition.
Disclosure of Invention
The embodiment of the invention provides a monitoring video processing method, a monitoring video processing device and electronic equipment, and aims to solve the problem that the accuracy of monitoring identification is low in the prior art.
In order to solve the problems, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for processing a surveillance video, including:
receiving first alarm information sent by a terminal, wherein the terminal is used for carrying out video monitoring, and the first alarm information comprises first monitoring data, a first target confidence corresponding to the first monitoring data and a first position of the terminal when the first monitoring data is shot;
confirming a second position of the terminal according to the first monitoring data and the first position under the condition that the first target confidence degree is smaller than a set value;
acquiring the control right of the terminal;
controlling the terminal to move to the second position, and controlling the terminal to perform three-dimensional amplification shooting to obtain amplified second monitoring data;
when second alarm information sent by the terminal is received and a second target confidence degree included in the second alarm information is smaller than a set value, the terminal is continuously controlled to shoot at the second position, and the second target confidence degree is used for representing a confidence degree corresponding to the second monitoring data;
and releasing the control right of the terminal under the condition that second alarm information sent by the terminal is not received or the second target confidence degree is greater than or equal to a set value.
In a second aspect, an embodiment of the present invention further provides a method for processing a surveillance video, including:
performing confidence calculation on the shot first monitoring data to obtain a first target confidence;
sending first warning information to network equipment under the condition that the first target confidence coefficient is greater than a set value, wherein the first warning information comprises the first monitoring data, the first target confidence coefficient and a first position of the terminal when the first monitoring data is shot;
moving to a second position according to the control of the network equipment, and carrying out three-dimensional amplification shooting to obtain second monitoring data, wherein the second position is calculated on the basis of the first monitoring data and the first position after the network equipment receives the first alarm information;
performing confidence calculation on the second monitoring data to obtain a second target confidence;
and sending second alarm information to the network equipment under the condition that the second target confidence coefficient is greater than the set value, wherein the second alarm information comprises the second monitoring data, the second target confidence coefficient and the second position.
In a third aspect, an embodiment of the present invention further provides a surveillance video processing apparatus, including:
the receiving module is used for receiving first warning information sent by a terminal, wherein the terminal is used for carrying out video monitoring, and the first warning information comprises first monitoring data, a first target confidence corresponding to the first monitoring data and a first position of the terminal when the first monitoring data is shot;
the confirming module is used for confirming a second position of the terminal according to the first monitoring data and the first position under the condition that the first target confidence coefficient is smaller than a set value;
the acquisition module is used for acquiring the control right of the terminal;
the control module is used for controlling the terminal to move to the second position and controlling the terminal to carry out three-dimensional amplification shooting to obtain amplified second monitoring data;
the first processing module is used for continuously controlling the terminal to shoot at the second position under the condition that second alarm information sent by the terminal is received and a second target confidence coefficient included in the second alarm information is smaller than a set value, wherein the second target confidence coefficient is used for representing a confidence coefficient corresponding to the second monitoring data;
and the second processing module is used for releasing the control right of the terminal under the condition that second alarm information sent by the terminal is not received or the second target confidence degree is greater than or equal to a set value.
In a fourth aspect, an embodiment of the present invention further provides a surveillance video processing apparatus, including:
the first confirming module is used for carrying out confidence calculation on the shot first monitoring data to obtain a first target confidence;
a first sending module, configured to send first warning information to a network device when the first target confidence is greater than a set value, where the first warning information includes the first monitoring data, the first target confidence, and a first position of the terminal when the first monitoring data is captured;
the first processing module is used for moving to a second position according to the control of the network equipment and carrying out three-dimensional amplification shooting to obtain second monitoring data, wherein the second position is calculated by the network equipment based on the first monitoring data and the first position after receiving the first alarm information;
the second confirmation module is used for carrying out confidence calculation on the second monitoring data to obtain a second target confidence;
a second sending module, configured to send second warning information to the network device when the second target confidence is greater than the set value, where the second warning information includes the second monitoring data, the second target confidence, and the second location.
In a fifth aspect, embodiments of the present invention further provide an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when the computer program is executed by the processor, the method according to the first aspect is implemented, or the method according to the second aspect is implemented.
In a sixth aspect, embodiments of the present invention further provide a readable storage medium, for storing a program, where the program, when executed by a processor, implements the steps in the method according to the first aspect, or implements the steps in the method according to the second aspect.
In the embodiment of the invention, under the condition of receiving the first alarm information sent by the terminal, the second position is confirmed through the first monitoring data and the first position, and then the terminal is controlled to move to the second position for three-dimensional amplification shooting, so that the image is clearer, the terminal can more accurately identify whether the set object exists through an algorithm, and the accuracy of monitoring identification is improved.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a flowchart of a surveillance video processing method according to an embodiment of the present invention;
FIG. 2 is a schematic three-dimensional enlarged flow chart provided by an embodiment of the present invention;
fig. 3 is a flowchart of a surveillance video processing method according to an embodiment of the present invention;
fig. 4 is a structural diagram of a surveillance video processing apparatus according to an embodiment of the present invention;
fig. 5 is a block diagram of a surveillance video processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a surveillance video processing method according to an embodiment of the present invention, which is executed by a network device, and as shown in fig. 1, the method includes the following steps:
step S101, receiving first alarm information sent by a terminal, wherein the terminal is used for carrying out video monitoring, and the first alarm information comprises first monitoring data, a first target confidence degree corresponding to the first monitoring data and a first position of the terminal when the first monitoring data is shot.
The terminal is a terminal for monitoring videos and can be a monitoring camera or a mobile camera device.
It is to be understood that the first monitoring data includes at least one of a picture or a video. After the terminal takes a picture or records a video, the captured image or video is analyzed and its confidence is calculated by an algorithm to obtain the corresponding confidence. Different algorithms yield different confidences for the same image or video; each algorithm is analyzed independently, and the data associated with different algorithms are kept separate and are not used together.
The first target confidence coefficient is directly obtained by the terminal through performing confidence coefficient calculation through an algorithm.
The terminal identifies the object through the first set value N1, the second set value N2 and the confidence coefficient. Wherein the first set value N1 and the second set value N2 have the following relationship:
0<N1<N2<1
under the condition that the confidence coefficient is smaller than a first set value N1, judging that an object needing to be identified by the algorithm is not detected at the moment;
under the condition that the confidence is greater than the first set value N1 and smaller than the second set value N2, it is determined that an object that the algorithm needs to recognize is suspected to be detected, and the terminal sends first alarm information to the network device;
in the case where the confidence is greater than the second set value N2, it is determined that an object that the algorithm needs to recognize is detected at this time.
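As an illustrative, non-authoritative sketch of the threshold logic above, the following Python snippet shows how a confidence value might be classified against the two set values; the function name and the example threshold values are assumptions introduced here, not values taken from the patent.

```python
def classify_confidence(confidence: float, n1: float, n2: float) -> str:
    """Classify a detection confidence against the two set values (0 < N1 < N2 < 1)."""
    assert 0.0 < n1 < n2 < 1.0, "thresholds must satisfy 0 < N1 < N2 < 1"
    if confidence < n1:
        return "not_detected"   # object to be recognized is not detected
    if confidence < n2:
        return "suspected"      # suspected detection: terminal sends first alarm information
    return "detected"           # object to be recognized is detected


# Example: with N1 = 0.3 and N2 = 0.8, a confidence of 0.5 is only a suspected detection.
print(classify_confidence(0.5, 0.3, 0.8))  # -> "suspected"
```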
And step S102, confirming a second position of the terminal according to the first monitoring data and the first position under the condition that the first target confidence degree is smaller than a set value.
Wherein the set value is the second set value N2. As is apparent from the above description, the present application aims to improve the accuracy with which the terminal recognizes the set object by optimizing the case where the confidence is greater than the first set value N1 and smaller than the second set value N2.
It should be understood that if the first target confidence is greater than the second set value N2, it can be directly determined that the set object is detected without performing recognition.
The first monitoring data includes the detected alarm object, and the first position is the position at which the first monitoring data was shot. From the first monitoring data and the first position, the second position at which the terminal will perform three-dimensional amplification on the alarm object can be confirmed, as described in the following embodiments.
And step S103, acquiring the control right of the terminal.
It should be understood that acquiring the control right of the terminal means, in practice, that the network device sends a control request to the terminal and, after the terminal replies, sends corresponding control information to the terminal, thereby controlling the terminal to move or shoot.
And step S104, controlling the terminal to move to the second position, and controlling the terminal to perform three-dimensional amplification shooting to obtain amplified second monitoring data.
Relative to the first position, the second position is closer to the alarm object. The terminal then performs three-dimensional amplification shooting; specifically, the terminal moves to the position closer to the alarm object and shoots from there, so as to obtain a clearer picture or video.
And step S105, when second alarm information sent by the terminal is received and a second target confidence coefficient included in the second alarm information is smaller than a set value, continuing to control the terminal to shoot at the second position, wherein the second target confidence coefficient is used for representing a confidence coefficient corresponding to the second monitoring data.
Wherein the second monitoring data comprises at least one of a picture or a video. And after the terminal obtains the second monitoring data through shooting, calculating the second monitoring data through an algorithm, and confirming the corresponding second target confidence coefficient.
The second target confidence may be obtained by directly performing confidence calculation on the second monitoring data through an algorithm, or may be obtained by performing weighted calculation on the second monitoring data and the first target confidence after the second monitoring data is calculated through the algorithm, which is described in the subsequent embodiments.
It should be understood that when the confidence of the second target is still between the first set value N1 and the second set value N2, it is still determined that the target is suspected to be detected, and the three-dimensional magnification confirmation needs to be performed again.
And step S106, releasing the control right of the terminal under the condition that second alarm information sent by the terminal is not received or the second target confidence coefficient is larger than or equal to a set value.
It should be understood that, in the case where the second target confidence is smaller than the first set value N1, it is determined that the set object is not detected, and at this time, the terminal does not send the second warning information to the network device.
In addition, when the confidence of the second target is greater than the second set value N2, it is determined that the set object is detected, at this time, three-dimensional amplification is no longer needed, the network device releases the control right of the terminal, and the terminal continues to perform monitoring shooting according to the set mode.
In the embodiment, the network equipment confirms the second position through the first monitoring data and the first position under the condition of receiving the first warning information sent by the terminal, and then controls the terminal to move to the second position for three-dimensional amplification shooting, so that the image is clearer, the terminal can identify whether a set object exists through an algorithm more accurately, and the accuracy of monitoring identification is improved.
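The network-side flow of steps S101 to S106 can be summarized in the following minimal Python sketch; all class and method names (NetworkDevice, compute_second_position, acquire_control, and so on) are illustrative assumptions, not interfaces defined by the patent.

```python
class NetworkDevice:
    def __init__(self, n2: float):
        self.n2 = n2  # the set value (second set value N2)

    def handle_first_alarm(self, terminal, alarm):
        # Step S101: alarm carries first monitoring data, first target confidence, first position.
        if alarm.confidence >= self.n2:
            return  # set object already confirmed; no three-dimensional amplification needed

        # Step S102: confirm the second position from the first monitoring data and first position.
        second_pos = self.compute_second_position(alarm.monitoring_data, alarm.position)

        # Steps S103-S104: take control, move the terminal and perform amplification shooting.
        terminal.acquire_control()
        terminal.move_to(second_pos)
        second_alarm = terminal.zoom_shoot()

        # Steps S105-S106: keep shooting while the result stays "suspected", else release control.
        while second_alarm is not None and second_alarm.confidence < self.n2:
            second_alarm = terminal.zoom_shoot()
        terminal.release_control()

    def compute_second_position(self, monitoring_data, first_position):
        # Placeholder: derive a PTZ position whose shooting center covers the target frame center.
        raise NotImplementedError
```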
It should be understood that if the terminal does not support three-dimensional amplification, or the algorithm corresponding to the first alarm information does not support three-dimensional amplification, or the first position or the first monitoring data reported in the first alarm information is null, the conditions for three-dimensional amplification are not satisfied and no three-dimensional amplification operation is performed.
It should also be understood that when the difference between the time at which the network device receives the first alarm information and the shooting time of the first monitoring data is greater than a set value, the set object is considered likely to have moved away from the first position, and the three-dimensional amplification operation is not performed. This set value is flexibly configurable, and the set values corresponding to different algorithms can be configured independently.
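A minimal pre-check along these lines might look like the following sketch; the field names on the alarm and terminal objects (supports_3d_zoom, shot_timestamp, and so on) and the epoch-seconds timestamp are assumptions made for illustration only.

```python
import time

def amplification_allowed(alarm, terminal, max_delay_s: float) -> bool:
    """Return True only if the conditions for three-dimensional amplification are satisfied."""
    if not terminal.supports_3d_zoom or not alarm.algorithm_supports_3d_zoom:
        return False
    if alarm.first_position is None or alarm.monitoring_data is None:
        return False                                   # reported values are null
    # If the alarm arrives too long after the shot, the object may have left the first position.
    if time.time() - alarm.shot_timestamp > max_delay_s:
        return False
    return True
```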
In one embodiment, the first monitoring data comprises the detected alarm object and the corresponding first target box, and the second monitoring data comprises the detected alarm object and the corresponding second target box;
after the first warning message sent by the receiving terminal, the method further comprises:
controlling the terminal not to carry out three-dimensional amplification under the condition that the first target frame is larger than a set range;
under the condition of receiving second warning information sent by the terminal, the method further comprises the following steps:
under the condition that the confidence of the second target is smaller than a set value and the second target frame is smaller than a set range, continuously controlling the terminal to shoot at the second position;
and releasing the control right of the terminal under the condition that the second target confidence is smaller than a set value and the second target frame is larger than a set range.
And the alarm object is positioned in the first target frame and/or the second target frame.
It should be understood that if, after performing three-dimensional amplification multiple times, the terminal still cannot effectively determine whether the set object exists, the three-dimensional amplification operation would never end, monitoring resources would be wasted, and other areas could not be monitored. Therefore, the conditions for three-dimensional amplification need to be further limited.
In this embodiment, the three-dimensional amplification operation is limited by the size of the first target frame and/or the second target frame. When the target frame is larger than the set range, it is determined that the surveillance video can already shoot the object clearly; the reason the first target confidence and/or the second target confidence lies between the first set value N1 and the second set value N2 is then unrelated to the surveillance video itself, and no further three-dimensional amplification is performed.
For example, the setting value is set to 60%, and in the case where the first object frame and/or the second object frame is larger than 60% of the size of the taken picture or video, the network device does not control the terminal, or releases the control right of the terminal.
The reason why the first target confidence and/or the second target confidence lies between the first set value N1 and the second set value N2 may be, for example, insufficient optimization of the algorithm or a damaged camera lens on the terminal; the specific problem then needs to be addressed with a corresponding solution.
In addition, a threshold value of the number of times of three-dimensional enlargement of the terminal may be set. And under the condition that the number of times of three-dimensional amplification of the terminal exceeds a set threshold value, the network equipment does not control the terminal to execute the three-dimensional amplification operation and releases the control right of the terminal.
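The limiting conditions in this embodiment (a target frame larger than the set range, e.g. 60% of the picture, and a cap on the number of amplification operations) can be sketched as follows; the function names are illustrative, and interpreting "larger than 60% of the size" as an area ratio is an assumption.

```python
def frame_too_large(box_w_px: int, box_h_px: int,
                    frame_w_px: int, frame_h_px: int,
                    max_ratio: float = 0.6) -> bool:
    """True if the target frame already covers more than the set range of the picture."""
    return (box_w_px * box_h_px) / float(frame_w_px * frame_h_px) > max_ratio

def should_stop_amplification(box_size, frame_size, zoom_count: int, max_zooms: int) -> bool:
    # Stop amplifying when the target frame is large enough or the zoom count is exhausted.
    return frame_too_large(*box_size, *frame_size) or zoom_count >= max_zooms
```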
In one embodiment, in the case that the first target confidence is smaller than a set value, calculating a second position of the terminal according to the first monitoring data and the first position includes:
under the condition that the first target confidence degree is smaller than a set value, calculating the central position of the first target frame according to the first target frame;
and confirming a second position of the terminal according to the central position and the first position, so that when the terminal is at the second position, the shooting center of the terminal is superposed with the central position.
Illustratively, the coordinates of the four corners of the first target frame are, in order, (x1, y1), (x2, y1), (x2, y2) and (x1, y2). If the length pixel value of the first monitoring data is A and the width pixel value is B, the length pixel value a and the width pixel value b of the first target frame are determined by the following formulas:
a = (x2 - x1) × A, b = (y2 - y1) × B
The center position of the first target frame is confirmed by the following formula:
x = (x1 + x2) / 2, y = (y1 + y2) / 2
wherein x is the abscissa of the center position, and y is the ordinate of the center position.
In this embodiment, the shooting center of the terminal coincides with the center position of the first target frame, so as to improve the probability that the alarm object is shot in a three-dimensional enlarged manner.
The second position is PTZ position information of the terminal and is confirmed from the first position and the first monitoring data. It should be understood that when the terminal is at the second position, its shooting center coincides with the center position that the first target frame occupied at the first position.
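The center-position calculation can be sketched as follows. This assumes the target-frame corner coordinates are normalized to [0, 1], which is one plausible reading of the formulas above, and leaves the PTZ conversion as a placeholder; the function names are not taken from the patent.

```python
def target_frame_center(x1: float, y1: float, x2: float, y2: float):
    """Center of the first target frame in the same (normalized) coordinates as its corners."""
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def target_frame_pixels(x1, y1, x2, y2, frame_len_px: int, frame_wid_px: int):
    """Length/width pixel values a, b of the target frame, assuming normalized corners."""
    return abs(x2 - x1) * frame_len_px, abs(y2 - y1) * frame_wid_px

def second_position(first_ptz, center_xy):
    # Placeholder: map the desired shooting center onto new pan/tilt/zoom values relative to
    # the first position so that the shooting center coincides with center_xy.
    raise NotImplementedError
```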
In an embodiment, before the controlling the terminal to move to the second position and the controlling the terminal to perform three-dimensional enlarged shooting to obtain the enlarged second monitoring data, the method further includes:
recording a node of the cruise plan, including a third position of the terminal, in the case where the cruise plan is present at the terminal and the cruise plan is on;
controlling the terminal to stop the cruise plan after recording the third position;
before releasing the control right of the terminal, the method further comprises:
controlling the terminal to move to the third position, and controlling the terminal to execute the cruise plan from the node.
Wherein the cruise plan is a movement and photographing plan set for the terminal.
In this embodiment, the terminal is used for continuous periodic monitoring and usually has a cruise plan according to which surveillance video is shot. While three-dimensional amplification is being performed, the terminal cannot continue the cruise plan; to avoid abnormal operation of the terminal after the three-dimensional amplification, the current node needs to be recorded so that the terminal can resume the cruise plan once the three-dimensional amplification operation ends.
It should be understood that if the terminal does not have the cruise plan, the terminal may be considered to be in the fixed-point mode at this time, and the terminal is restored to the third position after the three-dimensional amplification operation is performed, and the terminal is not controlled to execute the cruise plan.
In one embodiment, the method further comprises:
under the condition that the control right of the terminal is failed to be acquired or the cruise plan is failed to stop, controlling the terminal not to carry out three-dimensional amplification;
and under the condition that the terminal fails to be controlled to move to the second position or the terminal fails to shoot, controlling the terminal to move to the third position, controlling the terminal to execute the cruise plan from the node, and releasing the control right of the terminal.
It should be understood that, while the network device controls the terminal, each step needs to be confirmed as completed, and any abnormality must end the operation immediately, so that the terminal is not occupied for a long time and the monitoring of other areas is not affected.
Specifically, referring to fig. 2, fig. 2 is a schematic diagram of a three-dimensional amplification process provided in an embodiment of the present invention. As shown in fig. 2, when the network device fails to acquire the control right of the terminal or fails to stop the cruise plan, it no longer controls the terminal to perform three-dimensional amplification; when controlling the terminal to move to the second position fails or shooting fails, the network device controls the terminal to move to the third position, controls the terminal to execute the cruise plan from the node, and releases the control right of the terminal.
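The record/stop/resume behavior and the failure handling of fig. 2 can be summarized in the sketch below; the control primitives (stop_cruise, acquire_control, resume_cruise, and so on) are hypothetical stand-ins for whatever interface the terminal actually exposes.

```python
def run_amplification_session(network, terminal, second_pos):
    node = None
    if terminal.has_cruise_plan and terminal.cruise_plan_on:
        node = terminal.current_cruise_node()      # node includes the third position
        if not terminal.stop_cruise():
            return                                 # stop failed: do not amplify at all
    if not network.acquire_control(terminal):
        return                                     # control acquisition failed: do not amplify
    try:
        if terminal.move_to(second_pos):
            terminal.zoom_shoot()                  # amplified second monitoring data
        # a failed move or failed shot simply falls through to the cleanup below
    finally:
        if node is not None:
            terminal.move_to(node.third_position)  # restore the recorded third position
            terminal.resume_cruise(from_node=node)
        network.release_control(terminal)
```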
In one embodiment, the first target confidence and/or the second target confidence is determined by the following formula:
f = μ1·f1 + μ2·f2 + ... + μi·fi
wherein i is used for representing the number of times three-dimensional amplification is successfully performed; μi is used for representing the weight of the ith amplification shot, and the weights satisfy μ1 + μ2 + ... + μi = 1; fi is used for representing the confidence of the ith amplification shot; and f is used for representing the target confidence after i amplification shots.
In this embodiment, to avoid misrecognition based on a single three-dimensional amplification shot and to improve recognition accuracy, the above formula is provided to confirm the target confidence: a more accurate target confidence is obtained through a weighted-average principle.
Illustratively, for the third amplification shot, the confidence obtained from the first algorithm calculation is f1, the confidence obtained from the second algorithm calculation is f2, and the confidence obtained from the third algorithm calculation is f3, where f3 itself is not the third target confidence; f1, f2 and f3 are substituted into the above formula to obtain the third target confidence f.
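Under the weighted-average reading of the formula above (weights summing to 1), the target confidence after several amplification shots could be computed as in this sketch; the helper name, the uniform default weights and the example values are illustrative only.

```python
def target_confidence(confidences, weights=None):
    """Weighted combination of per-shot confidences f1..fi into the target confidence f."""
    i = len(confidences)
    if weights is None:
        weights = [1.0 / i] * i          # assumption: equal weights when none are configured
    assert len(weights) == i and abs(sum(weights) - 1.0) < 1e-9
    return sum(mu * f for mu, f in zip(weights, confidences))


# Third amplification shot: raw confidences f1, f2, f3 combined into the third target confidence f.
f = target_confidence([0.55, 0.62, 0.78], weights=[0.2, 0.3, 0.5])
print(round(f, 3))  # 0.686
```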
Referring to fig. 3, fig. 3 is a flowchart of a monitoring video processing method according to an embodiment of the present invention, which is executed by a terminal, and as shown in fig. 3, the method includes the following steps:
step S301, performing confidence calculation on the shot first monitoring data to obtain a first target confidence;
step S302, when the first target confidence coefficient is larger than a set value, sending first warning information to network equipment, wherein the first warning information comprises the first monitoring data, the first target confidence coefficient and a first position of the terminal when the first monitoring data is shot;
step S303, moving to a second position according to the control of the network equipment, and performing three-dimensional amplification shooting to obtain second monitoring data, wherein the second position is calculated on the basis of the first monitoring data and the first position after the network equipment receives the first alarm information;
step S304, performing confidence calculation on the second monitoring data to obtain a second target confidence;
step S305, sending second warning information to the network device when the second target confidence is greater than the set value, where the second warning information includes the second monitoring data, the second target confidence, and the second location.
In one embodiment, the first monitoring data includes the detected alarm object and the corresponding first target box, and the second monitoring data includes the detected alarm object and the corresponding second target box.
In an embodiment, before the moving to the second location and performing the three-dimensional enlarged shooting to obtain the second monitoring data according to the control of the network device, the method further includes:
stopping the cruise plan if the cruise plan is on and the cruise plan is present at the terminal;
before the network device releases the control right of the terminal, the method further comprises:
moving to a third position, and executing the cruise plan from the node at which the network device recorded that the terminal stopped the cruise plan, the node including the third position.
In one embodiment, the first target confidence and/or the second target confidence is confirmed by the following formula:
f = μ1·f1 + μ2·f2 + ... + μi·fi
wherein i is used for representing the number of times three-dimensional amplification is successfully performed; μi is used for representing the weight of the ith amplification shot, and the weights satisfy μ1 + μ2 + ... + μi = 1; fi is used for representing the confidence of the ith amplification shot; and f is used for representing the target confidence after i amplification shots.
The following describes a process of the above surveillance video processing method in a specific embodiment.
At present, surveillance video processing schemes usually use an algorithm to identify whether a set object appears in the surveillance video; when the set object captured in the video is small, that is, when it is far from the shooting position, the recognition accuracy is low. The related art addresses this by further optimizing the algorithm so that smaller objects can be recognized. However, further optimizing each different algorithm is costly, all of the algorithms in use would need to be optimized, and the time cost is high. To reduce the need for algorithm optimization while improving recognition accuracy, an embodiment of the present application provides a surveillance video processing scheme, described below taking a terminal as the example. The flow of the surveillance video processing method of this embodiment is as follows:
firstly, the terminal shoots a video or a picture according to a set cruise plan, and the confidence coefficient of the shot video or picture is calculated through a preset algorithm. And under the condition that the obtained confidence is greater than the first set value N1, the terminal sends first alarm information to the network equipment, wherein the first alarm information is used for representing the suspected detected set object.
In the first alarm information, the first monitoring data represents a picture or a video shot by the terminal, the first target confidence coefficient represents a confidence coefficient obtained by performing confidence coefficient calculation on the first monitoring data by the terminal, and the first position represents a position where the first monitoring data is shot by the terminal. And after receiving the first alarm information, the network equipment performs corresponding analysis to confirm whether three-dimensional amplification is required.
Then, when the three-dimensional amplification is needed, the terminal receives a control command of the network equipment and moves to a second position for the three-dimensional amplification. And the second position is confirmed by the network equipment according to the first monitoring data and the first position.
Before the three-dimensional amplification, if the terminal is executing a cruise plan, the current node of the cruise plan needs to be recorded and execution of the cruise plan stopped.
In the three-dimensional amplification process, the shooting center of the terminal is located at the center of the first target frame detected by the first monitoring data, so that the possibility of capturing the set object is improved.
In the process of three-dimensional amplification and shooting, the terminal performs confidence calculation on each amplified and shot picture or video, and the specific calculation is confirmed by the following formula:
f = μ1·f1 + μ2·f2 + ... + μi·fi
wherein i is used for representing the number of times three-dimensional amplification is successfully performed; μi is used for representing the weight of the ith amplification shot, and the weights satisfy μ1 + μ2 + ... + μi = 1; fi is used for representing the confidence of the ith amplification shot; and f is used for representing the target confidence after i amplification shots.
And finally, after the condition of finishing the three-dimensional amplification is met, the terminal resumes execution of the cruise plan from the recorded nodes. Wherein the condition for ending the three-dimensional amplification comprises at least one of the following conditions:
the target confidence finally obtained from shooting is smaller than the first set value N1 or greater than the second set value N2;
the target frame corresponding to the alarm object exceeds a set range;
the number of three-dimensional amplification shots exceeds a set value;
the movement to the second position fails.
The monitoring video processing method provided by the embodiment of the invention can improve the accuracy of the monitoring video for identifying the object, solve the problem of low accuracy of the monitoring video for identifying the object and ensure the performance of the monitoring video for identifying the object; meanwhile, the optimization of the algorithm is reduced, and the cost is reduced.
Referring to fig. 4, fig. 4 is a structural diagram of a surveillance video processing apparatus according to an embodiment of the present invention, and as shown in fig. 4, the surveillance video processing apparatus 400 includes:
a receiving module 401, configured to receive first alarm information sent by a terminal, where the terminal is a terminal for performing video monitoring, and the first alarm information includes first monitoring data, a first target confidence corresponding to the first monitoring data, and a first position of the terminal when the first monitoring data is shot;
a confirming module 402, configured to confirm a second position of the terminal according to the first monitoring data and the first position when the first target confidence is smaller than a set value;
an obtaining module 403, configured to obtain a control right of the terminal;
the control module 404 is configured to control the terminal to move to the second position, and control the terminal to perform three-dimensional enlarged shooting to obtain enlarged second monitoring data;
a first processing module 405, configured to continue to control the terminal to shoot at the second position when second alarm information sent by the terminal is received and a second target confidence included in the second alarm information is smaller than a set value, where the second target confidence is used to represent a confidence corresponding to the second monitoring data;
a second processing module 406, configured to release the control right of the terminal when a second warning message sent by the terminal is not received, or the second target confidence is greater than or equal to a set value.
Optionally, the first monitoring data includes a detected alarm object and a corresponding first target frame, and the second monitoring data includes a detected alarm object and a corresponding second target frame;
after the receiving module 401, the apparatus further includes:
the third processing module is used for controlling the terminal not to carry out three-dimensional amplification in the case that the first target frame is larger than a set range;
under the condition of receiving second warning information sent by the terminal, the device further comprises:
the fourth processing module is used for continuously controlling the terminal to shoot at the second position under the condition that the second target confidence coefficient is smaller than a set value and the second target frame is smaller than a set range;
and the fifth processing module is used for releasing the control right of the terminal under the condition that the second target confidence is smaller than a set value and the second target frame is larger than a set range.
Optionally, the confirming module 402 includes:
the first confirming unit is used for calculating the central position of the first target frame according to the first target frame under the condition that the first target confidence coefficient is smaller than a set value;
and the second confirming unit is used for confirming the second position of the terminal according to the central position and the first position, so that the shooting center of the terminal is overlapped with the central position when the terminal is at the second position.
Optionally, before the control module 404, the apparatus further includes:
the terminal comprises a recording module and a judging module, wherein the recording module is used for recording nodes of the cruise plan under the condition that the cruise plan exists at the terminal and the cruise plan is on, and the nodes comprise a third position of the terminal;
a sixth processing module for controlling the terminal to stop the cruise plan after recording the third position;
before the second processing module 406, the apparatus further comprises:
and the seventh processing module is used for controlling the terminal to move to the third position and controlling the terminal to execute the cruise plan from the node.
Optionally, the first target confidence and/or the second target confidence is confirmed by the following formula:
f = μ1·f1 + μ2·f2 + ... + μi·fi
wherein i is used for representing the number of times three-dimensional amplification is successfully performed; μi is used for representing the weight of the ith amplification shot, and the weights satisfy μ1 + μ2 + ... + μi = 1; fi is used for representing the confidence of the ith amplification shot; and f is used for representing the target confidence after i amplification shots.
The monitoring video processing device provided by the embodiment of the invention can realize each process of each embodiment of the monitoring video processing method, the technical characteristics are in one-to-one correspondence, the same technical effect can be achieved, and in order to avoid repetition, the technical characteristics are not described again.
It should be noted that the monitoring video processing apparatus in the embodiment of the present invention may be an apparatus, and may also be a component, an integrated circuit, or a chip in an electronic device.
Referring to fig. 5, fig. 5 is a structural diagram of a surveillance video processing apparatus according to an embodiment of the present invention, and as shown in fig. 5, a surveillance video processing apparatus 500 includes:
the first confirmation module 501 is configured to perform confidence calculation on the photographed first monitoring data to obtain a first target confidence;
a first sending module 502, configured to send first warning information to a network device when the first target confidence is greater than a set value, where the first warning information includes the first monitoring data, the first target confidence, and a first position of the terminal when the first monitoring data is captured;
the first processing module 503 is configured to move to a second location according to control of the network device, perform three-dimensional enlarged shooting, and obtain second monitoring data, where the second location is calculated by the network device based on the first monitoring data and the first location after receiving the first alarm information;
a second confirmation module 504, configured to perform confidence calculation on the second monitoring data to obtain a second target confidence;
a second sending module 505, configured to send second warning information to the network device when the second target confidence is greater than the set value, where the second warning information includes the second monitoring data, the second target confidence, and the second location.
Optionally, the first monitoring data includes the detected alarm object and a corresponding first target frame, and the second monitoring data includes the detected alarm object and a corresponding second target frame.
Optionally, before the first processing module 503, the apparatus further includes:
a second processing module which stops the cruise plan when the terminal has the cruise plan and the cruise plan is on;
before the network device releases the control right of the terminal, the apparatus further comprises:
a third processing module for moving to a third location, executing the cruise plan from a node that stops the cruise plan for the terminal recorded by the network device, the node comprising the third location.
Optionally, the first target confidence and/or the second target confidence is determined by the following formula:
f = μ1·f1 + μ2·f2 + ... + μi·fi
wherein i is used for representing the number of times three-dimensional amplification is successfully performed; μi is used for representing the weight of the ith amplification shot, and the weights satisfy μ1 + μ2 + ... + μi = 1; fi is used for representing the confidence of the ith amplification shot; and f is used for representing the target confidence after i amplification shots.
The surveillance video processing apparatus provided in the embodiments of the present invention is capable of implementing each process of each embodiment of the surveillance video processing method, and has one-to-one correspondence technical features, and can achieve the same technical effect, and is not described herein again to avoid repetition.
It should be noted that the monitoring video processing apparatus in the embodiment of the present invention may be an apparatus, and may also be a component, an integrated circuit, or a chip in an electronic device.
An embodiment of the present invention further provides an electronic device. Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device includes a memory 601, a processor 602, and a program or instructions stored in the memory 601 and executable on the processor 602. When the program or instructions are executed by the processor 602, any step in the method embodiment corresponding to fig. 1, or any step in the method embodiment corresponding to fig. 3, can be implemented with the same beneficial effects, which are not described here again.
The processor 602 may be a CPU, ASIC, FPGA, or GPU, among others.
Those skilled in the art will appreciate that all or part of the steps of the method according to the above embodiments may be implemented by hardware associated with program instructions, and the program may be stored in a readable medium.
An embodiment of the present invention further provides a readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program may implement any step in the method embodiment corresponding to fig. 1 or implement any step in the method embodiment corresponding to fig. 3, and may achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The terms "first," "second," and the like in the embodiments of the present invention are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Further, the use of "and/or" in this application means that at least one of the connected objects, e.g., a and/or B and/or C, means that 7 cases are included where a alone, B alone, C alone, and both a and B are present, B and C are present, a and C are present, and a, B, and C are present.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another like element in a process, method, article, or apparatus that comprises the element.
Through the description of the foregoing embodiments, it is obvious to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a terminal device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, and that the various changes and modifications can be effected therein by one skilled in the art without departing from the scope or spirit of the invention as defined by the appended claims.

Claims (10)

1. A monitoring video processing method is applied to network equipment and is characterized by comprising the following steps:
receiving first alarm information sent by a terminal, wherein the terminal is used for carrying out video monitoring, and the first alarm information comprises first monitoring data, a first target confidence corresponding to the first monitoring data and a first position of the terminal when the first monitoring data is shot;
confirming a second position of the terminal according to the first monitoring data and the first position under the condition that the first target confidence degree is smaller than a set value;
acquiring the control right of the terminal;
controlling the terminal to move to the second position, and controlling the terminal to perform three-dimensional amplification shooting to obtain amplified second monitoring data;
when second alarm information sent by the terminal is received and a second target confidence coefficient included in the second alarm information is smaller than a set value, the terminal is continuously controlled to shoot at the second position, and the second target confidence coefficient is used for representing a confidence coefficient corresponding to the second monitoring data;
and releasing the control right of the terminal under the condition that second alarm information sent by the terminal is not received or the second target confidence coefficient is greater than or equal to a set value.
2. The method of claim 1, wherein the first monitoring data comprises a detected alarm object and a corresponding first target box, and the second monitoring data comprises a detected alarm object and a corresponding second target box;
after receiving the first warning message sent by the terminal, the method further includes:
controlling the terminal not to carry out three-dimensional amplification under the condition that the first target frame is larger than a set range;
under the condition of receiving second warning information sent by the terminal, the method further comprises the following steps:
when the second target confidence is smaller than a set value and the second target frame is smaller than a set range, continuously controlling the terminal to shoot at the second position;
and releasing the control right of the terminal under the condition that the second target confidence is smaller than a set value and the second target frame is larger than a set range.
3. The method of claim 2, wherein the confirming a second position of the terminal according to the first monitoring data and the first position in the case that the first target confidence is smaller than a set value comprises:
under the condition that the first target confidence degree is smaller than a set value, calculating the central position of the first target frame according to the first target frame;
and confirming a second position of the terminal according to the central position and the first position, so that when the terminal is at the second position, the shooting center of the terminal is superposed with the central position.
4. The method according to claim 1, wherein before the controlling the terminal to move to the second position and the terminal to perform the three-dimensional enlarged shooting, the method further comprises:
recording a node of the cruise plan, including a third position of the terminal, in the case where the cruise plan is present at the terminal and the cruise plan is on;
controlling the terminal to stop the cruise plan after recording the third position;
before releasing the control right of the terminal, the method further comprises:
controlling the terminal to move to the third position, and controlling the terminal to execute the cruise plan from the node.
5. The method of claim 4, further comprising:
under the condition that the control right of the terminal is failed to be acquired or the cruise plan is failed to stop, controlling the terminal not to carry out three-dimensional amplification;
and under the condition that the terminal fails to be controlled to move to the second position or the terminal fails to shoot, controlling the terminal to move to the third position, controlling the terminal to execute the cruise plan from the node, and releasing the control right of the terminal.
6. The method of claim 1, wherein the first target confidence and/or the second target confidence is determined by the following formula:
f = μ1·f1 + μ2·f2 + ... + μi·fi
wherein i is used for representing the number of times three-dimensional amplification is successfully performed; μi is used for representing the weight of the ith amplification shot, and the weights satisfy μ1 + μ2 + ... + μi = 1; fi is used for representing the confidence of the ith amplification shot; and f is used for representing the target confidence after i amplification shots.
7. A monitoring video processing method is applied to a terminal and is characterized by comprising the following steps:
performing confidence calculation on the shot first monitoring data to obtain a first target confidence;
sending first warning information to network equipment under the condition that the first target confidence coefficient is greater than a set value, wherein the first warning information comprises the first monitoring data, the first target confidence coefficient and a first position of the terminal when the first monitoring data is shot;
moving to a second position according to the control of the network equipment, and performing three-dimensional amplification shooting to obtain second monitoring data, wherein the second position is calculated by the network equipment based on the first monitoring data and the first position after receiving the first warning information;
performing confidence calculation on the second monitoring data to obtain a second target confidence;
and sending second alarm information to the network equipment under the condition that the second target confidence degree is greater than the set value, wherein the second alarm information comprises the second monitoring data, the second target confidence degree and the second position.
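Claim 7 is the terminal-side half of the exchange: score the first shot, raise an alarm above the set value, then re-shoot at the position commanded by the network equipment and raise a second alarm if the zoomed shot also clears the threshold. The sketch below is a single-pass illustration; `camera`, `network`, and every method called on them are hypothetical helpers, not a real API.

```python
SET_VALUE = 0.8  # assumed alarm threshold ("set value") shared by both confidences

def terminal_side_flow(camera, network) -> None:
    """Claim 7, paraphrased: first warning, commanded zoom, second alarm."""
    frame, first_position = camera.capture_with_position()        # first monitoring data + first position
    _, first_confidence = camera.detect(frame)                     # confidence calculation on the first data
    if first_confidence <= SET_VALUE:
        return                                                     # below the set value: nothing to report
    network.send_alarm(frame, first_confidence, first_position)    # first warning information

    second_position = network.await_move_command()                 # confirmed by the network equipment
    if second_position is None:
        return                                                     # network chose not to zoom in
    camera.move_to(second_position)
    zoomed = camera.zoom_shoot()                                    # three-dimensional amplification shooting
    _, second_confidence = camera.detect(zoomed)                    # confidence calculation on the second data
    if second_confidence > SET_VALUE:
        network.send_alarm(zoomed, second_confidence, second_position)  # second alarm information
```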
8. A monitoring video processing apparatus, comprising:
the receiving module is used for receiving first warning information sent by a terminal, wherein the terminal is used for carrying out video monitoring, and the first warning information comprises first monitoring data, a first target confidence corresponding to the first monitoring data, and a first position of the terminal when the first monitoring data is shot;
the confirming module is used for confirming a second position of the terminal according to the first monitoring data and the first position under the condition that the first target confidence coefficient is smaller than a set value;
the acquisition module is used for acquiring the control right of the terminal;
the control module is used for controlling the terminal to move to the second position and controlling the terminal to carry out three-dimensional amplification shooting to obtain amplified second monitoring data;
the first processing module is configured to, when second alarm information sent by the terminal is received and a second target confidence included in the second alarm information is smaller than a set value, continue to control the terminal to shoot at the second position, where the second target confidence is used to represent a confidence corresponding to the second monitoring data;
and the second processing module is used for releasing the control right of the terminal under the condition that second alarm information sent by the terminal is not received or the second target confidence degree is greater than or equal to a set value.
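Claim 8 decomposes the network-side device into receiving, confirming, acquisition, control, and processing modules. The sketch below only illustrates how such modules might be composed into one processing object; the class name, the callable-based wiring, and the threshold default are assumptions rather than the claimed structure.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class MonitoringVideoProcessor:
    """Hypothetical composition of the modules named in claim 8."""
    confirm_second_position: Callable[[Any, Any], Any]  # confirming module
    acquire_control: Callable[[], bool]                  # acquisition module
    control_zoom_shoot: Callable[[Any], Any]             # control module
    set_value: float = 0.8                               # assumed confidence threshold

    def on_first_alarm(self, monitoring_data: Any, confidence: float, first_position: Any) -> Optional[Any]:
        """Receiving-module entry point: handle the first warning information."""
        if confidence >= self.set_value:
            return None                                  # already confident, no refinement needed
        second_position = self.confirm_second_position(monitoring_data, first_position)
        if not self.acquire_control():
            return None
        return self.control_zoom_shoot(second_position)  # amplified second monitoring data

    def on_second_alarm(self, alarm_received: bool, second_confidence: float) -> str:
        """First and second processing modules: keep shooting or release the control right."""
        if alarm_received and second_confidence < self.set_value:
            return "keep_shooting_at_second_position"
        return "release_control"
```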
9. A monitoring video processing apparatus, comprising:
the first confirming module is used for carrying out confidence calculation on the shot first monitoring data to obtain a first target confidence;
a first sending module, configured to send first warning information to a network device when the first target confidence is greater than a set value, where the first warning information includes the first monitoring data, the first target confidence, and a first position of a terminal when the first monitoring data is captured;
the first processing module is used for moving to a second position according to the control of the network device and carrying out three-dimensional amplification shooting to obtain second monitoring data, wherein the second position is calculated by the network device based on the first monitoring data and the first position after receiving the first warning information;
the second confirmation module is used for performing confidence calculation on the second monitoring data to obtain a second target confidence;
a second sending module, configured to send second alarm information to the network device when the second target confidence is greater than the set value, where the second alarm information includes the second monitoring data, the second target confidence, and the second position.
10. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps in the monitoring video processing method according to any one of claims 1 to 6, or implements the steps in the monitoring video processing method according to claim 7.
CN202211298746.6A 2022-10-24 2022-10-24 Monitoring video processing method and device and electronic equipment Active CN115361504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211298746.6A CN115361504B (en) 2022-10-24 2022-10-24 Monitoring video processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN115361504A 2022-11-18
CN115361504B CN115361504B (en) 2023-04-28

Family

ID=84007827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211298746.6A Active CN115361504B (en) 2022-10-24 2022-10-24 Monitoring video processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115361504B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2463836A1 (en) * 2001-10-17 2003-04-24 Biodentity Systems Corporation Face imaging system for recordal and automated identity confirmation
CN107645653A (en) * 2017-11-01 2018-01-30 广东省电子技术研究所 A kind of method, apparatus, equipment and the storage medium of Camera location shooting
CN111263114A (en) * 2020-02-14 2020-06-09 北京百度网讯科技有限公司 Abnormal event alarm method and device
CN113452903A (en) * 2021-06-17 2021-09-28 浙江大华技术股份有限公司 Snapshot equipment, snapshot method and main control chip
CN114255477A (en) * 2021-12-08 2022-03-29 讯飞智元信息科技有限公司 Smoking behavior detection method and related device


Also Published As

Publication number Publication date
CN115361504B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
US10417503B2 (en) Image processing apparatus and image processing method
CN110163885B (en) Target tracking method and device
CN113284168A (en) Target tracking method and device, electronic equipment and storage medium
CN111369590A (en) Multi-target tracking method and device, storage medium and electronic equipment
US11107246B2 (en) Method and device for capturing target object and video monitoring device
CN105979143B (en) Method and device for adjusting shooting parameters of dome camera
CN110473227B (en) Target tracking method, device, equipment and storage medium
US11688078B2 (en) Video object detection
CN110647818A (en) Identification method and device for shielding target object
CN115063454A (en) Multi-target tracking matching method, device, terminal and storage medium
JP6799325B2 (en) Image correction device, image correction method, attention point recognition device, attention point recognition method and abnormality detection system
JP5127692B2 (en) Imaging apparatus and tracking method thereof
CN115361504B (en) Monitoring video processing method and device and electronic equipment
CN112738387B (en) Target snapshot method, device and storage medium
CN113793365B (en) Target tracking method and device, computer equipment and readable storage medium
CN113642546B (en) Multi-face tracking method and system
CN113452903B (en) Snapshot equipment, snap method and main control chip
KR20150050224A (en) Apparatus and methdo for abnormal wandering
US20210097706A1 (en) Method and system for determining dynamism in a scene by processing depth image
CN114529858B (en) Vehicle state recognition method, electronic device, and computer-readable storage medium
CN109697386B (en) License plate recognition method and device and electronic equipment
JP6670351B2 (en) Map updating device and map updating method
CN117994947A (en) Camera shielding detection alarm method and device, electronic equipment and storage medium
CN117302237A (en) Track prediction method, vehicle, electronic equipment and storage medium
CN116782020A (en) Object tracking method, device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant