CN113674224A - Monitoring point position management method and device - Google Patents

Monitoring point position management method and device

Info

Publication number
CN113674224A
Authority
CN
China
Prior art keywords
target, image, evaluation, quality, evaluation target
Prior art date
Legal status
Pending
Application number
CN202110865338.3A
Other languages
Chinese (zh)
Inventor
唐邦杰
潘华东
殷俊
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110865338.3A priority Critical patent/CN113674224A/en
Publication of CN113674224A publication Critical patent/CN113674224A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a monitoring point location management method and device. The method comprises the following steps: acquiring a video frame sequence collected by a monitoring point within a preset time period, and screening out of the video frame sequence a target image whose image quality meets a set image quality requirement, wherein the target image contains an evaluation target; performing attribute evaluation on at least one key part of the evaluation target in the target image to obtain a corresponding attribute evaluation result, wherein the attribute evaluation result is related to the image semantic description information of the corresponding key part; determining the quality grade of the evaluation target by using the attribute evaluation result; and judging whether the monitoring point requires manual checking based on the quality grades of the evaluation target over a plurality of preset time periods. By this method, the accuracy of monitoring point location management can be effectively improved.

Description

Monitoring point position management method and device
Technical Field
The application relates to the technical field of video image analysis, in particular to a monitoring point position management method and device.
Background
With the rapid development of smart cities and artificial intelligence, video monitoring points have become increasingly widespread across cities, counties, and enterprises. Whether video monitoring points are installed and deployed reasonably plays a decisive role in video acquisition quality.
In the prior art, the original information of a video image is typically used, and global image quality anomaly diagnosis, covering aspects such as definition, color cast, noise, blur, jitter, brightness, and occlusion, is performed on the video scene image through traditional machine-learning feature extraction, pattern classification, and image processing methods.
However, the inventors of the present application have found in long-term research and development that the prior art can only detect abnormalities of a monitoring point in the image dimension. Since factors such as the installation angle, height, and positional rationality of a monitoring point are key factors that directly affect video analysis quality, quality diagnosis using the image dimension alone cannot comprehensively describe the quality of a video monitoring point. In view of this, how to rapidly and effectively perform high-precision automatic analysis and diagnosis of monitoring point quality has become an urgent problem to be solved.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a monitoring point location management method and device that can effectively improve the accuracy of monitoring point location management.
In order to solve the above technical problem, one technical solution adopted by the present application is to provide a monitoring point location management method comprising the following steps: acquiring a video frame sequence collected by a monitoring point within a preset time period, and screening out of the video frame sequence a target image whose image quality meets a set image quality requirement, wherein the target image contains an evaluation target; performing attribute evaluation on at least one key part of the evaluation target in the target image to obtain a corresponding attribute evaluation result, wherein the attribute evaluation result is related to the image semantic description information of the corresponding key part; determining the quality grade of the evaluation target using the attribute evaluation result; and judging whether the monitoring point requires manual checking based on the quality grades of the evaluation target over a plurality of preset time periods.
Wherein the step of performing attribute evaluation on at least one key part of the evaluation target in the target image to obtain a corresponding attribute evaluation result comprises: obtaining all the key parts to be evaluated based on the type of the evaluation target, and obtaining corresponding image semantic description information for each key part; judging whether each item of the image semantic description information is greater than or equal to its corresponding preset threshold; if so, judging that the attribute evaluation result of the evaluation target is a first quality target; otherwise, judging that the attribute evaluation result of the evaluation target is a second quality target.
Wherein the step of obtaining all the key parts to be evaluated based on the type of the evaluation target and obtaining the corresponding image semantic description information for each key part includes: in response to the type of the evaluation target being a human body target, obtaining the key parts to be evaluated as including at least one of the trunk, head, and limbs, and obtaining corresponding image semantic description information for at least one of the trunk, head, and limbs, wherein the image semantic description information includes human body image size, human body image integrity, human body image definition, human body image detection confidence, and the proportion of human body images with an associated face; and/or, in response to the type of the evaluation target being a human face target, obtaining the key parts to be evaluated as including the facial features, and obtaining corresponding image semantic description information for the facial features, wherein the image semantic description information includes face image size, face image integrity, face image definition, face angle, face image detection confidence, and whether a mask is worn; and/or, in response to the type of the evaluation target being a motor vehicle target, obtaining the key parts to be evaluated as including at least one of the vehicle logo, lights, windows, and license plate, and obtaining corresponding image semantic description information for at least one of these, wherein the image semantic description information includes motor vehicle image size, motor vehicle image integrity, motor vehicle image definition, motor vehicle image detection confidence, and whether an associated license plate exists; and/or, in response to the type of the evaluation target being a license plate target, obtaining the key parts to be evaluated as including the printed characters, and obtaining corresponding image semantic description information for the printed characters, wherein the image semantic description information includes license plate image size, license plate image integrity, license plate image definition, and license plate image detection confidence.
Wherein the determining of the quality level of the evaluation target using the attribute evaluation result includes: acquiring the total number of the target images of the evaluation target in a preset time period; acquiring the number of second quality targets serving as the attribute evaluation results in the evaluation targets within a preset time period; determining a quality grade of the evaluation target based on a ratio of the number of the second quality targets to the total number of the target images.
Wherein the step of determining the quality level of the evaluation target based on the ratio of the number of the second quality targets to the total number of the target images comprises: comparing the ratio with a first threshold and a second threshold; if the ratio is smaller than the first threshold, determining that the quality grade of the evaluation target is a high quality grade; if the ratio is greater than or equal to the first threshold and less than the second threshold, determining that the quality grade of the evaluation target is a medium quality grade; and if the ratio is greater than or equal to the second threshold, determining that the quality grade of the evaluation target is a low quality grade, wherein the first threshold is smaller than the second threshold.
Wherein the step of judging whether the monitoring point requires manual checking based on the quality grades of the evaluation targets over a plurality of preset time periods comprises: judging whether the quality levels of the evaluation targets in the plurality of preset time periods are reduced; if so, triggering an early warning and manually checking the monitoring point; otherwise, acquiring the video frame sequence of the monitoring point in the next preset time period.
Wherein the types of the evaluation targets comprise at least two of a human body target type, a human face target type, a motor vehicle target type, and a license plate target type; and the step of judging whether the quality levels of the evaluation targets in a plurality of preset time periods are reduced includes: judging whether the quality level of any one of the evaluation targets in the plurality of preset time periods is reduced.
Wherein the step of acquiring a video frame sequence collected by a monitoring point within a preset time period and screening out of the video frame sequence a target image whose image quality meets a set image quality requirement includes: performing target tracking on each video frame in the video frame sequence to obtain the video frame where the evaluation target is located and the coordinate information of the evaluation target in the current video frame; obtaining corresponding image quality scores using the coordinate information of all the evaluation targets in the current video frame; and taking the video frame with the highest image quality score as the target image.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a monitoring point location management device comprising a memory and a processor coupled to each other, wherein the memory stores program instructions for execution by the processor to implement the monitoring point location management method mentioned in any of the above embodiments.
In order to solve the above technical problem, a further technical solution adopted by the present application is: a computer-readable storage medium is provided, which stores a computer program for implementing the monitoring point location management method mentioned in any one of the above embodiments.
Different from the prior art, the beneficial effects of the present application are as follows. The application provides a monitoring point location management method that makes full use of the image semantic description information of the key parts of an evaluation target to perform attribute evaluation on the target image, thereby determining the quality grade of the evaluation target type, and that judges whether the current monitoring point requires manual checking based on the quality grades over different time periods. The image semantic description information evaluates not only the quality-related dimensions of the target image but also installation and deployment dimensions of the monitoring point, such as installation height, angle, and scene rationality, thereby feeding back the quality of the monitoring point comprehensively and accurately and giving early warning of abnormal conditions, so that more accurate monitoring point management is realized.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort. Wherein:
FIG. 1 is a schematic flow chart of an embodiment of the monitoring point location management method according to the present application;
FIG. 2 is a schematic flow chart of one embodiment of step S101 in FIG. 1;
FIG. 3 is a schematic flow chart of one embodiment of step S102 in FIG. 1;
FIG. 4 is a schematic flow chart of one embodiment of step S103 in FIG. 1;
FIG. 5 is a schematic flow chart of one embodiment of step S104 in FIG. 1;
FIG. 6 is a schematic framework diagram of an embodiment of the monitoring point location management device according to the present application;
FIG. 7 is a schematic structural diagram of an embodiment of the monitoring point location management device according to the present application;
FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a monitoring point location management method according to the present application. Specifically, the method may include the steps of:
S101: Acquiring a video frame sequence acquired by a monitoring point in a preset time period, and screening out a target image with image quality meeting a set image quality requirement from the video frame sequence; wherein the target image includes an evaluation target.
Optionally, a front-end device at the monitoring point collects the surveillance video within the preset time range, and the video is split into frames to obtain a video frame sequence. In this embodiment, the framing rate is set to 8-25 frames per second and the preset time period is set to 24 h. All frame images containing an evaluation target are extracted from the video frame sequence using a target detection and tracking algorithm, image quality analysis is then performed for each evaluation target in each frame image, and finally, when an evaluation target disappears from the video frame sequence, the frame whose image quality meets the preset image quality requirement within that target's life cycle is selected as the target image with the best image quality. It should be noted that image quality here is assessed for each evaluation target, not for each frame image.
In a specific implementation scenario, please refer to fig. 2, which is a schematic flowchart of an implementation of step S101 in fig. 1. The step S101 specifically includes the following steps:
S201: And performing target tracking on each video frame in the video frame sequence to obtain the video frame where the evaluation target is located and the coordinate information of the evaluation target in the current video frame.
Optionally, the coordinates of the evaluation target in each video frame image are acquired using a target tracking algorithm, and the target's coordinates are associated spatio-temporally between consecutive frame images; meanwhile, a unique ID is generated for each evaluation target, and the associated coordinate information is mapped one-to-one to that ID.
S202: and obtaining corresponding image quality scores by utilizing the coordinate information of all the evaluation targets in the current video frame.
Optionally, a target scoring and snapshot algorithm caches the coordinate information of the evaluation targets and their corresponding IDs in a queue while maintaining a historical frame trajectory for each ID. Based on the coordinate information of all evaluation targets in each video frame image, and focusing mainly on the size, position, and occlusion of the evaluation target currently being scored, a snapshot score is given to each evaluation target in each frame image.
S203: and taking the video frame with the highest image quality score as a target image.
Optionally, when an evaluation target disappears from the video frame sequence, the highest-scoring frame within that target's life cycle is selected and used as the target image.
Through this embodiment, the frame with the best image quality for each evaluation target in the video frame sequence can be obtained and used as the basis for the subsequent attribute evaluation; it reflects the best capture of the evaluation target that the current monitoring point can achieve.
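By way of illustration only, the per-target best-frame selection of steps S201 to S203 could be organized as in the following Python sketch; the `track` and `score` callables and all names are assumed placeholders, not an implementation disclosed by this application:

```python
from collections import defaultdict

def select_best_frames(frames, track, score):
    """Sketch of S201-S203: keep each evaluation target's best-scoring frame.

    frames: iterable of video frames
    track(frame) -> [(target_id, bbox), ...]  (hypothetical tracking algorithm)
    score(frame, bbox) -> float               (hypothetical snapshot scorer)
    """
    best = {}                          # target_id -> (frame, bbox, best score)
    trajectories = defaultdict(list)   # target_id -> historical bbox trajectory
    for frame in frames:
        for target_id, bbox in track(frame):
            trajectories[target_id].append(bbox)  # maintain per-ID history
            s = score(frame, bbox)  # reflects size, position, occlusion, etc.
            if target_id not in best or s > best[target_id][2]:
                best[target_id] = (frame, bbox, s)
    # once a target disappears from the sequence, its stored frame is the target image
    return best
```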
S102: and performing attribute evaluation on at least one key part of the evaluation target in the target image to obtain a corresponding attribute evaluation result, wherein the attribute evaluation result is related to the image semantic description information of the corresponding key part.
Specifically, attribute evaluation can be performed on the key parts of the evaluation target using a deep-learning-based multi-task, multi-label, fine-grained classification and recognition technique; for the specific process, reference is made to the prior art, and the technical principle of this classification and recognition technique is not repeated here.
In a specific implementation scenario, please refer to fig. 3, which is a flowchart of an implementation of step S102 in fig. 1. The step S102 may include the following steps:
S301: All key parts to be evaluated are obtained based on the type of the evaluation target, and corresponding image semantic description information is obtained for each key part.
Optionally, when monitoring points are applied to different scenes, the type of the evaluation target may be adjusted according to the application scene. The type of the evaluation target may include, but is not limited to, any one or more of a human body target, a human face target, a motor vehicle target, and a license plate target. For example, for a vehicle monitoring point on a motor vehicle lane, the quality of both the motor vehicle target and the license plate target needs attention, so the evaluation target types include the motor vehicle target and the license plate target. For another example, for portrait monitoring points at the entrances and exits of shopping malls, the quality of human face and human body targets mainly needs attention, so the evaluation target types include the human body target and the human face target. It should be noted that although a human face target itself belongs to a human body target, the target images obtained for the face target and the body target (i.e., the video frames with the highest overall quality scores) may differ, and the two differ markedly in their quality evaluation dimensions, so attribute evaluation is performed on the face target and the body target separately. Similarly, a motor vehicle target and a license plate target each have their own emphasis in quality evaluation dimensions, so attribute evaluation also needs to be performed on them separately.
Aiming at different types of evaluation targets, the key parts to be evaluated and the image semantic description information corresponding to each type are different. In a specific implementation scenario, the step S301 includes one or any combination of the following steps, corresponding to different types of evaluation targets:
in response to the type of the evaluation target being a human body target, the key parts to be evaluated are obtained as including at least one of the trunk, head, and limbs, and corresponding image semantic description information is obtained for at least one of the trunk, head, and limbs, where the image semantic description information may be, but is not limited to, at least one of human body image size, human body image integrity, human body image definition, human body image detection confidence, and the proportion of human body images with an associated face;
in response to the type of the evaluation target being a human face target, the key parts to be evaluated are obtained as including the facial features, and corresponding image semantic description information is obtained for the facial features, where the image semantic description information may be, but is not limited to, at least one of face image size, face image integrity, face image definition, face angle, face image detection confidence, and whether a mask is worn;
in response to the type of the evaluation target being a motor vehicle target, the key parts to be evaluated are obtained as including at least one of the vehicle logo, lights, windows, and license plate, and corresponding image semantic description information is obtained for at least one of these, where the image semantic description information may be, but is not limited to, at least one of motor vehicle image size, motor vehicle image integrity, motor vehicle image definition, motor vehicle image detection confidence, and whether an associated license plate exists;
and in response to the type of the evaluation target being a license plate target, the key parts to be evaluated are obtained as including the printed characters, and corresponding image semantic description information is obtained for the printed characters, where the image semantic description information may be, but is not limited to, at least one of license plate image size, license plate image integrity, license plate image definition, and license plate image detection confidence.
Of course, in other embodiments, the to-be-evaluated key portions corresponding to each evaluation target type may further include other key portions, and accordingly, the image semantic description information obtained for each key portion may also include other information, which is not specifically limited herein.
It should be noted that image semantic description information including image size, image integrity, definition, and detection confidence is obtained for every key part. The image size specifically refers to the width and height of the image; since any image recognition algorithm has a capability boundary, too small a resolution strongly degrades the algorithm's effect. In addition, the integrity and definition of the target image determine how much information about the key parts the image carries, and they also affect recognition accuracy. The detection confidence reflects the quality of the evaluation target from another angle: a lower confidence indicates less effective information about the target contour.
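To make these dimensions concrete, the following minimal sketch shows one possible way to hold per-type semantic description thresholds; all field names and numeric values are illustrative assumptions, not values taken from this disclosure:

```python
from dataclasses import dataclass

@dataclass
class SemanticThresholds:
    """Preset thresholds for one evaluation target type (illustrative values)."""
    min_width: int         # minimum target image width in pixels
    min_height: int        # minimum target image height in pixels
    min_integrity: float   # integrity score threshold, 0..1
    min_definition: float  # definition (sharpness) score threshold, 0..1
    min_confidence: float  # detection confidence threshold, 0..1

# Hypothetical per-type configuration; real deployments would tune these.
THRESHOLDS = {
    "face":    SemanticThresholds(64, 64, 0.8, 0.7, 0.9),
    "body":    SemanticThresholds(64, 128, 0.7, 0.6, 0.8),
    "vehicle": SemanticThresholds(128, 128, 0.7, 0.6, 0.8),
    "plate":   SemanticThresholds(80, 24, 0.9, 0.8, 0.9),
}
```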
S302: and judging whether all the image semantic description information is greater than or equal to the corresponding preset threshold value.
Optionally, a corresponding preset threshold should be set for each item of image semantic description information. For example, for the target image size in the image semantic description information, the preset threshold is set as an image size; for the detection confidence, the threshold is a preset confidence level; for the scored results of target image integrity and definition, the preset threshold is likewise set as a score threshold; and for binary results such as whether a mask is worn or whether an associated license plate exists, the preset threshold is binary as well, e.g., 1 representing yes and 0 representing no.
S303: and if so, judging that the attribute evaluation result of the evaluation target is the first quality target.
Optionally, if the results of all dimensions of the image semantic description information are greater than or equal to their preset thresholds, the evaluation target is considered a first quality target, where "first" denotes the higher attribute evaluation level; that is, the attribute evaluation result of the evaluation target is a high quality target.
S304: otherwise, judging that the attribute evaluation result of the evaluation target is a second quality target.
Optionally, in this embodiment, if the result of any dimension of the image semantic description information is below its preset threshold, the evaluation target is considered a second quality target, where "second" denotes the lower attribute evaluation level; that is, the attribute evaluation result of the evaluation target is a low quality target.
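Steps S302 to S304 amount to an all-dimensions threshold test; a minimal sketch, assuming the semantic description information and thresholds are keyed dictionaries, might read:

```python
def evaluate_attributes(info, thresholds):
    """Sketch of S302-S304: classify one evaluation target by its semantics.

    info: measured values, e.g. {"width": 80, "integrity": 0.9, "mask": 1}
    thresholds: preset thresholds under the same keys; binary dimensions
    (mask worn, associated plate present) use 1/0, so >= still applies.
    """
    for key, threshold in thresholds.items():
        if info.get(key, 0) < threshold:
            return "second_quality"  # any dimension below threshold: low quality
    return "first_quality"           # all dimensions meet thresholds: high quality
```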
Through this embodiment, attribute evaluation dimensions are designed specifically for different evaluation target types, and the content of the acquired image semantic description information is adaptively adjusted to the requirements of the monitoring point, so that the quality of a specific target type can be described more accurately, effectively improving the accuracy of the attribute evaluation result.
S103: and determining the quality grade of the evaluation target by using the attribute evaluation result.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating an embodiment of step S103 in fig. 1. The step S103 includes the following steps:
S401: And acquiring the total number of target images of the evaluation target in a preset time period.
Optionally, in this embodiment, taking the human face target type as an example, if the preset time period is set to 24 h, then after step S203 is completed, the number of target images containing a human face target is counted from the 24 h video frame sequence.
S402: and acquiring the number of second quality targets serving as attribute evaluation results in the evaluation targets in a preset time period.
S403: and determining the quality grade of the evaluation target based on the ratio of the number of the second quality targets to the total number of the target images.
Specifically, in the present embodiment, the ratio is compared with a first threshold and a second threshold. If the ratio is smaller than the first threshold, the quality grade of the evaluation target is judged to be a high quality grade; if the ratio is greater than or equal to the first threshold and less than the second threshold, a medium quality grade; and if the ratio is greater than or equal to the second threshold, a low quality grade, wherein the first threshold is smaller than the second threshold.
In a further specific embodiment, the first threshold is preset to 20% and the second threshold to 40%. If the proportion of low-quality targets among all the target images is less than 20%, the current evaluation target type is considered a high quality grade; if the low-quality proportion is between 20% and 40%, a medium quality grade; and if the low-quality proportion is above 40%, a low quality grade.
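A minimal sketch of this grading rule, using the 20% and 40% thresholds of this embodiment (the function and grade names are assumptions for illustration):

```python
def quality_grade(num_second_quality, total_target_images,
                  first_threshold=0.20, second_threshold=0.40):
    """Sketch of S401-S403: grade an evaluation target type for one period."""
    if total_target_images == 0:
        return "no_data"  # assumed handling; this case is not specified above
    ratio = num_second_quality / total_target_images
    if ratio < first_threshold:
        return "high"     # low-quality share under 20%
    if ratio < second_threshold:
        return "medium"   # low-quality share between 20% and 40%
    return "low"          # low-quality share of 40% or more
```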
Through this implementation, the quality grade of the evaluation target is judged from the attribute evaluation results, which effectively improves the reliability of the grading and provides technical support for the subsequent judgment of whether manual checking is needed.
S104: and judging whether manual checking of the monitoring points is needed or not based on the quality levels of the evaluation targets in a plurality of preset time periods.
Referring to fig. 5, fig. 5 is a schematic flow chart of an embodiment of step S104 in fig. 1. The step S104 includes:
S501: And judging whether the quality levels of the evaluation targets in a plurality of preset time periods are reduced or not.
In this embodiment, if a single preset time period is 24 hours, it is necessary to compare whether the quality level of the evaluation target type has degraded across two 24-hour periods, that is, across two days. "Reduced" here means changing from an originally high quality level to a medium or low quality level, or from an originally medium quality level to a low quality level.
In a specific implementation scenario, a particular monitoring checkpoint, for example a portrait checkpoint or a vehicle-lane checkpoint, needs to pay attention to multiple evaluation target types at the same time; that is, the evaluation target types include at least two of the human body, human face, motor vehicle, and license plate target types, and the rule for deciding whether manual checking is required is configured to judge whether the quality level of any one of the evaluation targets has degraded over a plurality of preset time periods. This makes the evaluation of dedicated portrait or vehicle checkpoints more targeted.
S502: if yes, triggering early warning, and manually checking the monitoring points.
In this embodiment, manual checking includes determining whether the monitoring point is installed or deployed unreasonably, for example in its height, angle, or image parameters, and whether the monitoring point has been abnormally adjusted or moved, so that targeted tuning can be performed in time. Through this embodiment, point locations whose quality changes abnormally and markedly can be alarmed in time, with high timeliness, minimizing the labor cost of point location verification.
S503: otherwise, acquiring the video frame sequence of the monitoring point in the next preset time period.
Through this embodiment, the image semantic description information of the key parts of the evaluation target is fully used to perform attribute evaluation on the target image, so as to determine the quality grade of the evaluation target type, and whether the current monitoring point requires manual checking is judged from the quality grades over different time periods. The image semantic description information evaluates not only the quality-related dimensions of the target image but also installation and deployment dimensions of the monitoring point, such as installation height, angle, and scene rationality, thereby feeding back the quality of the monitoring point comprehensively and accurately and giving early warning of abnormal conditions, so that more accurate monitoring point management is realized.
Referring to fig. 6, fig. 6 is a schematic framework diagram of an embodiment of the monitoring point location management device according to the present application. The management device 100 includes an acquisition module 10, an attribute evaluation module 12, a quality evaluation module 14, and a judgment module 16. Specifically, the acquisition module 10 is configured to acquire a video frame sequence collected by a monitoring point within a preset time period and to screen out of the video frame sequence a target image whose image quality meets a set image quality requirement, wherein the target image contains an evaluation target. The attribute evaluation module 12 is configured to perform attribute evaluation on at least one key part of the evaluation target in the target image to obtain a corresponding attribute evaluation result, where the attribute evaluation result is related to the image semantic description information of the corresponding key part. The quality evaluation module 14 is configured to determine the quality grade of the evaluation target using the attribute evaluation result. The judgment module 16 is configured to judge whether the monitoring point requires manual checking based on the quality grades of the evaluation target over a plurality of preset time periods. In this way, the image semantic description information of the key parts of the evaluation target is fully used to perform attribute evaluation on the target image, the quality grade of the evaluation target type is determined, and whether the current monitoring point requires manual checking is judged from the quality grades over different time periods; the image semantic description information evaluates not only the quality-related dimensions of the target image but also installation and deployment dimensions of the monitoring point, such as installation height, angle, and scene rationality, thereby feeding back the quality of the monitoring point comprehensively and accurately and giving early warning of abnormal conditions, so that more accurate monitoring point management is realized.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of the monitoring point location management device according to the present application. The device 1000 comprises a memory 101 and a processor 102 which are coupled to each other, wherein the memory 101 stores program instructions, and the processor 102 is configured to execute the program instructions to implement the monitoring point location management method mentioned in any of the above embodiments.
Specifically, the processor 102 may also be referred to as a CPU (Central Processing Unit). The processor 102 may be an integrated circuit chip having signal processing capability. The processor 102 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. In addition, the processor 102 may be implemented jointly by multiple integrated circuit chips.
Referring to fig. 8, fig. 8 is a block diagram of an embodiment of a computer-readable storage medium according to the present application. The computer-readable storage medium 20 stores a computer program 200 that can be read and executed by a processor to implement the monitoring point location management method of any of the above embodiments. The computer program 200 may be stored in the computer-readable storage medium 20 in the form of a software product and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The computer-readable storage medium 20 may be any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or a terminal device such as a computer, a server, a mobile phone, or a tablet.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A monitoring point location management method is characterized by comprising the following steps:
acquiring a video frame sequence acquired by a monitoring point in a preset time period, and screening out a target image with image quality meeting a set image quality requirement from the video frame sequence; wherein the target image comprises an evaluation target;
performing attribute evaluation on at least one key part of the evaluation target in the target image to obtain a corresponding attribute evaluation result, wherein the attribute evaluation result is related to image semantic description information of the corresponding key part;
determining the quality grade of the evaluation target by using the attribute evaluation result;
and judging whether the monitoring point requires manual checking based on the quality grades of the evaluation target over a plurality of preset time periods.
2. The monitoring point location management method according to claim 1, wherein
the step of performing attribute evaluation on at least one key part of the evaluation target in the target image to obtain a corresponding attribute evaluation result includes:
obtaining all key parts to be evaluated based on the type of the evaluation target, and obtaining corresponding image semantic description information for each key part;
judging whether each item of the image semantic description information is greater than or equal to its corresponding preset threshold;
if so, judging that the attribute evaluation result of the evaluation target is a first quality target;
otherwise, judging that the attribute evaluation result of the evaluation target is a second quality target.
3. The monitoring point location management method according to claim 2, wherein
the step of obtaining all the key parts to be evaluated based on the type of the evaluation target and obtaining the corresponding image semantic description information for each key part includes:
in response to the type of the evaluation target being a human body target, obtaining the key parts to be evaluated as including at least one of the trunk, head, and limbs, and obtaining corresponding image semantic description information for at least one of the trunk, head, and limbs, wherein the image semantic description information includes human body image size, human body image integrity, human body image definition, human body image detection confidence, and the proportion of human body images with an associated face; and/or,
in response to the type of the evaluation target being a human face target, obtaining the key parts to be evaluated as including the facial features, and obtaining corresponding image semantic description information for the facial features, wherein the image semantic description information includes face image size, face image integrity, face image definition, face angle, face image detection confidence, and whether a mask is worn; and/or,
in response to the type of the evaluation target being a motor vehicle target, obtaining the key parts to be evaluated as including at least one of the vehicle logo, lights, windows, and license plate, and obtaining corresponding image semantic description information for at least one of the vehicle logo, lights, windows, and license plate, wherein the image semantic description information includes motor vehicle image size, motor vehicle image integrity, motor vehicle image definition, motor vehicle image detection confidence, and whether an associated license plate exists; and/or,
in response to the type of the evaluation target being a license plate target, obtaining the key parts to be evaluated as including the printed characters, and obtaining corresponding image semantic description information for the printed characters, wherein the image semantic description information includes license plate image size, license plate image integrity, license plate image definition, and license plate image detection confidence.
4. The monitoring point location management method according to claim 2, wherein
the step of determining the quality level of the evaluation target using the attribute evaluation result includes:
acquiring the total number of the target images of the evaluation target in a preset time period;
acquiring the number of the second quality targets as the attribute evaluation results in the evaluation targets in a preset time period;
determining a quality grade of the evaluation target based on a ratio of the number of the second quality targets to the total number of the target images.
5. The monitoring point location management method according to claim 4, wherein
the step of determining the quality level of the evaluation target based on the ratio of the number of the second quality targets to the total number of the target images comprises:
comparing the ratio with a first threshold and a second threshold; if the ratio is smaller than the first threshold, determining that the quality grade of the evaluation target is a high quality grade; if the ratio is greater than or equal to the first threshold and less than the second threshold, determining that the quality grade of the evaluation target is a medium quality grade; and if the ratio is greater than or equal to the second threshold, determining that the quality grade of the evaluation target is a low quality grade, wherein the first threshold is smaller than the second threshold.
6. The monitoring point location management method according to claim 1, wherein
the step of judging whether manual checking of the monitoring points is needed or not based on the quality levels of the evaluation targets in the preset time periods comprises the following steps:
judging whether the quality levels of the evaluation targets in a plurality of preset time periods are reduced or not;
if yes, triggering early warning, and manually checking the monitoring points;
otherwise, acquiring the video frame sequence of the monitoring point in the next preset time period.
7. The monitoring point location management method according to claim 6, wherein
the types of the evaluation targets comprise at least two of human body target types, human face target types, motor vehicle target types and license plate target types;
the step of determining whether the quality levels of the evaluation targets within the preset time are reduced includes:
and judging whether the quality level of any one evaluation target in a plurality of preset time periods is reduced or not.
8. The monitoring point position management method according to claim 1, wherein the step of obtaining a sequence of video frames acquired by the monitoring point within a preset time period and screening out a target image of which the image quality meets a set image quality requirement from the sequence of video frames includes:
performing target tracking on each video frame in the video frame sequence to obtain a video frame where the evaluation target is located and coordinate information of the evaluation target in the current video frame;
obtaining corresponding image quality scores by utilizing the coordinate information of all the evaluation targets in the current video frame;
and taking the video frame with the highest image quality score as the target image.
9. A monitoring point location management device, comprising a memory and a processor coupled to each other, wherein the memory stores program instructions for execution by the processor to implement the monitoring point location management method of any one of claims 1-8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for implementing the monitoring point location management method of any one of claims 1-8.
CN202110865338.3A 2021-07-29 2021-07-29 Monitoring point position management method and device Pending CN113674224A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110865338.3A CN113674224A (en) 2021-07-29 2021-07-29 Monitoring point position management method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110865338.3A CN113674224A (en) 2021-07-29 2021-07-29 Monitoring point position management method and device

Publications (1)

Publication Number Publication Date
CN113674224A 2021-11-19

Family

ID: 78540754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110865338.3A Pending CN113674224A (en) 2021-07-29 2021-07-29 Monitoring point position management method and device

Country Status (1)

Country Link
CN (1) CN113674224A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101119482A (en) * 2007-09-28 2008-02-06 北京智安邦科技有限公司 Overall view monitoring method and apparatus
CN101216952A (en) * 2008-01-17 2008-07-09 大连大学 Dynamic spatiotemporal coupling denoise processing method for data catching of body motion
CN104780361A (en) * 2015-03-27 2015-07-15 南京邮电大学 Quality evaluation method for urban video monitoring system
CN106303403A (en) * 2015-06-12 2017-01-04 中国人民公安大学 Supervising device presetting bit setting, changing method and system
CN109792829A (en) * 2016-10-11 2019-05-21 昕诺飞控股有限公司 Control system, monitoring system and the method for controlling monitoring system of monitoring system
US20180339386A1 (en) * 2017-05-24 2018-11-29 Trimble Inc. Calibration approach for camera placement
EP3506166A1 (en) * 2017-12-29 2019-07-03 Bull SAS Prediction of movement and topology for a network of cameras
WO2020094091A1 (en) * 2018-11-07 2020-05-14 杭州海康威视数字技术股份有限公司 Image capturing method, monitoring camera, and monitoring system
CN109887040A (en) * 2019-02-18 2019-06-14 北京航空航天大学 The moving target actively perceive method and system of facing video monitoring
CN110276277A (en) * 2019-06-03 2019-09-24 罗普特科技集团股份有限公司 Method and apparatus for detecting facial image
CN112446849A (en) * 2019-08-13 2021-03-05 杭州海康威视数字技术股份有限公司 Method and device for processing picture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUANYUAN ZENG et al.: "Measuring the effectiveness of infrastructure-level detection of large-scale botnets", 2011 IEEE Nineteenth IEEE International Workshop on Quality of Service, 27 June 2011 (2011-06-27) *
毕国玲: "Research on Several Key Technologies in Intelligent Video Surveillance Systems" (智能视频监控系统中若干关键技术研究), China Doctoral Dissertations Full-text Database (Information Science and Technology), 15 October 2015 (2015-10-15) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116473520A (en) * 2023-05-18 2023-07-25 深圳市宗匠科技有限公司 Electronic equipment and skin analysis method and device thereof

Similar Documents

Publication Title
US20190138821A1 (en) Camera blockage detection for autonomous driving systems
CN102867415B (en) Video detection technology-based road jam judgement method
CN111783573B (en) High beam detection method, device and equipment
CN111401315B (en) Face recognition method based on video, recognition device and storage device
CN112257541A (en) License plate recognition method, electronic device and computer-readable storage medium
JP2019192209A (en) Learning target image packaging device and method for artificial intelligence of video movie
CN113674224A (en) Monitoring point position management method and device
CN115170851A (en) Image clustering method and device
CN114897872A (en) Method and device suitable for identifying cells in cell cluster and electronic equipment
CN116228756B (en) Method and system for detecting bad points of camera in automatic driving
CN106778765B (en) License plate recognition method and device
CN117197796A (en) Vehicle shielding recognition method and related device
CN116994084A (en) Regional intrusion detection model training method and regional intrusion detection method
CN113723282B (en) Vehicle driving prompting method, device, electronic equipment and storage medium
JP2019192201A (en) Learning object image extraction device and method for autonomous driving
CN112991397B (en) Traffic sign tracking method, apparatus, device and storage medium
CN114419531A (en) Object detection method, object detection system, and computer-readable storage medium
CN112101139B (en) Human shape detection method, device, equipment and storage medium
CN112261402B (en) Image detection method and system and camera shielding monitoring method and system
CN107886102B (en) Adaboost classifier training method and system
CN111597959B (en) Behavior detection method and device and electronic equipment
CN112906424B (en) Image recognition method, device and equipment
CN111242054B (en) Method and device for detecting capture rate of detector
CN114529858B (en) Vehicle state recognition method, electronic device, and computer-readable storage medium
CN118279677A (en) Target identification method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination