CN117478838B - Distributed video processing supervision system and method based on information security - Google Patents
- Publication number
- CN117478838B CN117478838B CN202311435215.1A CN202311435215A CN117478838B CN 117478838 B CN117478838 B CN 117478838B CN 202311435215 A CN202311435215 A CN 202311435215A CN 117478838 B CN117478838 B CN 117478838B
- Authority
- CN
- China
- Prior art keywords
- risk
- names
- target objects
- topic
- objects
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
Abstract
The invention discloses a distributed video processing supervision system and method based on information security, belonging to the technical field of information security. The system comprises a data acquisition module, a data analysis module, a risk identification module and an operation management module. The data acquisition module acquires the operation parameters of the live broadcast equipment and the captured video. The data analysis module analyses the video frame by frame, divides out a key region using the equipment's operation parameters, identifies target objects, calculates an importance index for each target object, and determines the video topic from those indices. The risk identification module judges the degree of association between each target object and the video topic, substitutes it into a formula to calculate each target object's risk index, and selects the risk objects to be monitored according to those indices. The operation management module uses distributed technology to control multiple devices, each monitoring the region of a different risk object, and performs image processing promptly when violating content appears, protecting information security.
Description
Technical Field
The invention relates to the technical field of information security, in particular to a distributed video processing supervision system and method based on information security.
Background
With the rapid development of the mobile internet, live video has become a popular form of media. People can watch live content anytime and anywhere through mobile phones, computers and other devices. Live video covers many fields, including entertainment, sports, education and news, and has become an important channel for obtaining information and entertainment. Live video, however, also faces problems and challenges. First, its real-time requirement is high: the live signal must be transmitted rapidly to viewer terminals so that viewers can watch the content in real time. Second, live video usually requires some processing, such as image enhancement, noise removal and object detection, to improve video quality and the viewing experience. Furthermore, live video may contain sensitive information, such as personal privacy or business secrets, which must be protected and processed to keep information secure.
At present, the information security problems that may arise during live broadcasting are usually addressed by manual observation or intelligent algorithmic identification. Manual observation means that a dedicated person watches the live environment during the broadcast and reminds the operator to adjust, or directly controls, the live camera's shot. This method has clear drawbacks: human reaction takes time, the live environment changes constantly, and when sensitive information is leaked people cannot respond in time. Delaying the live feed to leave enough time for manual review harms the real-time nature of the broadcast and degrades the viewer experience. Intelligent algorithmic identification is more efficient: computer-vision techniques analyse and identify the video in real time, automatically detecting and masking sensitive information. Compared with manual observation it greatly improves detection efficiency and processing speed, but it still has unavoidable shortcomings:
1. It lacks the flexibility that manual observation has in identifying sensitive information: the detection categories cannot be adjusted for different live environments, and only mechanical screening against an existing comparison library is possible.
2. It lacks sufficiently fine image-processing capability: live video has strict real-time requirements, so only a short interval is available between recording, processing and sending the video to the network. Within that time a single device cannot supply the computing power needed for pixel-level image processing; when device performance is poor, even still-image-level processing cannot be completed, so only frame dropping or blanket masking of a fixed video region is possible, greatly degrading the live-broadcast experience. A more flexible and efficient technical solution for information-security detection and processing is therefore needed.
Disclosure of Invention
The invention aims to provide a distributed video processing supervision system and method based on information security, which are used for solving the problems in the background technology.
In order to solve the technical problems, the invention provides the following technical scheme: a distributed video processing supervision method based on information security, the method comprising the steps of:
S1, acquiring the operation parameters of the live broadcast equipment and the captured real-time video in real time, and acquiring the latest topic library;
S2, dividing the video picture into regions by analysing the operation parameters, identifying target objects and calculating importance indexes;
S3, selecting a subject object according to the importance index, and judging the association degree of each target object;
S4, monitoring and image processing of the target objects with different degrees of association using distributed technology.
In S1, the operation parameters are the focal length and shooting distance of the live broadcast equipment's camera; the shooting distance is the distance between the camera and the shooting target, measured by a distance sensor installed inside the live broadcast equipment. The topic library serves as a comparison library of live-broadcast types and is used to identify the type of each broadcast; it contains a topic set for each live-broadcast type, and each topic set contains the category names of the different objects under that topic.
To increase coverage of live-type identification, the topic library should contain enough topic sets when it is built to meet the identification needs of different live-broadcast types, and the category names in each topic set should cover, as far as possible, the object categories that may appear during a broadcast of the corresponding type.
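As a concrete illustration of the structure described above, the topic library can be modelled as a mapping from live-broadcast type to a topic set of category names. All names below are hypothetical placeholders, not taken from the patent; a minimal sketch in Python:

```python
# Hypothetical topic library: each live-broadcast type maps to a topic set,
# i.e. the category names of objects expected under that topic.
TOPIC_LIBRARY = {
    "cooking": {"person", "knife", "bowl", "cutting board", "stove"},
    "sports": {"person", "ball", "goal", "jersey", "whistle"},
    "education": {"person", "book", "whiteboard", "laptop"},
}

def lookup_topic_sets(library):
    """Return the available live-broadcast types in deterministic order."""
    return sorted(library)
```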
In S2, the specific steps are as follows:
S201, acquiring the real-time video shot by the live broadcast equipment, decomposing the video into single-frame images using OpenCV, screening all single-frame images, and removing blurred ones by an edge-detection method.
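The screening step can be sketched without the full OpenCV pipeline: the blur check below uses the variance of a simple Laplacian (an edge-energy measure) as the sharpness score, which is one common realisation of the idea. The patent does not fix the exact operator, so the operator and threshold here are assumptions:

```python
import numpy as np

def laplacian_variance(gray):
    """Edge-energy sharpness score: variance of a 4-neighbour Laplacian.
    Low values indicate a blurred frame."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def screen_frames(frames, threshold):
    """Keep only single-frame images whose edge energy exceeds the threshold."""
    return [f for f in frames if laplacian_variance(f) >= threshold]
```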
S202, analysing the remaining single-frame images one by one, obtaining the focal length and shooting distance of the live broadcast camera at the time corresponding to each single-frame image, and substituting them into a formula to calculate the weight ratio R of each single-frame image. Taking the centre point of the single-frame image as the centre, the product of the image length and R as the length, and the product of the image width and R as the width, a rectangular region is divided off as the key region. The weight-ratio calculation formula is as follows:
where R is the weight ratio, α is the focal-length influence coefficient, J is the focal length, J_max is the device's maximum focal length, β is the shooting-distance influence coefficient, a is the distance constant, and P is the shooting distance.
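The formula image itself is not reproduced in this text. Given the variable list above, and the direct proportionality to focal length and inverse proportionality to shooting distance stated later in the description, one plausible (purely hypothetical) reconstruction is R = α·(J/J_max) + β·(a/P), clamped into [0, 1] so the key region never exceeds the frame:

```python
def weight_ratio(J, J_max, P, alpha=0.5, beta=0.5, a=1.0):
    """Hypothetical reconstruction of the weight ratio R: grows with the
    relative focal length J/J_max, shrinks as shooting distance P grows
    (P > 0 assumed), clamped so the key region stays inside the frame."""
    R = alpha * (J / J_max) + beta * (a / P)
    return max(0.0, min(R, 1.0))

def key_region(width, height, R):
    """Centred rectangle of size (width*R, height*R): (x0, y0, x1, y1)."""
    w, h = width * R, height * R
    cx, cy = width / 2, height / 2
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```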
S203, detecting all target objects in each single-frame image with a YOLOv target detection algorithm and assigning each an identifier; extracting the characteristic information of each target object, analysing its category and position, and calculating the number of pixels it occupies; target objects whose position lies inside the key region are marked as key, and the others as ordinary.
S204, acquiring the position coordinates (x_i, y_i) of each key target object and the centre-point coordinates (x_z, y_z) of the corresponding single-frame image, and substituting them, together with the key target object's pixel count, into a formula to calculate an importance index; each key target object corresponds to one importance index. The calculation formula is as follows:
where ZY is the importance index, S_i is the pixel count of the key target object, S_z is the pixel count of the single-frame image's key region, u is the distance influence coefficient, and c is the position constant.
The position coordinates of a target object are taken as the centre-point coordinates of the region it occupies, and its pixel count is the number of pixels in that region; the larger the pixel count, the larger the area the target object covers.
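As with the weight ratio, the importance-index formula image is not reproduced here. A hypothetical reconstruction consistent with the variable list: the index grows with the object's pixel share S_i/S_z of the key region and falls off with its distance from the frame centre, with c > 0 keeping the value finite at the centre:

```python
import math

def importance_index(xi, yi, xz, yz, Si, Sz, u=1.0, c=1.0):
    """Hypothetical reconstruction of the importance index ZY:
    pixel share of the key region, attenuated by the object's
    Euclidean distance from the frame centre (x_z, y_z)."""
    dist = math.hypot(xi - xz, yi - yz)
    return (Si / Sz) * (u / (dist + c))
```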
S205, performing similarity recognition between target objects in different single-frame images and associating those whose similarity exceeds the threshold and that do not belong to the same single-frame image; the importance indices of the associated key target objects are summed to obtain a total importance index, the associated target objects in different single-frame images are treated as the same target object, and their identifiers are unified.
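Unifying identifiers across frames, as S205 describes, is naturally done with a union-find structure; a minimal sketch (the similarity computation itself is assumed to have already produced the matched pairs):

```python
def unify_identifiers(pairs, ids):
    """Union-find: merge identifiers of objects matched across frames
    (similarity above threshold, frames differ), so each physical object
    ends up with one canonical identifier."""
    parent = {i: i for i in ids}

    def find(x):
        # Path-halving lookup of the canonical identifier.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)
    return {i: find(i) for i in ids}
```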
In S3, the specific steps are as follows:
S301, taking each key target object whose total importance index exceeds the importance-index threshold as a subject object, and placing the category names of all subject objects into a subject-category set {Q_1, Q_2, Q_3, ..., Q_n}, where n is the number of category names and Q_n is the n-th category name.
S302, comparing every category name in the subject-category set with the category names in each topic set of the topic library, and marking each topic-set category name that matches one in the subject-category set. After all topic sets in the library have been compared, the topic sets are ranked in descending order by the number of marked category names, and the first-ranked topic set is selected as the current topic set.
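The topic-set selection in S302 reduces to counting shared category names and taking the best-matching set; a sketch with hypothetical set contents (ties broken alphabetically here, a detail the patent does not specify):

```python
def select_current_topic(subject_categories, topic_library):
    """Pick the topic set sharing the most category names with the
    subject objects; iterate in sorted order for a deterministic tie-break."""
    return max(sorted(topic_library),
               key=lambda name: len(topic_library[name] & subject_categories))
```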
S303, obtaining the category name of every target object and comparing it with the category names in the current topic set. If the same category name exists there, the target object's degree of association is set to strong association. If not, the category name is compared against all other topic sets in the library; if a match is found there, the degree of association is set to weak association, and if no match is found anywhere, it is set to no association.
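The three-way association test in S303 can be written directly; the library contents below are placeholders:

```python
def association_degree(category, current_name, topic_library):
    """Strong if the category appears in the current topic set, weak if it
    appears in any other topic set, otherwise no association."""
    if category in topic_library[current_name]:
        return "strong"
    if any(category in s for name, s in topic_library.items()
           if name != current_name):
        return "weak"
    return "none"
```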
In S4, the specific steps are as follows:
S401, setting a different influence coefficient for each degree of association, from largest to smallest in the order no association, weak association, strong association. The degree of association of every target object is obtained, and the corresponding influence coefficient and the target object's pixel count are substituted into the following formula to calculate a risk index:
FX=K×E×S
where FX is the target object's risk index, K is the influence coefficient, E is a constant, and S is the target object's pixel count.
S402, acquiring the number I of callable devices among the distributed devices, ranking the target objects in descending order of risk index, selecting the top I as risk objects, and allocating one device to each risk object in turn; each allocated device monitors the region where its risk object is located.
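S401 and S402 together amount to scoring FX = K × E × S and assigning the top-I objects to devices; the coefficient values below (largest for no association, smallest for strong association, as S401 requires) are illustrative assumptions:

```python
# Hypothetical influence coefficients: no association riskiest.
INFLUENCE = {"none": 3.0, "weak": 2.0, "strong": 1.0}

def risk_index(assoc, pixels, E=1.0):
    """FX = K * E * S."""
    return INFLUENCE[assoc] * E * pixels

def assign_devices(objects, num_devices):
    """Rank objects by descending risk index and map the top-I
    risk objects to device slots 0..I-1."""
    ranked = sorted(objects,
                    key=lambda o: risk_index(o["assoc"], o["pixels"]),
                    reverse=True)
    return {o["id"]: dev for dev, o in enumerate(ranked[:num_devices])}
```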
S403, when a risk object is found to be a violating object or violating behaviour is detected, performing pixel-level image processing on the region where the risk object is located, without affecting the overall clarity of the single-frame image; the processing methods include mosaic pixelation, pixel repair and pixel replacement.
Mosaic pixelation: apply mosaic processing to the violating region, pixelating it to hide the content. Pixel repair: repair the violating pixels into compliant pixels, achieving a corrective effect. Pixel replacement: replace the violating pixels with compliant pixels, achieving a corrective effect.
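Of the three processing methods, mosaic pixelation is the simplest to sketch: each tile of the violating region is replaced by its mean value, hiding detail only inside that region (the block size is an assumed parameter, and the sketch targets a grayscale region):

```python
import numpy as np

def mosaic(region, block=4):
    """Mosaic pixelation: replace each block x block tile of the
    violating region with its mean, leaving pixels outside untouched
    because only the extracted region is passed in."""
    out = region.astype(np.float64).copy()
    h, w = out.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[...] = tile.mean()
    return out.astype(region.dtype)
```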
The distributed video processing supervision system based on information security comprises a data acquisition module, a data analysis module, a risk identification module and an operation management module.
The data acquisition module acquires the operation parameters of the live broadcast equipment and the captured real-time video. The data analysis module analyses the video frame by frame, divides out the key region using the equipment's operation parameters, identifies target objects, calculates their importance indices, and determines the video topic from those indices. The risk identification module judges the degree of association between each target object and the video topic, substitutes it into a formula to calculate each target object's risk index, and selects the risk objects to be monitored accordingly. The operation management module uses distributed technology to control different devices to monitor the region of each risk object and, when a risk object is a violating object or violating behaviour occurs, performs image processing to protect information security.
The data acquisition module comprises an image information acquisition unit, an equipment information acquisition unit and a theme information acquisition unit.
The image information acquisition unit is used for acquiring real-time video shot by live broadcast equipment. The equipment information acquisition unit is used for acquiring operation parameters of live equipment during operation, wherein the operation parameters comprise a focal length and a shooting distance, and the shooting distance refers to the distance between live equipment and a shooting target. The topic information acquisition unit is used for acquiring a topic library in the system, wherein the topic library comprises topic sets of various live broadcast types, and the topic sets comprise category names of different objects under the same topic.
The data analysis module comprises a region dividing unit, an object recognition unit and a theme analysis unit.
The region dividing unit divides the key region in the video picture. First, the real-time video shot by the live broadcast equipment is decomposed into single-frame images. Second, the operation parameters of the equipment at the time corresponding to each single-frame image are obtained and substituted into the formula to calculate each image's weight ratio. Finally, taking the centre of the single-frame image as the centre, the product of the image length and the weight ratio R as the length, and the product of the image width and R as the width, a rectangular region is divided off as the key region; each single-frame image has one key region.
During a live broadcast, the live subject is usually located in the central area of the picture, and the size of that central region should be directly proportional to the focal length and inversely proportional to the shooting distance.
Direct proportion: when the focal length is large, the picture is close to the subject, so the central region should be larger to reasonably contain the live subject.
Inverse proportion: when the shooting distance is large, the picture is far from the subject, so the central region should be smaller to reasonably contain the live subject.
The object recognition unit identifies key target objects. A YOLOv target detection algorithm detects the target objects in each single-frame image, extracts their characteristic information, analyses their categories and positions, calculates the number of pixels each occupies, and treats those positioned inside the key region as key target objects.
The theme analysis unit is used for analyzing the video theme.
First, the position coordinates of each key target object and the centre-point coordinates of the corresponding single-frame image are obtained and substituted, together with the key target object's pixel count, into the formula to calculate an importance index.
Second, the same target object appearing in different single-frame images is associated, the importance indices of the associated key target objects are summed into a total importance index, and the category names of key target objects whose total importance index exceeds the threshold are placed into the subject-category set.
Finally, every category name in the subject-category set is compared with the category names in each topic set of the topic library, and matching names in the topic sets are marked; after all topic sets have been compared, they are ranked in descending order by the number of marked names, and the first-ranked topic set is selected as the current topic set.
The risk identification module comprises a relevance judgment unit and a risk judgment unit.
The relevance judging unit judges the degree of association between each target object and the video topic. Each target object's category name is compared with the category names in the current topic set: if a match exists, the degree of association is set to strong association; if not, the name is compared against all other topic sets in the library, and the degree of association is set to weak association if a match is found there, or to no association if no match is found anywhere.
The risk judging unit calculates a risk index for each target object. Different influence coefficients are set for the different degrees of association, from largest to smallest in the order no association, weak association, strong association; the influence coefficient corresponding to the target object's degree of association and its pixel count are substituted into the formula to calculate the risk index.
A high association influence coefficient indicates that the target object has little relation to the video topic, so the probability of a violation is higher. A larger pixel count means the target object occupies a larger share of the whole picture, so the impact of a violation would be worse. Considering the influence coefficient and the pixel count together gives a more complete estimate of the risk a target object may pose.
The operation management module supervises the risk objects. First, the number I of callable devices among the distributed devices is obtained and the target objects are ranked by risk index; second, the top I target objects are selected as risk objects and a device is allocated to each in turn, monitoring the region where its risk object is located; finally, when a risk object is found to be a violating object or violating behaviour occurs, pixel-level image processing is performed on the region where the risk object is located, without affecting the overall clarity of the single-frame image.
Each device monitors only one risk object, so multiple devices together monitor and process the multiple risk objects in one single-frame image.
When the system is running, the real-time video is continuously decomposed into single-frame images; after key-region division and video-topic identification, the risk objects are found quickly, and distributed technology uses multiple devices to supervise the regions of the different risk objects separately.
When a violation occurs on some risk objects, the computing resources of multiple devices are used for rapid pixel-level image processing; only the regions where the risk objects are located are processed, and once processing is complete the processed region images are quickly merged back into the original single-frame image.
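Merging the processed regions back into the original single-frame image, as described above, is a straightforward patch paste; the (y, x) patch-origin convention here is an assumption:

```python
import numpy as np

def merge_patches(frame, patches):
    """Paste each processed patch (y, x, array) back into a copy of the
    original single-frame image, leaving all other pixels untouched."""
    out = frame.copy()
    for y, x, patch in patches:
        h, w = patch.shape[:2]
        out[y:y + h, x:x + w] = patch
    return out
```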
Compared with the prior art, the invention has the following beneficial effects:
1. Rapid identification of video topics: through key-region division and importance-index calculation, subject objects are found quickly, and combined with comparison against the topic library the video topic can be located rapidly, improving the flexibility and accuracy of information-security detection.
2. Accurate positioning of risk objects: the risk index of each target object jointly considers the association influence coefficient and the pixel count; the influence coefficient reflects the probability of a violation and the pixel count reflects its consequences, and target objects with a high influence coefficient and many pixels are monitored first, improving the efficiency of information-security monitoring and reducing the risk of sensitive-information leakage.
3. Pixel-level image processing: distributed technology calls multiple devices to monitor each risk object separately. When a violation occurs, each device only needs to perform pixel-level processing on a partial region of the still image passed to it; the task is simple, consumes little computing power and takes little time. The processed pixel regions are merged into a new single-frame image and sent to the network, improving the fineness of image processing without affecting the real-time performance of the live video.
In summary, compared with the traditional technology, the method has the advantages of rapid identification of video subjects, accurate positioning of risk objects and pixel-level image processing, and can improve flexibility and efficiency of information security monitoring.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of a distributed video processing supervision method based on information security according to the present invention;
fig. 2 is a schematic structural diagram of a distributed video processing supervision system based on information security according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the invention provides a distributed video processing supervision method based on information security, which comprises the following steps:
S1, acquiring the operation parameters of the live broadcast equipment and the captured real-time video in real time, and acquiring the latest topic library;
S2, dividing the video picture into regions by analysing the operation parameters, identifying target objects and calculating importance indexes;
S3, selecting a subject object according to the importance index, and judging the association degree of each target object;
S4, monitoring and image processing of the target objects with different degrees of association using distributed technology.
In S1, the operation parameters refer to a focal length and a shooting distance of a camera of the live broadcast device, the shooting distance refers to a distance between the camera and a shooting target, and a distance sensor installed inside the live broadcast device performs data acquisition. The subject library is used as a comparison library of live broadcast types and is used for identifying the type of each live broadcast, the subject library contains subject sets of various live broadcast types, and the subject sets contain category names of different objects under the same subject.
To increase the degree of live type identification coverage, the topic library should contain a sufficient number of topic sets at build time to meet the identification requirements of different live types. The category names of the items in each topic collection should also cover the item categories that may occur during the live broadcast process for the corresponding live broadcast type as much as possible.
In S2, the specific steps are as follows:
S201, acquiring real-time video shot by live broadcast equipment, decomposing the video into single-frame images by using the existing OpenCV, screening all the single-frame images, and removing the blurred single-frame images by adopting an edge detection method.
S202, analyzing the remaining single-frame images one by one, obtaining the focal length and shooting distance of the live broadcast device camera at the time corresponding to each single-frame image, substituting them into the formula, and calculating the weight ratio R of each single-frame image. Taking the centre point of the single-frame image as the centre, the product of the single-frame image length and the weight ratio R as the length, and the product of the single-frame image width and the weight ratio R as the width, a rectangular area is divided as the key region. The weight ratio calculation formula is as follows:
where R is the weight ratio, alpha is the focal length influence coefficient, J is the focal length, J_max is the maximum focal length of the device, beta is the shooting distance influence coefficient, a is the distance constant, and P is the shooting distance.
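The key-region division can be sketched as follows. The patent's weight-ratio expression itself is an image and is not reproduced in this text, so the formula in `weight_ratio` is an assumed stand-in chosen only to match the stated behaviour (proportional to focal length J, inverse to shooting distance P); `weight_ratio` and `key_region` are hypothetical names.

```python
def weight_ratio(J, J_max, P, alpha=0.4, beta=0.2, a=10.0):
    """ASSUMED stand-in for the patent's weight-ratio formula R:
    grows with focal length J, shrinks with shooting distance P."""
    return alpha * (J / J_max) + beta * (a / (a + P))

def key_region(width, height, R):
    """Rectangle centred on the frame centre with sides scaled by R,
    returned as (x0, y0, x1, y1)."""
    w, h = width * R, height * R
    cx, cy = width / 2.0, height / 2.0
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```

For a 1000×500 frame with R = 0.5, the key region is the centred 500×250 rectangle (250, 125, 750, 375).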
S203, detecting all target objects in each single-frame image with the YOLOv target detection algorithm, assigning identifiers, extracting the characteristic information of the target objects, analyzing their type and position, and calculating the number of pixels each occupies; target objects located within the key region are marked as key, and the other target objects are marked as common.
S204, acquiring the position coordinates (x_i, y_i) of each key target object and the centre point coordinates (x_z, y_z) of the corresponding single-frame image, and substituting the position coordinates and the pixel number of the key target object into the formula to calculate an importance index, wherein each key target object corresponds to one importance index; the calculation formula is as follows:
where ZY is the importance index, S_i is the number of pixels of the key target object, S_z is the number of pixels of the key region of the single-frame image, u is the distance influence coefficient, and c is the position constant.
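A sketch of the importance index, again under an assumption: the patent's ZY formula is an image not reproduced in the text, so the expression below is a hypothetical stand-in that combines the two stated ingredients — the object's pixel share of the key region, discounted by its distance from the frame centre using u and c.

```python
def importance_index(xi, yi, si, xz, yz, sz, u=0.0001, c=1.0):
    """ASSUMED stand-in for the patent's importance index ZY:
    pixel share si/sz of the key region, discounted by the squared
    distance from (xi, yi) to the frame centre (xz, yz)."""
    d2 = (xi - xz) ** 2 + (yi - yz) ** 2   # squared distance to centre
    return (si / sz) / (u * d2 + c)
```

Objects that are both large and near the centre score highest under this form.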
The position coordinates of a target object are the coordinates of the centre point of the area it occupies, and its pixel number is the number of pixels in that area; the larger the pixel number, the larger the area the target object covers.
S205, performing similarity recognition between target objects in different single-frame images, associating target objects whose similarity exceeds the similarity threshold and that do not belong to the same single-frame image, and summing the importance indices of the associated key target objects to obtain a total importance index. Associated target objects in different single-frame images are treated as the same target object, and the identifiers of all associated target objects are unified.
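The association step in S205 can be sketched with a union-find over detections; `unify_and_total`, the detection dict layout, and the `similarity` callback are hypothetical names (the patent does not specify the similarity measure or data structures).

```python
def unify_and_total(detections, similarity, threshold=0.8):
    """detections: list of dicts {"id", "frame", "importance"}.
    similarity(a, b) -> [0, 1].  Detections from different frames that
    exceed the threshold share one identifier; the importance indices
    of each unified group are summed into a total importance index."""
    parent = list(range(len(detections)))      # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            a, b = detections[i], detections[j]
            if a["frame"] != b["frame"] and similarity(a, b) > threshold:
                parent[find(i)] = find(j)
    totals = {}
    for i, det in enumerate(detections):
        det["id"] = detections[find(i)]["id"]  # unified identifier
        totals[det["id"]] = totals.get(det["id"], 0.0) + det["importance"]
    return totals
```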
In S3, the specific steps are as follows:
S301, taking key target objects whose total importance index is greater than the importance index threshold as subject objects, and putting the names of the categories to which all subject objects belong into the theme class set {Q_1, Q_2, Q_3, …, Q_n}, where n represents the number of category names and Q_n represents the nth category name.
S302, comparing every category name in the theme class set with every category name in each topic set in the topic library, judging whether a category name in a topic set is the same as one in the theme class set, and marking that category name in the corresponding topic set if so. After all topic sets in the topic library have been compared, the topic sets are ranked by the number of marked category names, and the first-ranked topic set is selected as the current topic set.
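A minimal sketch of the topic-set selection in S302, assuming the topic library is a mapping from topic name to a set of category names (`select_current_topic` and the data layout are hypothetical):

```python
def select_current_topic(theme_class_names, topic_library):
    """topic_library: {topic_name: set of category names}.
    The topic set sharing the most category names with the theme
    class set is selected as the current topic set."""
    counts = {
        topic: len(theme_class_names & names)   # marked-name count
        for topic, names in topic_library.items()
    }
    return max(counts, key=counts.get)
```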
S303, obtaining the category name of each target object and comparing it with every category name in the current topic set. If the same category name exists in the current topic set, the association degree of the corresponding target object is set to strong association. If not, comparison continues with the category names in all other topic sets in the topic library except the current topic set; if the same category name is found there, the association degree of the corresponding target object is set to weak association, otherwise it is set to no association.
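The three-way decision in S303 maps directly to a small function (`association_degree` and the library layout are hypothetical names, following the same mapping as above):

```python
def association_degree(category, current_set, topic_library, current_name):
    """Strong if the category appears in the current topic set, weak if
    it appears in any other topic set in the library, otherwise none."""
    if category in current_set:
        return "strong"
    for name, names in topic_library.items():
        if name != current_name and category in names:
            return "weak"
    return "none"
```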
In S4, the specific steps are as follows:
S401, setting a different influence coefficient for each association degree, the coefficients decreasing in the order no association, weak association, strong association. The association degree of every target object is obtained, and the influence coefficient corresponding to that degree and the object's pixel number are substituted into the formula to calculate the risk index. The formula is as follows:
FX=K×E×S
wherein FX is a risk index of the target object, K is an influence coefficient, E is a constant, and S is the number of pixels of the target object.
S402, acquiring the number I of callable devices among the distributed devices, ranking the target objects by risk index, selecting the top I target objects as risk objects, assigning one device to each risk object in turn, and having each assigned device monitor the region where its risk object is located.
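S401–S402 can be sketched together: the risk formula FX = K × E × S is the one given in the text, while `assign_devices`, the object dict layout, and the concrete coefficient values are hypothetical illustration.

```python
def assign_devices(objects, coefficients, I, E=1.0):
    """objects: list of dicts {"id", "association", "pixels"}.
    Computes FX = K * E * S per object (the patent's risk formula),
    then assigns one device to each of the I highest-risk objects."""
    for obj in objects:
        K = coefficients[obj["association"]]    # larger for weaker association
        obj["risk"] = K * E * obj["pixels"]
    ranked = sorted(objects, key=lambda o: o["risk"], reverse=True)
    return {f"device-{k}": obj["id"] for k, obj in enumerate(ranked[:I], 1)}
```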
S403, when the risk object is an illegal object or illegal behaviors are monitored, carrying out pixel-level image processing on the area where the risk object is located on the premise that the overall definition of the single-frame image is not affected, wherein the processing modes comprise mosaic pixelation, a pixel repairing method and a pixel replacing method.
Mosaic pixelation: apply mosaic processing to the violating region, pixelating it to hide its content. Pixel repair: repair the violating part into compliant pixels, achieving correction. Pixel replacement: replace the pixels of the violating part with compliant pixels, likewise achieving correction.
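Of the three modes, mosaic pixelation is the most mechanical; a minimal NumPy sketch is below (`mosaic` and the block size are hypothetical names and parameters — the patent does not specify an implementation). Only the given region is touched, leaving the rest of the frame's definition unaffected.

```python
import numpy as np

def mosaic(frame, x0, y0, x1, y1, block=8):
    """Mosaic-pixelate frame[y0:y1, x0:x1] in place: each block x block
    tile is replaced by its mean value, hiding detail in that region."""
    region = frame[y0:y1, x0:x1]               # view into the frame
    h, w = region.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1), keepdims=True).astype(tile.dtype)
    return frame
```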
Referring to fig. 2, the invention provides a distributed video processing supervision system based on information security, which comprises a data acquisition module, a data analysis module, a risk identification module and an operation management module.
The data acquisition module is used for acquiring the operation parameters of the live broadcast equipment and the shot real-time video; the data analysis module analyzes the video information frame by frame, divides key areas by combining the operation parameters of the equipment, identifies target objects, calculates important indexes of the target objects, and determines video subjects according to the important indexes; the risk identification module is used for judging the association degree of the target object and the video theme, substituting the association degree into a formula to calculate the risk index of each target object, and finding out the risk object to be monitored according to the risk index; the operation management module adopts a distributed technology to control different devices to monitor the area where each risk object is located, and when the risk object is an illegal object or has illegal behaviors, the image processing is carried out to protect the information security.
The data acquisition module comprises an image information acquisition unit, an equipment information acquisition unit and a theme information acquisition unit.
The image information acquisition unit is used for acquiring real-time video shot by live broadcast equipment. The equipment information acquisition unit is used for acquiring operation parameters of live equipment during operation, wherein the operation parameters comprise a focal length and a shooting distance, and the shooting distance refers to the distance between live equipment and a shooting target. The topic information acquisition unit is used for acquiring a topic library in the system, wherein the topic library comprises topic sets of various live broadcast types, and the topic sets comprise category names of different objects under the same topic.
The data analysis module comprises a region dividing unit, an object recognition unit and a theme analysis unit.
The region dividing unit is used for dividing the key region in the video picture. Firstly, the real-time video shot by the live broadcast device is decomposed into single-frame images; secondly, the operation parameters of the live broadcast device at the time corresponding to each single-frame image are obtained and substituted into the formula to calculate the weight ratio of each single-frame image; finally, taking the centre point of the single-frame image as the centre, the product of the single-frame image length and the weight ratio R as the length, and the product of the single-frame image width and the weight ratio R as the width, a rectangular area is divided as the key region, each single-frame image having one key region.
During a live broadcast, the live broadcast object is usually located in the central area of the picture, and the size of this central area is proportional to the focal length and inversely proportional to the shooting distance.
Proportional relation: a large focal length brings the picture close to the shot object, so the central area should be larger to reasonably contain the live broadcast object.
Inverse relation: a large shooting distance places the picture far from the shot object, so the central area should be smaller to reasonably contain the live broadcast object.
The object recognition unit is used for recognizing key target objects. The target objects in each single-frame image are detected with the YOLOv target detection algorithm, their characteristic information is extracted, their type and position are analyzed, and the number of pixels each occupies is calculated; target objects located within the key region are taken as key target objects.
The theme analysis unit is used for analyzing the video theme.
Firstly, position coordinates of key target objects and position coordinates of central points of corresponding single-frame images are obtained, and the position coordinates and the pixel numbers of the key target objects are substituted into a formula to calculate an important index.
And secondly, correlating the same target object in different single-frame images, summing the important indexes of the correlated key target objects to obtain a total important index, and placing the names of the key target objects with the total important index larger than the important index threshold value into a theme type set.
Finally, every category name in the theme class set is compared with every category name in each topic set in the topic library, judging whether a category name in a topic set is the same as one in the theme class set and marking that category name in the corresponding topic set if so; after all topic sets in the topic library have been compared, the topic sets are ranked by the number of marked category names and the first-ranked topic set is selected as the current topic set.
The risk identification module comprises a relevance judgment unit and a risk judgment unit.
The relevance judging unit is used to judge the association degree of each target object with the video theme. The category name of each target object is compared with every category name in the current topic set. If the same category name exists in the current topic set, the association degree of the corresponding target object is set to strong association; if not, comparison continues with the category names in all other topic sets in the topic library except the current topic set, and the association degree is set to weak association if the same category name is found there, or to no association otherwise.
The risk judging unit is used for calculating a risk index of each target object. And setting different influence coefficients for different association degrees, setting the influence coefficients from large to small according to the order of no association, weak association and strong association, substituting the influence coefficients and the pixel number corresponding to the association degree of the target object into a formula, and calculating a risk index.
A high influence coefficient indicates that the target object has little association with the video theme, so a violation is more likely; a larger pixel number means the target object occupies a larger share of the picture, so a violation would have a worse impact. Considering the influence coefficient and the pixel number together gives a more complete estimate of the risk a target object may pose.
The operation management module is used for supervising the risk objects. Firstly, the number I of callable devices among the distributed devices is acquired and the target objects are ranked by risk index; secondly, the top I target objects are selected as risk objects and one device is assigned to each in turn, each assigned device monitoring the area where its risk object is located; finally, when a risk object is monitored to be an illegal object or illegal behavior occurs, pixel-level image processing is carried out on the area where the risk object is located without affecting the overall definition of the single-frame image.
Each device monitors only one risk object, so multiple devices can monitor and process multiple risk objects within one single-frame image.
When the system operates, the real-time video is continuously decomposed into single-frame images, after the key region division and the video theme identification, the risk objects are quickly found out, and the regions where different risk objects are located are respectively supervised by using a plurality of devices by adopting a distributed technology.
When violations occur for some of the risk objects, the computing resources of multiple devices are used for fast pixel-level image processing; only the region where each risk object is located is processed, and after processing the region images are quickly merged back into the original single-frame image.
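The merge-back step amounts to pasting each processed sub-image at its source offset; a minimal sketch (`merge_region` is a hypothetical name):

```python
import numpy as np

def merge_region(frame, processed, x0, y0):
    """Paste a processed sub-image back into the original single-frame
    image at its source offset; only the risk region is re-encoded."""
    h, w = processed.shape[:2]
    out = frame.copy()
    out[y0:y0 + h, x0:x0 + w] = processed
    return out
```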
Embodiment one:
Assuming that, at the time corresponding to a certain single-frame image, the focal length of the live broadcast device camera is 1.5 mm, the shooting distance is 2 m, the focal length influence coefficient is 0.4, the maximum focal length of the camera is 3 mm, the shooting distance influence coefficient is 0.2, and the distance constant is 10, the weight ratio of the single-frame image is calculated by substituting into the formula:
weight ratio:
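The computed value in the original embodiment is given only as a formula image; the script below reruns the substitution under the same assumed stand-in form used earlier, R = alpha·(J/J_max) + beta·(a/(a+P)), so the result illustrates the procedure rather than reproducing the patent's own figure.

```python
alpha, beta = 0.4, 0.2       # influence coefficients from the embodiment
J, J_max = 1.5, 3.0          # focal length / maximum focal length (mm)
a, P = 10.0, 2.0             # distance constant / shooting distance (m)

# ASSUMED stand-in formula (the patent's own expression is an image):
R = alpha * (J / J_max) + beta * (a / (a + P))
print(round(R, 4))           # → 0.3667 under this assumed form
```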
Embodiment two:
Assuming that the position coordinates of a certain key target object are (250, 500) and its pixel number is 1200; that the centre point coordinates of the corresponding single-frame image are (500, 500) and the pixel number of the key region is 25000; and that the distance influence coefficient is 0.0001 and the position constant is 1, the importance index of the key target object is calculated by substituting into the formula:
important index:
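As with embodiment one, the result here is a formula image in the original; the script reruns the substitution under the assumed stand-in form for ZY sketched earlier (pixel share discounted by squared centre distance), so the value illustrates the mechanics only.

```python
xi, yi = 250.0, 500.0        # key target object position
xz, yz = 500.0, 500.0        # frame centre point
si, sz = 1200, 25000         # object pixels / key-region pixels
u, c = 0.0001, 1.0           # distance influence coefficient / position constant

d2 = (xi - xz) ** 2 + (yi - yz) ** 2   # squared centre distance = 62500
ZY = (si / sz) / (u * d2 + c)          # ASSUMED stand-in formula
print(round(ZY, 6))                    # → 0.006621 under this assumed form
```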
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present invention, and the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or replace some of the technical features with equivalents. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within its scope of protection.
Claims (2)
1. A distributed video processing supervision method based on information security, characterized by comprising the following steps:
s1, acquiring operation parameters of live broadcast equipment and shot real-time video in real time, and acquiring a latest theme library;
S2, dividing regions of the video picture by analyzing the operation parameters, identifying target objects and calculating important indexes;
S3, selecting a subject object according to the importance index, and judging the association degree of each target object;
S4, monitoring and image processing are carried out on target objects with different degrees of association by adopting a distributed technology;
In S1, the operation parameters refer to the focal length and shooting distance of the camera of the live broadcast device; the shooting distance is the distance between the camera and the shooting target, measured by a distance sensor installed inside the live broadcast device; the topic library serves as a comparison library of live broadcast types and is used to identify the type of each live broadcast; the topic library contains topic sets of various live broadcast types, and each topic set contains the category names of different objects under the same topic;
In S2, the specific steps are as follows:
S201, acquiring the real-time video shot by the live broadcast device, decomposing the video into single-frame images with OpenCV, screening all single-frame images, and removing blurred single-frame images using an edge detection method;
S202, analyzing the remaining single-frame images one by one, obtaining the focal length and shooting distance of the live broadcast device camera at the time corresponding to each single-frame image, substituting them into the formula, and calculating the weight ratio R of each single-frame image; taking the centre point of the single-frame image as the centre, the product of the single-frame image length and the weight ratio R as the length, and the product of the single-frame image width and the weight ratio R as the width, dividing a rectangular area as the key region; the weight ratio calculation formula is as follows:
wherein R is the weight ratio, alpha is the focal length influence coefficient, J is the focal length, J_max is the maximum focal length of the device, beta is the shooting distance influence coefficient, a is the distance constant, and P is the shooting distance;
S203, detecting all target objects in each single-frame image with the YOLOv target detection algorithm, assigning identifiers, extracting characteristic information of the target objects, analyzing their type and position, and calculating the number of pixels each occupies; marking target objects located within the key region as key, and marking the other target objects as common;
S204, acquiring the position coordinates (x_i, y_i) of a key target object and the centre point position coordinates (x_z, y_z) of the corresponding single-frame image, substituting the position coordinates and the pixel number of the key target object into the formula to calculate an importance index, wherein each key target object corresponds to one importance index; the calculation formula is as follows:
wherein ZY is the importance index, S_i is the number of pixels of the key target object, S_z is the number of pixels of the key region of the single-frame image, u is the distance influence coefficient, and c is the position constant;
S205, carrying out similarity recognition between target objects in different single-frame images, associating target objects which exceed the similarity threshold and do not belong to the same single-frame image, summing the importance indices of the associated key target objects to obtain a total importance index, treating the associated target objects in different single-frame images as the same target object, and unifying the identifiers of all associated target objects;
In S3, the specific steps are as follows:
S301, taking key target objects with a total importance index greater than the importance index threshold as subject objects, and putting the names of the categories to which all subject objects belong into the theme class set {Q_1, Q_2, Q_3, …, Q_n}, wherein n represents the number of category names and Q_n represents the nth category name;
S302, comparing every category name in the theme class set with every category name in each topic set in the topic library, judging whether a category name in a topic set is the same as one in the theme class set, marking that category name in the corresponding topic set if so, and after all topic sets in the topic library have been compared, ranking the topic sets by the number of marked category names and selecting the first-ranked topic set as the current topic set;
S303, obtaining the category name of each target object and comparing it with every category name in the current topic set; if the same category name exists in the current topic set, setting the association degree of the corresponding target object to strong association; if not, continuing to compare with the category names in all other topic sets in the topic library except the current topic set; if the same category name exists there, setting the association degree of the corresponding target object to weak association, and if not, setting it to no association;
in S4, the specific steps are as follows:
S401, setting a different influence coefficient for each association degree, the influence coefficients decreasing in the order no association, weak association, strong association; acquiring the association degree of every target object, substituting the influence coefficient corresponding to the association degree and the pixel number of the target object into the formula, and calculating the risk index; the formula is as follows:
FX=K×E×S
Wherein FX is a risk index of the target object, K is an influence coefficient, E is a constant, and S is the number of pixels of the target object;
S402, acquiring the number I of callable devices among the distributed devices, ranking the target objects by risk index, selecting the top I target objects as risk objects, assigning one device to each risk object in turn, and having each assigned device monitor the area where its risk object is located;
S403, when the risk object is an illegal object or illegal behaviors are monitored, carrying out pixel-level image processing on the area where the risk object is located on the premise that the overall definition of the single-frame image is not affected, wherein the processing modes comprise mosaic pixelation, a pixel repairing method and a pixel replacing method.
2. A distributed video processing supervision system based on information security, characterized in that: the system comprises a data acquisition module, a data analysis module, a risk identification module and an operation management module;
The data acquisition module is used for acquiring the operation parameters of the live broadcast equipment and the shot real-time video; the data analysis module analyzes the video information frame by frame, divides key areas by combining the operation parameters of the equipment, identifies target objects, calculates important indexes of the target objects, and determines video subjects according to the important indexes; the risk identification module is used for judging the association degree of the target object and the video theme, substituting the association degree into a formula to calculate the risk index of each target object, and finding out the risk object to be monitored according to the risk index; the operation management module adopts a distributed technology to control different devices to monitor the area where each risk object is located, and when the risk object is an illegal object or has illegal behaviors, the image processing is carried out to protect the information security;
the data acquisition module comprises an image information acquisition unit, an equipment information acquisition unit and a theme information acquisition unit;
The image information acquisition unit is used for acquiring real-time video shot by live broadcast equipment; the equipment information acquisition unit is used for acquiring operation parameters of live equipment during working, wherein the operation parameters comprise a focal length and a shooting distance, and the shooting distance refers to the distance between the live equipment and a shooting target; the topic information acquisition unit is used for acquiring a topic library in the system, wherein the topic library comprises topic sets of various live broadcast types, and the topic sets comprise category names of different objects under the same topic;
The data analysis module comprises a region dividing unit, an object recognition unit and a theme analysis unit;
The region dividing unit is used for dividing the key region in the video picture; firstly, decomposing the real-time video shot by the live broadcast device into single-frame images; secondly, acquiring the operation parameters of the live broadcast device at the time corresponding to each single-frame image and substituting them into the formula to calculate the weight ratio of each single-frame image; finally, taking the centre point of the single-frame image as the centre, the product of the single-frame image length and the weight ratio R as the length, and the product of the single-frame image width and the weight ratio R as the width, dividing a rectangular area as the key region, each single-frame image having one key region;
In the weight ratio calculation formula, alpha is the focal length influence coefficient, J is the focal length, J_max is the maximum focal length of the device, beta is the shooting distance influence coefficient, a is the distance constant, and P is the shooting distance;
The object recognition unit is used for recognizing key target objects; detecting the target objects in each single-frame image with the YOLOv target detection algorithm, extracting their characteristic information, analyzing their type and position, calculating the number of pixels each occupies, and taking those located within the key region as key target objects;
The theme analysis unit is used for analyzing the video theme;
Firstly, acquiring the position coordinates of each key target object and the position coordinates of the centre point of the corresponding single-frame image, and substituting the position coordinates and the pixel number of the key target object into the formula to calculate an importance index;
In the importance index calculation formula, S_i is the number of pixels of the key target object, S_z is the number of pixels of the key region of the single-frame image, (x_i, y_i) are the position coordinates of the key target object, (x_z, y_z) are the position coordinates of the centre point of the single-frame image, u is the distance influence coefficient, and c is the position constant;
Secondly, correlating the same target object in different single-frame images, summing important indexes of correlated key target objects to obtain a total important index, and placing names of key target objects with the total important index being greater than an important index threshold value into a theme type set;
Finally, comparing every category name in the theme class set with every category name in each topic set in the topic library, judging whether a category name in a topic set is the same as one in the theme class set, marking that category name in the corresponding topic set if so, and after all topic sets in the topic library have been compared, ranking the topic sets by the number of marked category names and selecting the first-ranked topic set as the current topic set;
The risk identification module comprises a relevance judgment unit and a risk judgment unit;
The relevance judging unit is used for judging the association degree of each target object with the video theme; comparing the category name of each target object with every category name in the current topic set, and judging whether the same category name exists in the current topic set; if so, setting the association degree of the corresponding target object to strong association; if not, continuing to compare with the category names in all other topic sets in the topic library except the current topic set; if the same category name exists there, setting the association degree to weak association, and otherwise setting it to no association;
the risk judging unit is used for calculating a risk index of each target object; setting different influence coefficients for different association degrees, setting the influence coefficients from large to small according to the sequence of no association, weak association and strong association, substituting the influence coefficients and the pixel number corresponding to the association degree of the target object into a formula, and calculating a risk index;
The risk index calculation formula is FX = K × E × S, wherein FX is the risk index of the target object, K is the influence coefficient, E is a constant, and S is the number of pixels of the target object;
The operation management module is used for supervising the risk objects; firstly, acquiring the number I of callable devices among the distributed devices and ranking the target objects by risk index; secondly, selecting the top I target objects as risk objects and assigning one device to each risk object in turn, each assigned device monitoring the area where its risk object is located; finally, when a risk object is monitored to be an illegal object or illegal behavior occurs, carrying out pixel-level image processing on the area where the risk object is located on the premise that the overall definition of the single-frame image is not affected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311435215.1A CN117478838B (en) | 2023-11-01 | 2023-11-01 | Distributed video processing supervision system and method based on information security |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117478838A (en) | 2024-01-30 |
CN117478838B (en) | 2024-05-28 |
Family
ID=89626937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311435215.1A Active CN117478838B (en) | 2023-11-01 | 2023-11-01 | Distributed video processing supervision system and method based on information security |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117478838B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118473084A (en) * | 2024-04-23 | 2024-08-09 | 南京威联达自动化技术有限公司 | Intelligent monitoring system and method for distribution network equipment based on artificial intelligence |
CN118233680B (en) * | 2024-05-22 | 2024-07-26 | 珠海经济特区伟思有限公司 | Intelligent load management system and method based on video data analysis |
CN118631973A (en) * | 2024-08-12 | 2024-09-10 | 深圳天健电子科技有限公司 | Monitoring picture real-time optimization method based on multi-target positioning analysis |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107040795A (en) * | 2017-04-27 | 2017-08-11 | 北京奇虎科技有限公司 | The monitoring method and device of a kind of live video |
CN109033072A (en) * | 2018-06-27 | 2018-12-18 | 广东省新闻出版广电局 | A kind of audiovisual material supervisory systems Internet-based |
CN110012302A (en) * | 2018-01-05 | 2019-07-12 | 阿里巴巴集团控股有限公司 | A kind of network direct broadcasting monitoring method and device, data processing method |
CN110418161A (en) * | 2019-08-02 | 2019-11-05 | 广州虎牙科技有限公司 | Video reviewing method and device, electronic equipment and readable storage medium storing program for executing |
CN112465596A (en) * | 2020-12-01 | 2021-03-09 | 南京翰氜信息科技有限公司 | Image information processing cloud computing platform based on electronic commerce live broadcast |
KR20220079428A (en) * | 2020-12-04 | 2022-06-13 | 삼성전자주식회사 | Method and apparatus for detecting object in video |
CN114745558A (en) * | 2021-01-07 | 2022-07-12 | 北京字节跳动网络技术有限公司 | Live broadcast monitoring method, device, system, equipment and medium |
WO2022148378A1 (en) * | 2021-01-05 | 2022-07-14 | 百果园技术(新加坡)有限公司 | Rule-violating user processing method and apparatus, and electronic device |
CN115019390A (en) * | 2022-05-26 | 2022-09-06 | 北京百度网讯科技有限公司 | Video data processing method and device and electronic equipment |
CN115775363A (en) * | 2022-04-27 | 2023-03-10 | 中国科学院沈阳计算技术研究所有限公司 | Illegal video detection method based on text and video fusion |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010144566A1 (en) * | 2009-06-09 | 2010-12-16 | Wayne State University | Automated video surveillance systems |
2023-11-01: Application CN202311435215.1A filed; granted as CN117478838B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN117478838A (en) | 2024-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117478838B (en) | Distributed video processing supervision system and method based on information security | |
CN107292240B (en) | Person finding method and system based on face and body recognition | |
Han et al. | Fast saliency-aware multi-modality image fusion | |
CN108040230B (en) | Monitoring method and device for protecting privacy | |
EP2580738A1 (en) | Region of interest based video synopsis | |
CN111091098A (en) | Training method and detection method of detection model and related device | |
CN103020275A (en) | Video analysis method based on video abstraction and video retrieval | |
CN104392461B (en) | A kind of video tracing method based on textural characteristics | |
CN112422909B (en) | Video behavior analysis management system based on artificial intelligence | |
Gardella et al. | Noisesniffer: a fully automatic image forgery detector based on noise analysis | |
Li et al. | A patch-based saliency detection method for assessing the visual privacy levels of objects in photos | |
CN114885119A (en) | Intelligent monitoring alarm system and method based on computer vision | |
CN118351487A (en) | Employee violation behavior identification method based on AI | |
Zeeshan et al. | A newly developed ground truth dataset for visual saliency in videos | |
CN103607558A (en) | Video monitoring system, target matching method and apparatus thereof | |
Singleton et al. | Gun identification using tensorflow | |
CN111479168B (en) | Method, device, server and medium for marking multimedia content hot spot | |
CN113947795A (en) | Mask wearing detection method, device, equipment and storage medium | |
CN103425958A (en) | Method for detecting non-movable objects in video | |
CN111708907A (en) | Target person query method, device, equipment and storage medium | |
CN116681568A (en) | 5G network-based information security protection supervision system and method | |
CN115331152B (en) | Fire fighting identification method and system | |
CN115984973A (en) | Human body abnormal behavior monitoring method for peeping-proof screen | |
CN112668357A (en) | Monitoring method and device | |
Babaryka et al. | Technologies for building intelligent video surveillance systems and methods for background subtraction in video sequences |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||