CN117478838B - A distributed video processing supervision system and method based on information security - Google Patents
A distributed video processing supervision system and method based on information security
- Publication number
- CN117478838B (application CN202311435215.1A)
- Authority
- CN
- China
- Prior art keywords
- risk
- target objects
- names
- objects
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Technical Field
The present invention relates to the field of information security technology, and in particular to a distributed video processing supervision system and method based on information security.
Background Art
With the rapid development of the mobile Internet, live video has become a popular media format. People can watch live content anytime and anywhere on mobile phones, computers and other devices. Live video covers many fields, including entertainment, sports, education and news, and has become an important channel for obtaining information and entertainment. However, live video also faces several problems and challenges. First, live video has strict real-time requirements: the on-site video signal must be transmitted to viewers quickly so that they can watch the content in real time. Second, live video usually requires processing such as image enhancement, noise removal and target detection to improve video quality and the viewing experience. In addition, live video may contain sensitive information, such as personal privacy or trade secrets, which must be protected and processed to ensure information security.
At present, information security problems that may arise during live broadcasts are usually handled by manual observation or intelligent algorithm recognition. Manual observation means that a dedicated person watches the live broadcast scene and reminds the operator to adjust, or directly controls, the camera view. This method has clear drawbacks: human reaction takes time, and the live environment changes constantly, so when sensitive information is exposed, people often cannot respond in time. Delaying the broadcast to leave enough time for manual review, on the other hand, harms the real-time nature of the live stream and degrades the viewing experience. Compared with manual observation, intelligent algorithm recognition is more efficient: computer vision techniques analyze and recognize the video in real time to automatically detect and mask sensitive information. This greatly improves detection efficiency and processing speed, but it also has unavoidable shortcomings. For example: 1. It lacks the flexibility of manual observation in identifying sensitive information: the types of sensitive information to be detected cannot be adjusted for different live environments, and screening can only be performed mechanically against an existing comparison library. 2. It lacks sufficiently fine-grained image processing capability: because live video has strict real-time requirements, there is only a short window from recording to processing to publishing, and within such a short time a single device cannot provide the computing power needed for pixel-level image processing. When device performance is poor, even static-image-level processing cannot be completed, so the system can only sample frames or mask fixed regions of the video, which sharply degrades the live viewing experience. Therefore, a more flexible and efficient technical solution for information security detection and processing is needed to solve the above problems.
Summary of the Invention
The purpose of the present invention is to provide a distributed video processing supervision system and method based on information security to solve the problems raised in the background art above.
To solve the above technical problems, the present invention provides the following technical solution: a distributed video processing supervision method based on information security, comprising the following steps:
S1. Collect, in real time, the operating parameters of the live broadcast device and the real-time video it captures, and obtain the latest theme library;
S2. Divide the video frame into regions by analyzing the operating parameters, identify target objects and calculate their importance indexes;
S3. Select subject objects according to the importance indexes and determine the degree of association of each target object;
S4. Use distributed technology to monitor target objects with different degrees of association and perform image processing on them.
In S1, the operating parameters are the focal length and shooting distance of the camera of the live broadcast device. The shooting distance is the distance between the camera and the shooting target, and is collected by a distance sensor installed inside the live broadcast device. The theme library serves as a comparison library of live broadcast types and is used to identify the type of each live broadcast; it contains theme sets for various live broadcast types, and each theme set contains the category names of different items under the same theme.
To improve the coverage of live broadcast type recognition, the theme library should be built with a sufficiently large number of theme sets to meet the recognition needs of different live broadcast types. The item category names in each theme set should also cover, as far as possible, the item categories that may appear during a live broadcast of the corresponding type.
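As an illustration of how such a theme library might be organized, the sketch below uses a plain Python mapping from live broadcast types to sets of item category names; the theme names and item categories shown are hypothetical placeholders, not values defined by the patent.

```python
# Minimal sketch of a theme library: each live-broadcast type maps to the set
# of item category names that may appear in broadcasts of that type.
# All names below are illustrative placeholders.
THEME_LIBRARY = {
    "cooking": {"pan", "knife", "cutting board", "stove", "ingredient"},
    "fitness": {"dumbbell", "yoga mat", "treadmill", "water bottle"},
    "gaming":  {"keyboard", "mouse", "monitor", "headset", "game controller"},
}

def categories_of(theme: str) -> set:
    """Return the category names registered under a theme, or an empty set."""
    return THEME_LIBRARY.get(theme, set())
```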
In S2, the specific steps are as follows:
S201. Obtain the real-time video captured by the live broadcast device, decompose the video into single-frame images using the existing OpenCV library, screen all single-frame images, and remove blurred single-frame images using an edge detection method.
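A minimal sketch of this step, assuming OpenCV's Python bindings, is shown below. The patent only states that an edge detection method removes blurred frames; the variance-of-Laplacian measure and the threshold value used here are assumptions for illustration.

```python
import cv2

def extract_sharp_frames(video_path: str, blur_threshold: float = 100.0):
    """Decompose a video into single-frame images and drop blurred ones.

    Blur is estimated from edge response: the variance of the Laplacian of the
    grayscale frame. The threshold is an assumed, tunable value.
    """
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        if sharpness >= blur_threshold:  # keep only frames with strong edges
            frames.append(frame)
    cap.release()
    return frames
```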
S202. Analyze the remaining single-frame images one by one, obtain the focal length and shooting distance of the live broadcast device camera at the time corresponding to each single-frame image, and substitute them into the weight ratio formula to calculate the weight ratio R of each single-frame image. Taking the center point of the single-frame image as the center, take the product of the image length and the weight ratio R as the length and the product of the image width and the weight ratio R as the width, and divide out a rectangular area as the key region.
In the weight ratio formula, R is the weight ratio, α is the focal length influence coefficient, j is the focal length, Jmax is the maximum focal length of the device, β is the shooting distance influence coefficient, a is the distance constant, and P is the shooting distance.
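The patent text specifies the weight ratio only through these variables, so the functional form in the sketch below (a focal-length term plus a shooting-distance term, clamped to (0, 1]) is an assumption for illustration; only the way R carves a centered rectangle out of the frame as the key region follows the text directly.

```python
def weight_ratio(j, j_max, p, alpha=0.4, beta=0.2, a=10.0):
    """Assumed form of the weight ratio R built from the patent's variables:
    a focal-length term alpha * (j / j_max) plus a distance term
    beta * (a / (a + p)). The exact formula may differ; only the variables match.
    """
    r = alpha * (j / j_max) + beta * (a / (a + p))
    return max(0.0, min(r, 1.0))  # keep R usable as a fraction of the frame size

def key_region(frame_w, frame_h, r):
    """Centered rectangle whose sides are the frame sides scaled by R."""
    w, h = int(frame_w * r), int(frame_h * r)
    return (frame_w - w) // 2, (frame_h - h) // 2, w, h  # x0, y0, width, height
```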
S203. Use the YOLOv3 object detection algorithm to detect all target objects in each single-frame image and assign them identifiers, extract the feature information of the target objects and analyze their categories and positions, calculate the number of pixels each target object occupies, mark target objects located inside the key region as key objects, and mark the other target objects as ordinary.
S204. Obtain the position coordinates (xi, yi) of each key target object and the position coordinates (xz, yz) of the center point of the corresponding single-frame image, and substitute them, together with the key target object's pixel count, into the importance index formula; each key target object corresponds to one importance index. In the importance index formula, ZY is the importance index, Si is the number of pixels of the key target object, Sz is the number of pixels of the key region of the single-frame image, u is the distance influence coefficient, and c is the position constant.
The position coordinates of a target object are the coordinates of the center point of the region it occupies, and its pixel count is the number of pixels in that region; the more pixels, the larger the area that region covers.
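The importance index ZY is likewise specified only through its variables, so the sketch below assumes a simple form consistent with their roles: the pixel share Si/Sz attenuated as the object's center moves away from the frame center. The exact expression in the patent may differ.

```python
import math

def importance_index(xi, yi, xz, yz, s_i, s_z, u=0.0001, c=1.0):
    """Assumed form of ZY: the object's pixel share of the key region,
    attenuated by the distance between the object center (xi, yi) and the
    frame center (xz, yz). u is the distance influence coefficient and c the
    position constant, as named in the text."""
    dist = math.hypot(xi - xz, yi - yz)
    return (s_i / s_z) * (c / (c + u * dist))
```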
S205. Perform similarity recognition between target objects in different single-frame images and associate target objects whose similarity exceeds the similarity threshold and that do not belong to the same single-frame image. The importance indexes of the associated key target objects are summed to obtain a total importance index. The associated target objects are treated as appearances of the same target object in different single-frame images, and the identifiers of all associated target objects are unified.
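One way to realize this cross-frame association is a union-find over detections whose pairwise similarity exceeds the threshold, as sketched below; the similarity function itself is left abstract because the patent does not name one.

```python
def unify_identifiers(detections, similarity, threshold):
    """Merge detections from different frames under shared identifiers.

    `detections` is a list of dicts with at least "frame" and "id" keys plus
    whatever the `similarity(a, b)` callable needs. Detections from different
    frames whose similarity exceeds `threshold` end up with the same id.
    """
    parent = list(range(len(detections)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            if (detections[i]["frame"] != detections[j]["frame"]
                    and similarity(detections[i], detections[j]) > threshold):
                parent[find(i)] = find(j)  # union the two groups

    for k, det in enumerate(detections):
        det["id"] = detections[find(k)]["id"]  # unify identifiers within a group
    return detections
```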
In S3, the specific steps are as follows:
S301. Key target objects whose total importance index exceeds the importance index threshold are taken as subject objects, and the category names of all subject objects are placed into a subject category set {Q1, Q2, Q3, ..., Qn}, where n is the number of category names and Qn is the nth category name.
S302. Compare every category name in the subject category set with every category name in each theme set in the theme library; whenever a category name in a theme set matches a category name in the subject category set, mark that category name in the corresponding theme set. After all theme sets in the theme library have been compared, rank the theme sets by the number of marked category names and select the top-ranked theme set as the current theme set.
S303. Obtain the category names of all target objects and compare them with every category name in the current theme set. If the same category name exists in the current theme set, set the degree of association of the corresponding target object to strong association; otherwise, continue comparing with every category name in the other theme sets in the theme library. If the same category name exists there, set the degree of association of the corresponding target object to weak association; if not, set it to no association.
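A compact sketch of the S301-S303 matching logic follows, reusing the theme-library mapping sketched earlier. The text says only that the top-ranked theme set is selected; the sketch reads this as the set sharing the most category names with the subject set, and the string labels for the association levels are illustrative.

```python
def current_theme(subject_categories, theme_library):
    """Pick the theme set sharing the most category names with the subject set."""
    best_theme, best_hits = None, -1
    for theme, categories in theme_library.items():
        hits = len(categories & subject_categories)
        if hits > best_hits:
            best_theme, best_hits = theme, hits
    return best_theme

def association_degree(category, theme, theme_library):
    """Strong if the category is in the current theme set, weak if it is in any
    other theme set, otherwise no association."""
    if category in theme_library.get(theme, set()):
        return "strong"
    if any(category in cats for name, cats in theme_library.items() if name != theme):
        return "weak"
    return "none"
```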
In S4, the specific steps are as follows:
S401. Set different influence coefficients for different degrees of association; the influence coefficients are set from large to small in the order of no association, weak association and strong association. Obtain the degree of association of every target object, and substitute the influence coefficient corresponding to the target object's degree of association and its pixel count into the formula to calculate the risk index. The formula is as follows:
FX = K × E × S
where FX is the risk index of the target object, K is the influence coefficient, E is a constant, and S is the number of pixels of the target object.
S402. Obtain the number of callable devices I among the distributed devices, rank the target objects by risk index, and select the I highest-ranked target objects as risk objects. Assign a device to each of these risk objects in turn; each assigned device monitors the region where its risk object is located.
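The risk formula FX = K × E × S is given explicitly, so in the sketch below only the example influence-coefficient values and the constant E are assumptions; the top-I selection mirrors S402.

```python
# Assumed example values, ordered from large to small as required:
# no association > weak association > strong association.
INFLUENCE = {"none": 3.0, "weak": 2.0, "strong": 1.0}
E = 1.0  # constant from the formula; its value is not specified in the text

def risk_index(degree: str, pixel_count: int) -> float:
    """FX = K * E * S, with K looked up from the association degree."""
    return INFLUENCE[degree] * E * pixel_count

def assign_devices(objects, device_count):
    """Rank target objects by risk index and give one device to each of the
    top-ranked objects.

    `objects` is an iterable of (object_id, degree, pixel_count) tuples;
    returns a list of (device_index, object_id) pairs.
    """
    ranked = sorted(objects, key=lambda o: risk_index(o[1], o[2]), reverse=True)
    return [(dev, obj_id) for dev, (obj_id, _, _) in enumerate(ranked[:device_count])]
```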
S403. When a risk object is detected to be a prohibited item or a violation occurs, perform pixel-level image processing on the region where the risk object is located without affecting the overall clarity of the single-frame image. The processing methods include mosaic pixelation, pixel repair and pixel replacement.
Mosaic pixelation: apply mosaic processing to the violating region, pixelating it to hide it. Pixel repair: repair the violating part, restoring it to pixels that meet the requirements, to correct it. Pixel replacement: replace the pixels of the violating part with pixels that meet the requirements, to correct it.
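Of the three methods, mosaic pixelation is the most mechanical and is sketched below with OpenCV; the block size is an assumed parameter, and pixel repair or replacement would substitute an inpainting or compositing step at the same point.

```python
import cv2

def pixelate_region(frame, x, y, w, h, block=16):
    """Mosaic the rectangle (x, y, w, h) in place by shrinking and re-enlarging it.

    `block` controls the coarseness of the mosaic and is an assumed parameter.
    """
    roi = frame[y:y + h, x:x + w]
    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    frame[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                         interpolation=cv2.INTER_NEAREST)
    return frame
```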
A distributed video processing supervision system based on information security includes a data acquisition module, a data analysis module, a risk identification module and an operation management module.
The data acquisition module collects the operating parameters of the live broadcast device and the real-time video it captures. The data analysis module analyzes the video frame by frame, divides out the key region and identifies target objects based on the device's operating parameters, calculates the importance index of each target object, and determines the video theme from the importance indexes. The risk identification module determines the degree of association between each target object and the video theme, substitutes the degree of association into the formula to calculate each target object's risk index, and uses the risk indexes to find the risk objects that need to be monitored. The operation management module uses distributed technology to control different devices so that each monitors the region where one risk object is located, and performs image processing when a risk object is a prohibited item or a violation occurs, protecting information security.
The data acquisition module includes an image information acquisition unit, a device information acquisition unit and a theme information acquisition unit.
The image information acquisition unit collects the real-time video captured by the live broadcast device. The device information acquisition unit collects the operating parameters of the live broadcast device while it is working; the operating parameters include the focal length and the shooting distance, and the shooting distance is the distance between the live broadcast device and the shooting target. The theme information acquisition unit collects the theme library in the system; the theme library contains theme sets for various live broadcast types, and each theme set contains the category names of different items under the same theme.
The data analysis module includes a region division unit, an object recognition unit and a theme analysis unit.
The region division unit divides out the key region in the video frame. First, the real-time video captured by the live broadcast device is decomposed into single-frame images. Second, the operating parameters of the live broadcast device at the time corresponding to each single-frame image are obtained and substituted into the formula to calculate the weight ratio of each single-frame image. Finally, taking the center point of the single-frame image as the center, the product of the image length and the weight ratio R is taken as the length and the product of the image width and the weight ratio R as the width, and a rectangular area is divided out as the key region; each single-frame image has one key region.
During a live broadcast, the live subject is usually located in the central region of the frame, and the size of this central region is set in direct proportion to the focal length and in inverse proportion to the shooting distance.
Direct proportion: when the focal length is large, the frame is close to the subject, so the central region should be set larger to reasonably contain the live subject.
Inverse proportion: when the shooting distance is large, the frame is far from the subject, so the central region should be set smaller to reasonably contain the live subject.
The object recognition unit identifies key target objects. The YOLOv3 object detection algorithm is used to detect the target objects in each single-frame image, extract their feature information, analyze their categories and positions, and calculate the number of pixels they occupy; target objects located inside the key region are taken as key target objects.
The theme analysis unit analyzes the video theme.
First, the position coordinates of each key target object and the position coordinates of the center point of the corresponding single-frame image are obtained and substituted, together with the key target object's pixel count, into the formula to calculate the importance index.
Second, the same target object in different single-frame images is associated; the importance indexes of the associated key target objects are summed to obtain the total importance index, and the names of the key target objects whose total importance index exceeds the importance index threshold are placed into the subject category set.
Finally, every category name in the subject category set is compared with every category name in each theme set in the theme library; whenever a category name in a theme set matches a category name in the subject category set, that category name is marked in the corresponding theme set. After all theme sets in the theme library have been compared, the theme sets are ranked by the number of marked category names, and the top-ranked theme set is selected as the current theme set.
The risk identification module includes an association judgment unit and a risk judgment unit.
The association judgment unit determines the degree of association between each target object and the video theme. The category name of the target object is compared with every category name in the current theme set to determine whether the same category name exists there. If so, the degree of association of the corresponding target object is set to strong association; if not, comparison continues with every category name in the other theme sets in the theme library. If the same category name exists there, the degree of association is set to weak association; if not, it is set to no association.
The risk judgment unit calculates the risk index of each target object. Different influence coefficients are set for different degrees of association, from large to small in the order of no association, weak association and strong association, and the influence coefficient and pixel count corresponding to the target object's degree of association are substituted into the formula to calculate the risk index.
A high association influence coefficient indicates that the target object has little to do with the video theme, so the probability of a violation is higher. A larger pixel count means the target object occupies a larger share of the frame, so the impact of a violation is more severe. Considering both the association influence coefficient and the pixel count gives a more complete assessment of the risk a target object may pose.
The operation management module supervises the risk objects. First, the number of callable devices I among the distributed devices is obtained and the target objects are ranked by risk index. Second, the I highest-ranked target objects are selected as risk objects and devices are assigned to them in turn; each assigned device monitors the region where its risk object is located. Finally, when a risk object is detected to be a prohibited item or a violation occurs, pixel-level image processing is performed on the region where the risk object is located without affecting the overall clarity of the single-frame image.
Each device supervises only one risk object, and multiple devices together monitor and process multiple risk objects within one single-frame image.
While the system is running, the real-time video is continuously decomposed into single-frame images; after key-region division and video theme recognition, risk objects are quickly identified, and distributed technology is used so that multiple devices separately supervise the regions where different risk objects are located.
When some risk objects violate the rules, the computing resources of multiple devices are used for fast pixel-level image processing; the processing targets only the regions where the risk objects are located, and after processing, the images of the processed regions are quickly composited back into the original single-frame image.
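A sketch of that final compositing step follows: each device returns only its processed region, and the coordinator pastes those regions back into the original frame. The tuple layout of the returned regions is an assumption.

```python
import numpy as np

def composite(frame: np.ndarray, processed_regions) -> np.ndarray:
    """Paste processed regions back into the original single-frame image.

    `processed_regions` is assumed to be an iterable of (x, y, patch) tuples,
    where `patch` is the pixel-processed sub-image returned by one device.
    """
    out = frame.copy()
    for x, y, patch in processed_regions:
        h, w = patch.shape[:2]
        out[y:y + h, x:x + w] = patch
    return out
```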
Compared with the prior art, the beneficial effects achieved by the present invention are:
1. Rapid identification of the video theme: by dividing out key regions and calculating importance indexes, the present invention quickly finds the subject objects and, through comparison against the theme library, quickly locates the video theme, improving the flexibility and accuracy of information security detection.
2. Accurate location of risk objects: the risk index of a target object is calculated from both the association influence coefficient and the pixel count. The association influence coefficient reflects the probability that the target object will violate the rules, and the pixel count reflects the consequences of such a violation. Target objects with a high association influence coefficient and a large pixel count are monitored first, which improves information security monitoring efficiency and reduces the risk of sensitive information leakage.
3. Pixel-level image processing: distributed technology is used to call multiple devices to supervise each risk object separately. When a violation occurs, the corresponding device only needs to perform pixel-level processing on part of each static image passed to it; the task is simple, consumes little computing power and takes little time. The processed pixel regions are composited into a new single-frame image and sent to the network, improving the level of detail of the image processing without affecting the real-time performance of the live video.
In summary, compared with traditional techniques, the present invention offers rapid video theme identification, accurate location of risk objects and pixel-level image processing, and can improve the flexibility and efficiency of information security monitoring.
Brief Description of the Drawings
The accompanying drawings are provided to aid further understanding of the present invention and form part of the specification. Together with the embodiments of the present invention, they serve to explain the present invention and do not limit it. In the drawings:
FIG. 1 is a schematic flow chart of a distributed video processing supervision method based on information security according to the present invention;
FIG. 2 is a schematic structural diagram of a distributed video processing supervision system based on information security according to the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to FIG. 1, the present invention provides a distributed video processing supervision method based on information security, comprising the following steps:
S1. Collect, in real time, the operating parameters of the live broadcast device and the real-time video it captures, and obtain the latest theme library;
S2. Divide the video frame into regions by analyzing the operating parameters, identify target objects and calculate their importance indexes;
S3. Select subject objects according to the importance indexes and determine the degree of association of each target object;
S4. Use distributed technology to monitor target objects with different degrees of association and perform image processing on them.
In S1, the operating parameters are the focal length and shooting distance of the camera of the live broadcast device. The shooting distance is the distance between the camera and the shooting target, and is collected by a distance sensor installed inside the live broadcast device. The theme library serves as a comparison library of live broadcast types and is used to identify the type of each live broadcast; it contains theme sets for various live broadcast types, and each theme set contains the category names of different items under the same theme.
To improve the coverage of live broadcast type recognition, the theme library should be built with a sufficiently large number of theme sets to meet the recognition needs of different live broadcast types. The item category names in each theme set should also cover, as far as possible, the item categories that may appear during a live broadcast of the corresponding type.
In S2, the specific steps are as follows:
S201. Obtain the real-time video captured by the live broadcast device, decompose the video into single-frame images using the existing OpenCV library, screen all single-frame images, and remove blurred single-frame images using an edge detection method.
S202. Analyze the remaining single-frame images one by one, obtain the focal length and shooting distance of the live broadcast device camera at the time corresponding to each single-frame image, and substitute them into the weight ratio formula to calculate the weight ratio R of each single-frame image. Taking the center point of the single-frame image as the center, take the product of the image length and the weight ratio R as the length and the product of the image width and the weight ratio R as the width, and divide out a rectangular area as the key region. In the weight ratio formula, R is the weight ratio, α is the focal length influence coefficient, j is the focal length, Jmax is the maximum focal length of the device, β is the shooting distance influence coefficient, a is the distance constant, and P is the shooting distance.
S203. Use the YOLOv3 object detection algorithm to detect all target objects in each single-frame image and assign them identifiers, extract the feature information of the target objects and analyze their categories and positions, calculate the number of pixels each target object occupies, mark target objects located inside the key region as key objects, and mark the other target objects as ordinary.
S204. Obtain the position coordinates (xi, yi) of each key target object and the position coordinates (xz, yz) of the center point of the corresponding single-frame image, and substitute them, together with the key target object's pixel count, into the importance index formula; each key target object corresponds to one importance index. In the importance index formula, ZY is the importance index, Si is the number of pixels of the key target object, Sz is the number of pixels of the key region of the single-frame image, u is the distance influence coefficient, and c is the position constant.
The position coordinates of a target object are the coordinates of the center point of the region it occupies, and its pixel count is the number of pixels in that region; the more pixels, the larger the area that region covers.
S205. Perform similarity recognition between target objects in different single-frame images and associate target objects whose similarity exceeds the similarity threshold and that do not belong to the same single-frame image. The importance indexes of the associated key target objects are summed to obtain a total importance index. The associated target objects are treated as appearances of the same target object in different single-frame images, and the identifiers of all associated target objects are unified.
In S3, the specific steps are as follows:
S301. Key target objects whose total importance index exceeds the importance index threshold are taken as subject objects, and the category names of all subject objects are placed into a subject category set {Q1, Q2, Q3, ..., Qn}, where n is the number of category names and Qn is the nth category name.
S302. Compare every category name in the subject category set with every category name in each theme set in the theme library; whenever a category name in a theme set matches a category name in the subject category set, mark that category name in the corresponding theme set. After all theme sets in the theme library have been compared, rank the theme sets by the number of marked category names and select the top-ranked theme set as the current theme set.
S303. Obtain the category names of all target objects and compare them with every category name in the current theme set. If the same category name exists in the current theme set, set the degree of association of the corresponding target object to strong association; otherwise, continue comparing with every category name in the other theme sets in the theme library. If the same category name exists there, set the degree of association of the corresponding target object to weak association; if not, set it to no association.
In S4, the specific steps are as follows:
S401. Set different influence coefficients for different degrees of association; the influence coefficients are set from large to small in the order of no association, weak association and strong association. Obtain the degree of association of every target object, and substitute the influence coefficient corresponding to the target object's degree of association and its pixel count into the formula to calculate the risk index. The formula is as follows:
FX = K × E × S
where FX is the risk index of the target object, K is the influence coefficient, E is a constant, and S is the number of pixels of the target object.
S402. Obtain the number of callable devices I among the distributed devices, rank the target objects by risk index, and select the I highest-ranked target objects as risk objects. Assign a device to each of these risk objects in turn; each assigned device monitors the region where its risk object is located.
S403. When a risk object is detected to be a prohibited item or a violation occurs, perform pixel-level image processing on the region where the risk object is located without affecting the overall clarity of the single-frame image. The processing methods include mosaic pixelation, pixel repair and pixel replacement.
Mosaic pixelation: apply mosaic processing to the violating region, pixelating it to hide it. Pixel repair: repair the violating part, restoring it to pixels that meet the requirements, to correct it. Pixel replacement: replace the pixels of the violating part with pixels that meet the requirements, to correct it.
Referring to FIG. 2, the present invention provides a distributed video processing supervision system based on information security, including a data acquisition module, a data analysis module, a risk identification module and an operation management module.
The data acquisition module collects the operating parameters of the live broadcast device and the real-time video it captures. The data analysis module analyzes the video frame by frame, divides out the key region and identifies target objects based on the device's operating parameters, calculates the importance index of each target object, and determines the video theme from the importance indexes. The risk identification module determines the degree of association between each target object and the video theme, substitutes the degree of association into the formula to calculate each target object's risk index, and uses the risk indexes to find the risk objects that need to be monitored. The operation management module uses distributed technology to control different devices so that each monitors the region where one risk object is located, and performs image processing when a risk object is a prohibited item or a violation occurs, protecting information security.
The data acquisition module includes an image information acquisition unit, a device information acquisition unit and a theme information acquisition unit.
The image information acquisition unit collects the real-time video captured by the live broadcast device. The device information acquisition unit collects the operating parameters of the live broadcast device while it is working; the operating parameters include the focal length and the shooting distance, and the shooting distance is the distance between the live broadcast device and the shooting target. The theme information acquisition unit collects the theme library in the system; the theme library contains theme sets for various live broadcast types, and each theme set contains the category names of different items under the same theme.
The data analysis module includes a region division unit, an object recognition unit and a theme analysis unit.
The region division unit divides out the key region in the video frame. First, the real-time video captured by the live broadcast device is decomposed into single-frame images. Second, the operating parameters of the live broadcast device at the time corresponding to each single-frame image are obtained and substituted into the formula to calculate the weight ratio of each single-frame image. Finally, taking the center point of the single-frame image as the center, the product of the image length and the weight ratio R is taken as the length and the product of the image width and the weight ratio R as the width, and a rectangular area is divided out as the key region; each single-frame image has one key region.
During a live broadcast, the live subject is usually located in the central region of the frame, and the size of this central region is set in direct proportion to the focal length and in inverse proportion to the shooting distance.
Direct proportion: when the focal length is large, the frame is close to the subject, so the central region should be set larger to reasonably contain the live subject.
Inverse proportion: when the shooting distance is large, the frame is far from the subject, so the central region should be set smaller to reasonably contain the live subject.
The object recognition unit identifies key target objects. The YOLOv3 object detection algorithm is used to detect the target objects in each single-frame image, extract their feature information, analyze their categories and positions, and calculate the number of pixels they occupy; target objects located inside the key region are taken as key target objects.
The theme analysis unit analyzes the video theme.
First, the position coordinates of each key target object and the position coordinates of the center point of the corresponding single-frame image are obtained and substituted, together with the key target object's pixel count, into the formula to calculate the importance index.
Second, the same target object in different single-frame images is associated; the importance indexes of the associated key target objects are summed to obtain the total importance index, and the names of the key target objects whose total importance index exceeds the importance index threshold are placed into the subject category set.
Finally, every category name in the subject category set is compared with every category name in each theme set in the theme library; whenever a category name in a theme set matches a category name in the subject category set, that category name is marked in the corresponding theme set. After all theme sets in the theme library have been compared, the theme sets are ranked by the number of marked category names, and the top-ranked theme set is selected as the current theme set.
The risk identification module includes an association judgment unit and a risk judgment unit.
The association judgment unit determines the degree of association between each target object and the video theme. The category name of the target object is compared with every category name in the current theme set to determine whether the same category name exists there. If so, the degree of association of the corresponding target object is set to strong association; if not, comparison continues with every category name in the other theme sets in the theme library. If the same category name exists there, the degree of association is set to weak association; if not, it is set to no association.
The risk judgment unit calculates the risk index of each target object. Different influence coefficients are set for different degrees of association, from large to small in the order of no association, weak association and strong association, and the influence coefficient and pixel count corresponding to the target object's degree of association are substituted into the formula to calculate the risk index.
A high association influence coefficient indicates that the target object has little to do with the video theme, so the probability of a violation is higher. A larger pixel count means the target object occupies a larger share of the frame, so the impact of a violation is more severe. Considering both the association influence coefficient and the pixel count gives a more complete assessment of the risk a target object may pose.
The operation management module supervises the risk objects. First, the number of callable devices I among the distributed devices is obtained and the target objects are ranked by risk index. Second, the I highest-ranked target objects are selected as risk objects and devices are assigned to them in turn; each assigned device monitors the region where its risk object is located. Finally, when a risk object is detected to be a prohibited item or a violation occurs, pixel-level image processing is performed on the region where the risk object is located without affecting the overall clarity of the single-frame image.
Each device supervises only one risk object, and multiple devices together monitor and process multiple risk objects within one single-frame image.
While the system is running, the real-time video is continuously decomposed into single-frame images; after key-region division and video theme recognition, risk objects are quickly identified, and distributed technology is used so that multiple devices separately supervise the regions where different risk objects are located.
When some risk objects violate the rules, the computing resources of multiple devices are used for fast pixel-level image processing; the processing targets only the regions where the risk objects are located, and after processing, the images of the processed regions are quickly composited back into the original single-frame image.
Embodiment 1:
Assume that, at the time corresponding to a certain single-frame image, the focal length of the live broadcast device camera is 1.5 mm, the shooting distance is 2 m, the focal length influence coefficient is 0.4, the maximum focal length of the camera is 3 mm, the shooting distance influence coefficient is 0.2, and the distance constant is 10. Substituting these values into the weight ratio formula gives the weight ratio of this single-frame image.
Embodiment 2:
Assume that the position coordinates of a key target object are (250, 500) and its pixel count is 1200; the position coordinates of the center point of the corresponding single-frame image are (500, 500) and the pixel count of the key region is 25000; the distance influence coefficient is 0.0001 and the position constant is 1. Substituting these values into the importance index formula gives the importance index of this key target object.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or device.
Finally, it should be noted that the above are only preferred embodiments of the present invention and are not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of the technical features therein. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311435215.1A CN117478838B (en) | 2023-11-01 | 2023-11-01 | A distributed video processing supervision system and method based on information security |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117478838A CN117478838A (en) | 2024-01-30 |
CN117478838B true CN117478838B (en) | 2024-05-28 |
Family
ID=89626937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311435215.1A Active CN117478838B (en) | 2023-11-01 | 2023-11-01 | A distributed video processing supervision system and method based on information security |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117478838B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118473084A (en) * | 2024-04-23 | 2024-08-09 | 南京威联达自动化技术有限公司 | Intelligent monitoring system and method for distribution network equipment based on artificial intelligence |
CN118233680B (en) * | 2024-05-22 | 2024-07-26 | 珠海经济特区伟思有限公司 | Intelligent load management system and method based on video data analysis |
CN118631973B (en) * | 2024-08-12 | 2024-11-22 | 深圳天健电子科技有限公司 | Monitoring picture real-time optimization method based on multi-target positioning analysis |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107040795A (en) * | 2017-04-27 | 2017-08-11 | 北京奇虎科技有限公司 | The monitoring method and device of a kind of live video |
CN109033072A (en) * | 2018-06-27 | 2018-12-18 | 广东省新闻出版广电局 | A kind of audiovisual material supervisory systems Internet-based |
CN110012302A (en) * | 2018-01-05 | 2019-07-12 | 阿里巴巴集团控股有限公司 | A kind of network direct broadcasting monitoring method and device, data processing method |
CN110418161A (en) * | 2019-08-02 | 2019-11-05 | 广州虎牙科技有限公司 | Video reviewing method and device, electronic equipment and readable storage medium storing program for executing |
CN112465596A (en) * | 2020-12-01 | 2021-03-09 | 南京翰氜信息科技有限公司 | Image information processing cloud computing platform based on electronic commerce live broadcast |
KR20220079428A (en) * | 2020-12-04 | 2022-06-13 | 삼성전자주식회사 | Method and apparatus for detecting object in video |
CN114745558A (en) * | 2021-01-07 | 2022-07-12 | 北京字节跳动网络技术有限公司 | Live broadcast monitoring method, device, system, equipment and medium |
WO2022148378A1 (en) * | 2021-01-05 | 2022-07-14 | 百果园技术(新加坡)有限公司 | Rule-violating user processing method and apparatus, and electronic device |
CN115019390A (en) * | 2022-05-26 | 2022-09-06 | 北京百度网讯科技有限公司 | Video data processing method and device and electronic equipment |
CN115775363A (en) * | 2022-04-27 | 2023-03-10 | 中国科学院沈阳计算技术研究所有限公司 | Illegal video detection method based on text and video fusion |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010144566A1 (en) * | 2009-06-09 | 2010-12-16 | Wayne State University | Automated video surveillance systems |
Also Published As
Publication number | Publication date |
---|---|
CN117478838A (en) | 2024-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117478838B (en) | A distributed video processing supervision system and method based on information security | |
US11380232B2 (en) | Display screen quality detection method, apparatus, electronic device and storage medium | |
CN107292240B (en) | Person finding method and system based on face and body recognition | |
US20200005468A1 (en) | Method and system of event-driven object segmentation for image processing | |
CN110225299B (en) | Video monitoring method and device, computer equipment and storage medium | |
CN107016322B (en) | Method and device for analyzing followed person | |
EP2580738A1 (en) | Region of interest based video synopsis | |
CN111462155B (en) | Motion detection method, device, computer equipment and storage medium | |
CN108764181B (en) | Passenger flow statistical method and device and computer readable storage medium | |
CN109145771A (en) | A kind of face snap method and device | |
CN113660484B (en) | Audio and video attribute comparison method, system, terminal and medium based on audio and video content | |
CN110619308A (en) | Aisle sundry detection method, device, system and equipment | |
CN111476160A (en) | Loss function optimization method, model training method, target detection method, and medium | |
WO2022142414A1 (en) | High-rise littering monitoring method and apparatus, electronic device, and storage medium | |
CN113132695A (en) | Lens shadow correction method and device and electronic equipment | |
CN111708907B (en) | Target person query method, device, equipment and storage medium | |
EP4272458A1 (en) | Method and electronic device for capturing media using under display camera | |
WO2021049855A1 (en) | Method and electronic device for capturing roi | |
CN111479168B (en) | Method, device, server and medium for marking multimedia content hot spot | |
CN117809221A (en) | Object detection method and device, electronic equipment and storage medium | |
CN112528854A (en) | Vibration object monitoring method and device, computer equipment and storage medium | |
WO2022012573A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
CN112907206B (en) | Business auditing method, device and equipment based on video object identification | |
CN114863337A (en) | Novel screen anti-photographing recognition method | |
CN111553408B (en) | Automatic test method for video recognition software |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |
PE01 | Entry into force of the registration of the contract for pledge of patent right | | Denomination of invention: A distributed video processing supervision system and method based on information security; Granted publication date: 2024-05-28; Pledgee: Zhuhai China Resources Bank Co.,Ltd. Zhuhai Branch; Pledgor: ZHUHAI VICTORY IDEA Co.,Ltd.; Registration number: Y2024980045055 |
Denomination of invention: A distributed video processing supervision system and method based on information security Granted publication date: 20240528 Pledgee: Zhuhai China Resources Bank Co.,Ltd. Zhuhai Branch Pledgor: ZHUHAI VICTORY IDEA Co.,Ltd. Registration number: Y2024980045055 |