CN112633313A - Bad information identification method of network terminal and local area network terminal equipment

Info

Publication number
CN112633313A
Authority
CN
China
Prior art keywords: image, network terminal, information, training, bad
Prior art date
Legal status: Granted
Application number
CN202011092717.5A
Other languages
Chinese (zh)
Other versions
CN112633313B (en)
Inventor
Zhang Leping (张乐平)
Zhang Bo (张博)
Hou Lei (侯磊)
Li Haifeng (李海峰)
Current Assignee
Beijing Deepctrl Co ltd
Original Assignee
Beijing Deepctrl Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Deepctrl Co ltd
Priority to CN202011092717.5A
Publication of CN112633313A
Application granted
Publication of CN112633313B
Legal status: Active
Anticipated expiration

Classifications

    • G06F18/24 Classification techniques (G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING; G06F18/00 Pattern recognition; G06F18/20 Analysing)
    • G06N3/045 Combinations of networks (G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/084 Backpropagation, e.g. using gradient descent (G06N3/08 Learning methods)
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds (G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V10/20 Image preprocessing; G06V10/26 Segmentation of patterns; detection of occlusion)
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components (G06V10/40 Extraction of image or video features)
    • H04L63/30 Network architectures or network communication protocols for network security supporting lawful interception, monitoring or retaining of communications (H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04L TRANSMISSION OF DIGITAL INFORMATION; H04L63/00 Network security)
    • H04L67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP] (H04L67/00 Network arrangements or protocols for supporting network services or applications; H04L67/01 Protocols)
    • H04L67/125 Protocols for proprietary or special-purpose networking environments involving control of end-device applications over a network (H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments)


Abstract

Embodiments of the invention disclose a bad information identification method for a network terminal, and a local area network terminal device. The method is executed by a local area network terminal device connected to the network terminal over a local area network and comprises the following steps: receiving screenshot information periodically sent by the network terminal and taking it as the image to be identified; extracting edge gradient information from the image to be identified; extracting the image areas whose edge gradient information falls within an edge threshold range and/or matches preset shape features, to serve as sub-image information to be analyzed; inputting the image to be identified and/or the sub-image information to be analyzed into a deep learning model for analysis and obtaining an analysis result; and, if the analysis result is that the current picture belongs to the bad category, sending a turn-off instruction to the network terminal and controlling it to stop displaying the current picture. The displayed content of the network terminal can thus be screened for bad information, pictures containing bad information can be intercepted in time, and user experience is improved.

Description

Bad information identification method of network terminal and local area network terminal equipment
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a bad information identification method of a network terminal and local area network terminal equipment.
Background
For bad information displayed on network terminals, existing protection relies mainly on personnel management systems, network security measures and the control mechanisms of professional broadcast-control software; the key target of these controls is 'behavior'.
(1) Personnel management: management rules are formulated to clarify responsibility and reduce incidents.
(2) Network security protection: this can partly prevent bad content being displayed as a result of network attacks.
(3) Professional broadcast-control software: the information publishing software (generally deployed on an information publishing server) carries several security control mechanisms, which can prevent bad information being published through the broadcast-control software itself.
The main problems are: low efficiency and poor long-term operability; once a hacker breaks into the broadcast-control software through a backdoor, its security mechanisms can be defeated; and content can be pushed to a network terminal without the publishing software at all (screen mirroring and the like), so bad information can still be published.
Disclosure of Invention
In view of the above, the present invention proposes a bad information identification method for a network terminal, and a local area network terminal device, in order to overcome the above problems or at least partially solve them.
According to a first aspect of the embodiments of the present invention, there is provided a method for identifying bad information of a network terminal, which is applied to a local area network terminal device connected to the network terminal through a local area network, the method including:
receiving screenshot information periodically sent by a network terminal, and taking the screenshot information as an image to be identified;
extracting edge gradient information of the image to be identified;
extracting an image area corresponding to edge gradient information which accords with an edge threshold range and/or a preset shape characteristic to serve as sub-image information to be analyzed;
inputting the image to be recognized and/or the sub-image information to be analyzed into a deep learning model for analysis, and obtaining an analysis result, wherein the analysis result is whether the image belongs to a bad category, and the deep learning model is obtained by a convolutional neural network through learning training by utilizing a sample image;
and if the analysis result is of a bad category, sending a turn-off instruction to the network terminal, and controlling the network terminal to stop displaying the current picture.
Further, the receiving screenshot information periodically sent by the network terminal, and taking the screenshot information as an image to be identified specifically includes:
carrying out multi-scale zooming on the screenshot information to obtain a plurality of zoomed images;
padding the zoomed images with blanks so that each has the same size as the screenshot information, and taking the padded zoomed images together with the screenshot information as the images to be identified, of which there are accordingly a plurality;
the deep learning model outputs an analysis result for each image to be recognized;
and judging the proportion of bad-category results among the analysis results; if the proportion exceeds a predetermined ratio threshold, determining that the screenshot information contains bad information, sending a turn-off instruction to the corresponding network terminal, and controlling the network terminal to stop displaying.
Further, before the image to be recognized and/or the sub-image information to be analyzed are input into a deep learning model for analysis and an analysis result is obtained, the method further includes:
acquiring a predetermined number of sample images;
and adding a mark type to the sample image and inputting the sample image into a convolutional neural network for training and recognition; if the training recognition result differs from the mark type, adjusting the parameters of the convolutional neural network until the training recognition result is the same as the mark type, and then inputting the next sample image into the convolutional neural network for training and recognition; and taking the convolutional neural network trained on all the sample images as the deep learning model.
Further, after adding a label type to the sample image, inputting the sample image to a convolutional neural network for training and recognition, if a training recognition result is different from the label type, adjusting parameters of the convolutional neural network until the training recognition result is the same as the label type, then inputting the next sample image to the convolutional neural network for training and recognition, and taking the convolutional neural network after all sample images are trained as a deep learning model, which specifically comprises:
adding a target area marker to the sample image;
inputting the sample image into a first convolution neural network for training, if the target area obtained by training is different from the target area label, adjusting the parameters of the first convolution neural network until the target area obtained by training is the same as the target area label, and then inputting the next sample image into the first convolution neural network for training;
taking the trained first convolution neural network as a target area detection model;
acquiring a predetermined number of target area image samples, and adding a corresponding classification label to each target area image sample to be used as a classification training sample;
inputting the classification training sample into a second convolutional neural network for training, if the classification result output by training is different from the classification label, adjusting the parameter of the second convolutional neural network until the classification result output by training is the same as the classification label, and then inputting the next classification training sample into the second convolutional neural network for training;
adding a judgment layer for judging whether the output class belongs to the bad class or not on the output layer of the trained second convolutional neural network, and taking the added second convolutional neural network as a target classification model;
and combining the target region detection model and the target classification model to serve as a deep learning model.
Further, if the analysis result is of a bad category, a turn-off instruction is sent to the network terminal to control the network terminal to stop displaying the current picture, specifically:
and if the analysis result is of a bad category, sending a turn-off instruction to the network terminal, integrating a pre-stored preset image with the turn-off instruction and sending it to the network terminal, and controlling the network terminal to stop displaying the current picture while displaying the preset image.
Further, if the analysis result is of a bad category, sending a turn-off instruction to the network terminal to control the network terminal to stop displaying the current picture specifically includes:
if the analysis result belongs to the bad category, adding 1 to the corresponding accumulated count;
and when the accumulated count exceeds a preset threshold, sending a turn-off instruction to the network terminal and controlling the network terminal to stop displaying.
Further, the extracting edge gradient information of the image to be recognized specifically includes:
carrying out gray level processing on the image to be identified;
extracting gradient information in the horizontal direction and the vertical direction in the image to be identified by utilizing a gradient operator and an image convolution algorithm;
searching image edges from the gradient information in the horizontal direction and the vertical direction by using a gradient local non-maximum suppression algorithm, and taking the searched image edges as edge gradient information;
then, extracting an image region corresponding to the edge gradient information that meets the edge threshold range and/or meets the preset shape feature as sub-image information to be analyzed specifically includes:
and taking the image area corresponding to the edge gradient information which can be combined into a regular rectangular area as the sub-image information to be analyzed.
According to a second aspect of the embodiments of the present invention, there is provided a local area network terminal device connected to a network terminal through a local area network, including:
the receiving module is used for receiving screenshot information sent by a network terminal periodically and taking the screenshot information as an image to be identified;
the extraction module is used for extracting the edge gradient information of the image to be identified; extracting an image area corresponding to edge gradient information which accords with an edge threshold range and/or a preset shape characteristic to serve as sub-image information to be analyzed;
the analysis module is used for inputting the image to be recognized and/or the sub-image information to be analyzed into a deep learning model for analysis and obtaining an analysis result, wherein the analysis result is whether the image belongs to a bad category, and the deep learning model is obtained by a convolutional neural network through learning training by utilizing a sample image;
and the control module is used for sending a turn-off instruction to the network terminal if the analysis result belongs to the bad category, and controlling the network terminal to stop displaying.
According to a third aspect of embodiments of the present invention, there is provided a computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the step of the bad information identification method of the network terminal according to the first aspect when executing the computer program.
According to a fourth aspect of embodiments of the present invention, there is provided a computer storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of the bad information identification method of the network terminal according to the first aspect.
The bad information identification method for a network terminal and the local area network terminal device provided by the embodiments of the invention have the following beneficial effects:
Through this technical scheme, the screenshot information periodically sent by the network terminal over the local area network can be analyzed and judged with the deep learning model to determine whether the screenshot contains bad content; if so, a turn-off instruction is sent to the network terminal, which stops displaying the current page. Every network terminal within the local area network can thus be managed and controlled, preventing bad information from affecting users. In addition, since the deep learning model is obtained by training a convolutional neural network, it can keep learning, and continued learning steadily improves its recognition accuracy.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
The invention will be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of a bad information identification method of a network terminal according to an embodiment of the present invention;
fig. 2 is a block diagram of a structure of a part of a local area network terminal device according to an embodiment of the present invention;
fig. 3 is a schematic connection diagram of a local area network terminal device and a terminal device according to an embodiment of the present invention;
fig. 4 is a schematic diagram of information transmission of a terminal device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of information transmission of a local area network terminal device according to an embodiment of the present invention;
FIG. 6 is an image processed in an embodiment of the invention;
FIG. 7 is an analysis diagram of the deep learning model of the present invention during processing;
fig. 8 is a block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
As shown in fig. 1, a method for identifying bad information of a network terminal is applied to a local area network terminal device connected with the network terminal through a local area network, and includes the steps of:
Step 101, receiving screenshot information periodically sent by a network terminal, and taking the screenshot information as the image to be identified.
In this step, one local area network terminal device may be connected to a plurality of network terminals at the same time. After a network terminal generates a screenshot, it therefore encodes the pixel data of the screenshot into the corresponding screenshot information and adds its own identification code to that information, which prevents the identities of the different network terminals from being confused.
The network terminal and the local area network terminal device can be connected by wire or wirelessly.
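For illustration only, the following Python sketch shows one way the local area network terminal device could decode such a message; the JSON wire format and the field names `terminal_id` and `image_b64` are assumptions of this sketch, not something the patent specifies.

```python
import base64
import json

import cv2
import numpy as np


def decode_screenshot_message(raw: bytes):
    """Decode one screenshot message from a network terminal.

    Assumed wire format: a JSON object carrying the terminal's
    identification code and the screenshot as base64-encoded PNG.
    """
    msg = json.loads(raw)
    terminal_id = msg["terminal_id"]  # identification code of the sending terminal
    png_bytes = base64.b64decode(msg["image_b64"])
    # Decode the pixel data into a BGR array: the "image to be identified".
    image = cv2.imdecode(np.frombuffer(png_bytes, np.uint8), cv2.IMREAD_COLOR)
    return terminal_id, image
```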
Step 102, extracting edge gradient information of the image to be identified.
In this step, an image to be identified may contain a plurality of foreground objects. Edges are traced around each foreground object, giving several candidate gradients per object; among these, the edge frame with the smallest area that still completely covers the foreground object is selected and used as the edge gradient information of the image to be identified.
Step 103, extracting an image area corresponding to the edge gradient information that meets the edge threshold range and/or the preset shape features, to serve as the sub-image information to be analyzed.
In this step, the edge gradient information is compared against the edge threshold range, and foreground objects that fall outside the range are discarded so that they do not degrade recognition. The shape of the object inside the image area corresponding to the edge gradient information is also compared with preset shape features (for example, human body shapes, gun shapes and other shapes with a bad tendency), and the foreground objects matching the preset shape features are extracted. Foreground objects that clearly cannot be objectionable (for example, cloud or tree shapes) are thus discarded and do not affect the recognition rate.
The image areas (that is, foreground objects) whose edge gradient information meets the requirements are extracted, so each piece of screenshot information may yield one or more pieces of sub-image information to be analyzed. If a screenshot contains no qualifying sub-image (for example, a landscape picture), it is directly judged a normal image, no action is taken on the network terminal, and the next screenshot is analyzed.
Step 104, inputting the image to be identified and/or the sub-image information to be analyzed into a deep learning model for analysis and obtaining an analysis result, where the analysis result is whether the image belongs to the bad category, and the deep learning model is obtained by training a convolutional neural network on sample images.
Step 105, if the analysis result is that the current picture belongs to the bad category, sending a turn-off instruction to the network terminal and controlling the network terminal to stop displaying the current picture.
In the above steps, if the analysis result is the bad category, the identification code of the corresponding network terminal is obtained and the turn-off instruction is sent to the network terminal matching that code, which then stops playing the current content.
Recognition results judged by the deep learning model to belong to the bad category are recorded and stored. A designer can review the records periodically; if a misidentification is found, the corresponding screenshot information is extracted and used for further training of the deep learning model, so that the model learns the correct result for that kind of screenshot.
In addition, if the deep learning model outputs the normal category a preset number of consecutive times (for example, 3 times), an open instruction is sent to the network terminal and it is controlled to resume normal display.
Through this technical scheme, the screenshot information periodically sent by the network terminal over the local area network can be analyzed and judged with the deep learning model to determine whether the screenshot contains bad content; if so, a turn-off instruction is sent to the network terminal, which stops displaying the current page. Every network terminal within the local area network can thus be managed and controlled, preventing bad information from affecting users. In addition, since the deep learning model is obtained by training a convolutional neural network, it can keep learning, and continued learning steadily improves its recognition accuracy.
The scheme of this embodiment can be applied to the local area network of a school, a company or a home.
In a specific embodiment, step 101 specifically includes:
and step 1011, carrying out multi-scale zooming on the screenshot information to obtain a plurality of zoomed images.
And 1012, performing blank filling on the multiple zoomed images, filling the zoomed images into images with the same size as the screenshot information, and taking the zoomed images and the screenshot information after filling as the images to be identified, wherein the images to be identified are multiple.
In the above steps, the scale factors and the number of scales can be set by the user according to actual needs. When blank-padding a zoomed image, it may be placed at the center, upper left, lower left, upper right or lower right of the canvas; placing it at the center is preferred.
The blank padding may use a white, black or other-colored background; in this scheme a white background is preferred.
After steps 102 to 104, the deep learning model outputs an analysis result for each image to be identified.
Correspondingly, step 105 is specifically: judging the proportion of bad-category results among the analysis results; if the proportion exceeds a predetermined ratio threshold, determining that the screenshot information contains bad information, sending a turn-off instruction to the corresponding network terminal, and controlling the network terminal to stop displaying.
In the above step, because there are several images to be identified, several analysis results are output, and they may agree or differ. To avoid control errors, the user sets a predetermined ratio threshold according to actual needs: the screenshot is judged to contain bad content, and the network terminal needs to be turned off, only when the proportion of bad-category results exceeds that threshold.
With this scheme, multi-scale zooming of the screenshot information applies different viewing angles to the same picture, avoiding the situation where an object that is too large or too small at one fixed resolution is recognized poorly. Several analysis results are obtained, from which it can be determined more reliably whether the screenshot contains bad content, further improving recognition precision.
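As a minimal sketch of the multi-scale zooming, centered white padding and proportion-threshold decision just described (the scale factors and the 0.5 ratio threshold are illustrative assumptions):

```python
import cv2
import numpy as np

SCALES = (1.0, 0.75, 0.5, 0.25)  # assumed scale factors; user-configurable
RATIO_THRESHOLD = 0.5            # assumed predetermined ratio threshold


def multiscale_images(screenshot: np.ndarray, scales=SCALES):
    """Zoom the screenshot at several scales, then pad each result with a
    white background (placed at the center) back to the original size."""
    h, w = screenshot.shape[:2]
    images = []
    for s in scales:
        scaled = cv2.resize(screenshot, (max(1, int(w * s)), max(1, int(h * s))))
        canvas = np.full((h, w, 3), 255, dtype=np.uint8)  # white blank filling
        sh, sw = scaled.shape[:2]
        y0, x0 = (h - sh) // 2, (w - sw) // 2             # center placement
        canvas[y0:y0 + sh, x0:x0 + sw] = scaled
        images.append(canvas)
    return images


def contains_bad_content(results, threshold=RATIO_THRESHOLD):
    """One boolean analysis result per image to be identified; report bad
    content only when the proportion of bad results exceeds the threshold."""
    return sum(results) / len(results) > threshold
```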
In a specific embodiment, before step 104, the method further includes:
step 1041, acquiring a predetermined number of sample images.
Step 1042, adding a mark type to the sample image and inputting it into a convolutional neural network for training and recognition; if the training recognition result differs from the mark type, adjusting the parameters of the convolutional neural network until the training recognition result equals the mark type; then inputting the next sample image into the convolutional neural network for training and recognition; and taking the convolutional neural network trained on all the sample images as the deep learning model.
In the above scheme, some of the acquired sample images contain bad content and some are normal; each is labeled with its category, and after shuffling they are input into a convolutional neural network (CNN) for training. The network's parameters are adjusted continually during training, so its recognition precision keeps improving. The trained network is built as a fully convolutional network (FCN), which directly outputs full-image pixel-level predictions, simplifying the processing flow and increasing computation speed.
The convolutional neural network contains a plurality of hidden layers: the first hidden layer processes the input image and passes its result to the next hidden layer, and so on, until the last hidden layer produces the prediction, which is emitted through the output layer. The number of hidden layers is set according to the complexity of the recognition task.
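The per-sample "adjust the parameters until the training recognition result equals the mark type" procedure can be sketched in PyTorch as follows; the optimizer, learning rate and step cap (added so one hard sample cannot loop forever) are assumptions of this sketch rather than details given in the patent.

```python
import torch
import torch.nn as nn


def train_until_correct(model: nn.Module, samples, max_steps_per_sample=100):
    """`samples` yields (image_tensor, label_index) pairs, where the label
    index encodes the mark type added to the sample image."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    for image, label in samples:
        batch = image.unsqueeze(0)                    # add a batch dimension
        target = torch.tensor([label])
        for _ in range(max_steps_per_sample):
            logits = model(batch)
            if logits.argmax(dim=1).item() == label:  # recognition == mark type
                break                                 # move to the next sample
            optimizer.zero_grad()
            criterion(logits, target).backward()      # adjust network parameters
            optimizer.step()
    return model                                      # the deep learning model
```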
In a specific embodiment, step 1042 specifically includes:
Step 10421, adding a target area marker to the sample image.
Step 10422, inputting a sample image into the first convolutional neural network for training; if the target area obtained by training differs from the target area label, adjusting the parameters of the first convolutional neural network until the target area obtained by training is the same as the target area label, and then inputting the next sample image into the first convolutional neural network for training.
Step 10423, taking the trained first convolutional neural network as the target area detection model (Object Region Detection Model, ORDM).
Step 10424, acquiring a predetermined number of target area image samples and adding a corresponding classification label to each target area image sample, to serve as classification training samples. A target area image sample may be a target area image output during training of the target area detection model, or an image of a target area obtained from the network or another device.
Step 10425, inputting the classification training sample to the second convolutional neural network for training, if the classification result output by the training is different from the classification label, adjusting the parameter of the second convolutional neural network until the classification result output by the training is the same as the classification label, and then inputting the next classification training sample to the second convolutional neural network for training.
Step 10426, adding a judgment layer for judging whether the output type belongs to the bad type to the output layer of the trained second convolutional neural network, and using the added second convolutional neural network as an Object Classification Model (OCM).
Step 10427, combining the target area detection model and the target classification model to serve as the deep learning model.
In the above scheme, the target area detection model performs region recognition on the image, and the target classification model classifies each corresponding region image and determines its category. The hidden layers of the two models are not merged into a single network but are trained separately, so the two models can be trained at the same time; each of the two separated convolutional neural networks is comparatively small, trains faster, and produces more accurate output.
After the target classification model produces a classification result, the last layer of the model judges whether the category belongs to the bad class: if so, it outputs that the result belongs to the bad category; if not, that it belongs to the normal category.
Combining the target area detection model and the target classification model into the deep learning model therefore makes the training process quicker and gives the resulting model higher recognition precision.
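At inference time the two combined sub-models might interact as in the sketch below; the box format returned by the region model, the crop handling and the set of bad class indices are assumptions made for illustration.

```python
import torch

BAD_CLASSES = {3, 7}  # assumed indices of categories judged to be bad


@torch.no_grad()
def analyze(image, region_model, classifier):
    """`region_model(image)` is assumed to return (x0, y0, x1, y1) pixel
    boxes for the foreground objects; `classifier` labels each crop, and a
    final judgment step decides whether any label is a bad category."""
    for x0, y0, x1, y1 in region_model(image):
        crop = image[:, :, y0:y1, x0:x1]          # NCHW crop of the target area
        label = classifier(crop).argmax(dim=1).item()
        if label in BAD_CLASSES:                  # the added judgment layer
            return "bad"
    return "normal"
```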
In a specific embodiment, step 105 specifically includes: if the analysis result is the bad category, sending a turn-off instruction to the network terminal together with a pre-stored preset image integrated with the instruction, and controlling the network terminal to stop displaying the current picture and display the preset image instead.
With this scheme, if the network terminal's display were simply switched off, the user would not know why. A preset image can therefore be prepared in advance carrying a message such as 'contains bad content', which improves user experience.
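One possible way to integrate the turn-off instruction with the pre-stored preset image in a single message is sketched below; the JSON layout and field names are assumptions, since the patent only requires that the instruction and the picture travel together.

```python
import base64
import json


def build_turnoff_message(preset_png: bytes) -> bytes:
    """Bundle the turn-off instruction with the preset image (for example a
    picture carrying the text 'contains bad content')."""
    return json.dumps({
        "command": "turn_off",
        "preset_image_b64": base64.b64encode(preset_png).decode("ascii"),
    }).encode("utf-8")
```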
In a specific embodiment, step 105 specifically includes: if the analysis result belongs to the bad category, adding 1 to the corresponding accumulated count; and, when the accumulated count exceeds a preset threshold, sending a turn-off instruction to the network terminal and controlling the network terminal to stop displaying.
With this scheme, to avoid misidentifying bad content, the analysis results are counted cumulatively; only if the consecutive count exceeds the preset threshold (for example, 3) is the network terminal deemed to be displaying bad content and the turn-off instruction sent.
In addition, if the analysis result is the normal category, the accumulated count is cleared.
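The count accumulation, its clearing on a normal result, and the earlier-mentioned reopening after consecutive normal results can be kept per terminal, as in this sketch (both thresholds use the example value 3 from the description):

```python
class TerminalState:
    """Tracks consecutive bad results before turn-off and consecutive
    normal results before display is restored."""

    def __init__(self, bad_threshold=3, normal_threshold=3):
        self.bad_threshold = bad_threshold
        self.normal_threshold = normal_threshold
        self.bad_count = 0
        self.normal_count = 0

    def update(self, result: str):
        """Returns "turn_off", "open", or None (no command this round)."""
        if result == "bad":
            self.bad_count += 1       # add 1 to the count accumulation
            self.normal_count = 0
            if self.bad_count > self.bad_threshold:
                return "turn_off"
        else:
            self.bad_count = 0        # a normal result clears the accumulation
            self.normal_count += 1
            if self.normal_count > self.normal_threshold:
                return "open"         # restore normal display
        return None
```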
In a specific embodiment, step 102 specifically includes:
Step 1021, performing gray-level processing on the image to be identified.
Step 1022, extracting gradient information in the horizontal and vertical directions of the image to be identified using a gradient operator and an image convolution algorithm.
Step 1023, searching for image edges in the horizontal and vertical gradient information with a gradient local non-maximum suppression algorithm, and taking the found image edges as the edge gradient information.
Then, step 103 specifically includes:
and taking the image area corresponding to the edge gradient information which can be combined into a regular rectangular area as the sub-image information to be analyzed.
In the above scheme, edge recognition is performed on every foreground object in the image to be identified, and the resulting edge gradient information lies closest to the true edge of the object. The area enclosed by edge gradients that combine into a rectangle is used as the sub-image information to be analyzed, which is then screened for bad information according to steps 104 and 105 and the elaborations of step 104 above, determining whether the sub-image contains bad information.
One image to be identified may yield one or more pieces of sub-image information to be analyzed.
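A sketch of steps 1021 to 1023 together with the rectangular-region extraction, using OpenCV; here cv2.Canny stands in for the gradient local non-maximum suppression (which it performs internally), and bounding boxes of contours stand in for combining edge gradients into regular rectangular areas, so this approximates rather than reproduces the patented procedure. The area and gradient thresholds are assumed values.

```python
import cv2
import numpy as np

CANNY_LOW, CANNY_HIGH = 50, 150  # assumed hysteresis thresholds
MIN_MEAN_GRADIENT = 40.0         # assumed lower bound of the edge threshold range
MIN_AREA = 32 * 32               # assumed minimum region size worth analyzing


def extract_subimages(image: np.ndarray):
    """Gray-level processing, horizontal/vertical gradients, edge search,
    and extraction of rectangular sub-images to be analyzed."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)   # horizontal-direction gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)   # vertical-direction gradient
    magnitude = cv2.magnitude(gx, gy)
    edges = cv2.Canny(gray, CANNY_LOW, CANNY_HIGH)  # thinned image edges
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    subimages = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)     # a regular rectangular area
        if w * h < MIN_AREA:
            continue                         # too small to analyze
        if magnitude[y:y + h, x:x + w].mean() < MIN_MEAN_GRADIENT:
            continue                         # outside the edge threshold range
        subimages.append(image[y:y + h, x:x + w])
    return subimages
```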
Through the scheme of this embodiment, the screenshot information periodically sent by the network terminal over the local area network can be analyzed and judged with the deep learning model to determine whether the screenshot contains bad content; if so, a turn-off instruction is sent to the network terminal, which stops displaying the current page. Every network terminal within the local area network can thus be managed and controlled, preventing bad information from affecting users. In addition, since the deep learning model is obtained by training a convolutional neural network, it can keep learning, and continued learning steadily improves its recognition accuracy.
According to the method of the foregoing embodiment, this embodiment provides a local area network terminal device, connected to a network terminal through a local area network, as shown in fig. 2, including:
the receiving module 21 is configured to receive screenshot information periodically sent by a network terminal, and use the screenshot information as an image to be identified;
an extraction module 22, configured to extract edge gradient information of an image to be identified; extracting an image area corresponding to edge gradient information which accords with an edge threshold range and/or a preset shape characteristic to serve as sub-image information to be analyzed;
the analysis module 23 is configured to input the image to be recognized and/or the sub-image information to be analyzed into a deep learning model for analysis, and obtain an analysis result, where the analysis result is whether the image belongs to a bad category, and the deep learning model is obtained by performing learning training on a sample image by using a convolutional neural network;
and the control module 24 is configured to send a shutdown instruction to the network terminal if the analysis result is that the network terminal belongs to the bad category, and control the network terminal to stop displaying.
In a specific embodiment, the receiving module 21 specifically includes:
the zooming unit is used for carrying out multi-scale zooming on the screenshot information to obtain a plurality of zoomed images;
the padding unit is used for padding the plurality of zoomed images with blanks to the same size as the screenshot information, and taking the padded zoomed images together with the screenshot information as the images to be identified, of which there are a plurality;
the analysis module 23 outputs an analysis result for each image to be recognized;
and the control module 24 is configured to judge the proportion of bad-category results among the analysis results, determine that the screenshot information contains bad information if the proportion exceeds a predetermined ratio threshold, send a turn-off instruction to the corresponding network terminal, and control the network terminal to stop displaying.
In a particular embodiment, the apparatus further comprises:
the acquisition module is used for acquiring a preset number of sample images;
and the training module is used for adding a mark type into the sample image, inputting the sample image into the convolutional neural network for training and identification, if the training and identification result is different from the mark type, adjusting the parameters of the convolutional neural network until the training and identification result is the same as the mark type, inputting the next sample image into the convolutional neural network for training and identification, and taking the convolutional neural network after the sample image is completely trained as a deep learning model.
In a specific embodiment, the training module specifically includes:
a marking unit for adding a target area mark to the sample image;
the first training unit is used for inputting a sample image into the first convolutional neural network for training, if a target area obtained by training is different from a target area label, adjusting parameters of the first convolutional neural network until the target area obtained by training is the same as the target area label, and then inputting the next sample image into the first convolutional neural network for training;
the model determining unit is used for taking the trained first convolution neural network as a target area detection model;
the marking unit is also used for acquiring a predetermined number of target area image samples, and adding a corresponding classification label for each target area image sample to serve as a classification training sample;
the second training unit is used for inputting the classified training samples into a second convolutional neural network for training, if the classification result output by training is different from the classification label, adjusting the parameters of the second convolutional neural network until the classification result output by training is the same as the classification label, and then inputting the next classified training sample into the second convolutional neural network for training;
the model determining unit is further used for adding a judging layer for judging whether the output class belongs to the bad class or not in the output layer of the trained second convolutional neural network, and taking the added second convolutional neural network as a target classification model;
and the combination unit is used for combining the target area detection model and the target classification model to be used as a deep learning model.
In a specific embodiment, the control module 24 is specifically configured to, if the analysis result is the bad category, send a turn-off instruction to the network terminal, integrate the pre-stored preset image with the turn-off instruction and send it to the network terminal, and control the network terminal to stop displaying the current picture while displaying the preset image.
In a specific embodiment, the control module 24 is further configured to add 1 to the corresponding accumulated count if the analysis result is the bad category, and, when the accumulated count exceeds a preset threshold, send a turn-off instruction to the network terminal and control the network terminal to stop displaying.
In a specific embodiment, the extraction module 22 specifically includes:
the gray processing unit is used for carrying out gray processing on the image to be recognized;
the edge gradient determining unit is used for extracting gradient information in the horizontal and vertical directions of the image to be identified using a gradient operator and an image convolution algorithm, searching for image edges in that gradient information with a gradient local non-maximum suppression algorithm, and taking the found image edges as the edge gradient information.
And the extraction module is further configured to use an image region corresponding to the edge gradient information that can be combined into the regular rectangular region as the sub-image information to be analyzed.
A local area network terminal device according to another embodiment of the present application is connected to a network terminal through a local area network, as shown in fig. 3.
Network terminal: corresponding monitoring software/APP is installed on each network all-in-one machine (i.e., network terminal). The information transmission process is shown in fig. 4, and its main functions are:
(1) capturing the display picture of the network all-in-one machine periodically (e.g., every 1 second or 5 seconds);
(2) transmitting the screenshot picture to the AIFT computing center (i.e., the local area network terminal device) over the local area network;
(3) receiving the screenshot judgment results and control commands fed back by the AIFT computing center (such as: the screenshot is normal, continue displaying; the screenshot contains bad content, shut off the display; the screenshot contains bad content, switch to a preset picture; and so on);
(4) continuing to monitor the all-in-one machine's original output video stream internally while uploading periodic screenshots to the AIFT; when the video stream returns to normal, the AIFT feeds back that the screenshots have recovered from bad to normal and that the screen display should be restored, whereupon the screen is restored to normal.
The software/APP is ultimately embodied as a process/service on the network all-in-one machine, acting silently in the background; a sketch of this monitoring loop follows.
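The sketch below illustrates the loop; the AIFT address, transport, payload shape and command names are assumptions of this sketch, since the patent does not fix a protocol.

```python
import io
import time

import requests
from PIL import ImageGrab  # screen capture; platform support varies

AIFT_URL = "http://192.168.0.10:8080/screenshot"  # assumed AIFT address
PERIOD_S = 1.0                                    # screenshot period, e.g. 1 s
TERMINAL_ID = "hall-screen-01"                    # this terminal's identification code


def monitor_loop():
    """Periodically capture the display, upload it over the local area
    network, and apply the command fed back by the AIFT."""
    while True:
        shot = ImageGrab.grab()                   # (1) screen capture
        buf = io.BytesIO()
        shot.save(buf, format="PNG")
        reply = requests.post(                    # (2) transmit over the LAN
            AIFT_URL,
            files={"image": buf.getvalue()},
            data={"terminal_id": TERMINAL_ID},
            timeout=5,
        ).json()                                  # (3) judgment result + command
        if reply.get("command") == "turn_off":
            pass  # stop displaying / switch to the preset picture
        elif reply.get("command") == "open":
            pass  # (4) restore normal screen display
        time.sleep(PERIOD_S)
```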
Local area network AI computing center (AIFT): the information transmission process is shown in fig. 5, and the main functions are:
receiving the screenshots uploaded by the monitoring software/APP on all the network all-in-one machines in the local area network and intelligently recognizing the images and text information in them; if bad content is found, sending the judgment result and a control command to the monitoring software/APP, requiring it to shut off the all-in-one machine's screen display or switch to a default picture;
and if the screenshot content is normal, sending the judgment result and control command to the monitoring software/APP, requiring it to restore or maintain normal playback on the all-in-one machine.
The 'bad information identification algorithm' is detailed as follows:
the bad information identification algorithm is developed based on image processing and deep machine learning technology.
After receiving a picture to be identified, the 'bad information identification algorithm' module first preprocesses it: the brightness and contrast are adaptively adjusted and high-frequency noise is smoothed. It then detects whether the picture contains sub-regions with distinctive features: edge information is extracted by gradient local non-maximum suppression (as shown in fig. 6), and regions whose image edge gradients satisfy the threshold condition and preset shape features are extracted as sub-pictures to be analyzed. The original picture and the sub-pictures are analyzed simultaneously, which improves the accuracy of content identification.
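The adaptive brightness/contrast adjustment and noise smoothing are not specified further; as a sketch, CLAHE on the luminance channel and a small Gaussian blur are assumed stand-ins, not the patent's actual preprocessing.

```python
import cv2


def preprocess(image):
    """Adaptively adjust brightness/contrast, then smooth high-frequency noise."""
    lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))    # contrast-limited equalization
    image = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    return cv2.GaussianBlur(image, (3, 3), 0)  # suppress high-frequency noise
```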
After image processing, the image to be identified is segmented into a plurality of image regions to be analyzed.
The image regions to be analyzed are then processed with a deep learning algorithm. Deep learning is a newer field of machine learning research; deep models have excellent feature-learning ability, interpret data in a way that imitates mechanisms of the human brain, and learn features that describe the data more essentially, which benefits visualization and classification.
The deep learning model used to analyze image content is built on convolutional neural networks and consists of two sub-models: an Object Region Detection Model (ORDM) and an Object Classification Model (OCM). The former finds foreground objects in the image and extracts their coordinate regions; the latter analyzes each target area and produces a classification result. Both the ORDM and the OCM are built as fully convolutional networks (FCN) on top of the CNN, so they adapt to the input image size and directly output full-image pixel-level predictions from a single input, simplifying the processing flow and increasing computation speed.
The region detection model and the target classification model are trained on differently labeled data sets: the former extracts the edge features of foreground objects, with foreground, edge and background as its classification labels; the latter extracts object categories, with the true target class as its label. Each separated model attends to a more specific scene, so smaller-scale network training achieves higher accuracy, overall resource usage is lower, and computation is faster.
To further improve accuracy at prediction time, the model's input image is zoomed at multiple scales, that is, different viewing angles are applied to the same picture, avoiding poor recognition of objects that are too large or too small at one fixed resolution. Blank areas are added to the differently sized images to build a group of pictures of identical size, enabling batch acceleration of the multi-scale images in the FCN (the deep learning model).
The process analysis of the obtained deep learning model is shown in fig. 7.
Based on the methods of the foregoing embodiments, the present embodiment provides a computer device, as shown in fig. 8, including a memory 82 and a processor 81, where the memory 82 and the processor 81 are both disposed on a bus 83, the memory 82 stores a computer program, and the processor 81 executes the computer program to implement the bad information identification method of the network terminal shown in fig. 1.
Based on such understanding, the technical solution of the present application may be embodied as a software product, which can be stored in non-volatile storage (a CD-ROM, a USB flash drive, a removable hard disk, etc.) and includes several instructions that enable a computer device (a personal computer, a server, a network device, etc.) to execute the methods described in the implementation scenarios of the present application.
Optionally, the device may also be connected to a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, sensors, audio circuitry, a WI-FI module, and so forth. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a bluetooth interface, WI-FI interface), etc.
Those skilled in the art will appreciate that the configuration of a computer device provided in the present embodiment does not constitute a limitation of the physical device, and may include more or less components, or some components in combination, or a different arrangement of components.
Based on the above embodiments of the method shown in fig. 1 and the device shown in fig. 2, correspondingly, the present application further provides a storage medium on which a computer program is stored; when the computer program is executed by a processor, the bad information identification method of the network terminal shown in fig. 1 is implemented.
The storage medium may further include an operating system and a network communication module. An operating system is a program that manages the hardware and software resources of a computer device, supporting the operation of information handling programs, as well as other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and other hardware and software in the computer equipment.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, and can also be implemented by hardware.
By applying the technical scheme of the present application, the screenshot information periodically sent by the network terminal over the local area network can be analyzed and judged with the deep learning model to determine whether the screenshot contains bad content; if so, a turn-off instruction is sent to the network terminal, which stops displaying the current page. Every network terminal within the local area network can thus be managed and controlled, preventing bad information from affecting users. In addition, since the deep learning model is obtained by training a convolutional neural network, it can keep learning, and continued learning steadily improves its recognition accuracy.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (10)

1. A bad information identification method for a network terminal, characterized in that the method is applied to a local area network terminal device connected to the network terminal through a local area network, and comprises the following steps:
receiving screenshot information periodically sent by the network terminal, and taking the screenshot information as an image to be identified;
extracting edge gradient information of the image to be identified;
extracting an image area corresponding to edge gradient information that conforms to an edge threshold range and/or a preset shape characteristic, to serve as sub-image information to be analyzed;
inputting the image to be recognized and/or the sub-image information to be analyzed into a deep learning model for analysis, and obtaining an analysis result indicating whether the image belongs to a bad category, wherein the deep learning model is obtained by training a convolutional neural network with sample images;
and if the analysis result is of a bad category, sending a shutdown instruction to the network terminal and controlling the network terminal to stop displaying the current screen.
2. The method according to claim 1, wherein receiving the screenshot information periodically sent by the network terminal and taking the screenshot information as an image to be identified specifically comprises:
performing multi-scale scaling on the screenshot information to obtain a plurality of scaled images;
padding the scaled images with blanks so that each padded image has the same size as the screenshot information, and taking the padded scaled images together with the screenshot information as the images to be identified, the images to be identified being plural in number;
the deep learning model outputting an analysis result for each image to be recognized;
and judging the proportion of bad categories among the analysis results; if the proportion exceeds a preset proportion threshold, determining that the screenshot information contains bad information, sending a shutdown instruction to the corresponding network terminal, and controlling the network terminal to stop displaying.
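As an illustration only (not part of the claims), the multi-scale preprocessing and proportion-based vote of claim 2 could look roughly like the following Python sketch using Pillow; the scale factors, the padding colour, and the voting threshold are assumed values.

```python
from PIL import Image

SCALES = (0.5, 0.75, 1.25, 1.5)   # assumed scale factors
BAD_RATIO_THRESHOLD = 0.5          # assumed preset proportion threshold

def multiscale_images(screenshot: Image.Image) -> list[Image.Image]:
    """Scale the screenshot at several ratios, pad each back to the original size."""
    w, h = screenshot.size
    images = [screenshot]                  # the original counts as one image
    for s in SCALES:
        scaled = screenshot.resize((int(w * s), int(h * s)))
        canvas = Image.new("RGB", (w, h))  # blank (black) padding canvas
        canvas.paste(scaled, (0, 0))       # larger scales are clipped to the canvas
        images.append(canvas)
    return images

def contains_bad_content(screenshot: Image.Image, classify) -> bool:
    """classify: image -> True if the model labels it as a bad category."""
    results = [classify(img) for img in multiscale_images(screenshot)]
    return sum(results) / len(results) > BAD_RATIO_THRESHOLD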
3. The method according to claim 1, further comprising, before inputting the image to be recognized and/or the sub-image information to be analyzed into the deep learning model for analysis and obtaining the analysis result:
acquiring a predetermined number of sample images;
and, after adding a label type to each sample image, inputting the sample image into a convolutional neural network for training and recognition; if the training and recognition result differs from the label type, adjusting the parameters of the convolutional neural network until the training and recognition result is the same as the label type, and then inputting the next sample image into the convolutional neural network for training and recognition; and taking the convolutional neural network trained on all sample images as the deep learning model.
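A rough sketch of the training procedure in claim 3, using PyTorch as an assumed framework, follows. Repeating gradient steps on a single sample until it is recognized correctly mirrors the claim's wording, though conventional epoch-based training is more common in practice; `max_steps` is an assumed safety bound.

```python
import torch
from torch import nn

def train_until_correct(model: nn.Module, samples, max_steps: int = 100) -> nn.Module:
    """samples: iterable of (image_tensor, label_index) pairs."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    model.train()
    for image, label in samples:
        image = image.unsqueeze(0)                    # add batch dimension
        target = torch.tensor([label])
        for _ in range(max_steps):                    # adjust parameters until...
            logits = model(image)
            if logits.argmax(dim=1).item() == label:  # ...prediction matches label
                break
            optimizer.zero_grad()
            criterion(logits, target).backward()
            optimizer.step()
    return model                                      # the trained deep learning model
```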
4. The method according to claim 3, wherein adding a label type to each sample image, inputting the sample image into a convolutional neural network for training and recognition, adjusting the parameters of the convolutional neural network if the training and recognition result differs from the label type until the training and recognition result is the same as the label type, then inputting the next sample image into the convolutional neural network for training and recognition, and taking the convolutional neural network trained on all sample images as the deep learning model specifically comprises:
adding a target area label to each sample image;
inputting the sample image into a first convolutional neural network for training; if the target area obtained by training differs from the target area label, adjusting the parameters of the first convolutional neural network until the target area obtained by training is the same as the target area label, and then inputting the next sample image into the first convolutional neural network for training;
taking the trained first convolutional neural network as a target area detection model;
acquiring a predetermined number of target area image samples, and adding a corresponding classification label to each target area image sample to serve as classification training samples;
inputting the classification training samples into a second convolutional neural network for training; if the classification result output by training differs from the classification label, adjusting the parameters of the second convolutional neural network until the classification result output by training is the same as the classification label, and then inputting the next classification training sample into the second convolutional neural network for training;
adding, on the output layer of the trained second convolutional neural network, a judgment layer for judging whether the output class belongs to a bad category, and taking the resulting second convolutional neural network as a target classification model;
and combining the target area detection model and the target classification model to serve as the deep learning model.
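The two-stage model of claim 4 might be combined as in the following hypothetical PyTorch sketch; the detector's output format and the set of bad class indices are assumptions, and resizing each crop to the classifier's expected input size is omitted for brevity.

```python
import torch
from torch import nn

BAD_CLASSES = {3, 7}   # hypothetical indices of classes judged "bad"

class BadContentModel(nn.Module):
    def __init__(self, detector: nn.Module, classifier: nn.Module):
        super().__init__()
        self.detector = detector      # first CNN: image -> region boxes
        self.classifier = classifier  # second CNN: region crop -> class logits

    def forward(self, image: torch.Tensor) -> bool:
        # Assumed detector output: an (N, 4) tensor of (x1, y1, x2, y2) boxes.
        boxes = self.detector(image)
        for x1, y1, x2, y2 in boxes.int().tolist():
            crop = image[:, :, y1:y2, x1:x2]
            logits = self.classifier(crop)
            # The added "judgment layer": map the predicted class to bad / not bad.
            if logits.argmax(dim=1).item() in BAD_CLASSES:
                return True
        return False
```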
5. The method according to claim 1, wherein sending a shutdown instruction to the network terminal to control the network terminal to stop displaying the current screen if the analysis result is of a bad category is specifically:
if the analysis result is of a bad category, integrating a pre-stored preset image with the shutdown instruction and sending the integrated message to the network terminal, thereby controlling the network terminal to stop displaying the current screen and simultaneously display the preset image.
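One possible (assumed) encoding of the combined message in claim 5 is a JSON payload carrying both the instruction and the base64-encoded preset image; the field names are illustrative only.

```python
import base64
import json

def build_shutdown_message(preset_image: bytes) -> bytes:
    """Bundle the shutdown instruction with the pre-stored preset image."""
    return json.dumps({
        "command": "shutdown_display",  # stop displaying the current screen
        "preset_image": base64.b64encode(preset_image).decode("ascii"),
    }).encode("utf-8")
```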
6. The method according to claim 1, wherein sending a shutdown instruction to the network terminal to control the network terminal to stop displaying the current screen if the analysis result is of a bad category specifically comprises:
if the analysis result belongs to a bad category, incrementing a corresponding accumulated count by 1;
and when the accumulated count exceeds a preset threshold, sending a shutdown instruction to the network terminal and controlling the network terminal to stop displaying.
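The accumulated count of claim 6 acts as a debounce against one-off misclassifications. A minimal sketch, with an assumed threshold value and per-terminal counters, follows.

```python
from collections import defaultdict

COUNT_THRESHOLD = 3   # assumed preset threshold
_bad_counts: defaultdict[str, int] = defaultdict(int)

def record_result(terminal_id: str, is_bad: bool) -> bool:
    """Return True when the terminal should receive a shutdown instruction."""
    if is_bad:
        _bad_counts[terminal_id] += 1   # add 1 to the accumulated count
    return _bad_counts[terminal_id] > COUNT_THRESHOLD
```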
7. The method according to claim 1, wherein extracting the edge gradient information of the image to be recognized specifically comprises:
performing gray-level processing on the image to be identified;
extracting gradient information in the horizontal and vertical directions of the image to be identified by using a gradient operator and an image convolution algorithm;
searching for image edges in the horizontal and vertical gradient information by using a gradient local non-maximum suppression algorithm, and taking the found image edges as the edge gradient information;
and then extracting the image area corresponding to the edge gradient information that conforms to the edge threshold range and/or the preset shape characteristic to serve as sub-image information to be analyzed specifically comprises:
taking the image area corresponding to edge gradient information that can be combined into a regular rectangular area as the sub-image information to be analyzed.
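For illustration, the edge-gradient pipeline of claim 7 can be approximated with OpenCV as below; Canny internally performs the Sobel gradient computation and local non-maximum suppression that the claim describes, and the thresholds and the rectangularity test are assumptions.

```python
import cv2
import numpy as np

def extract_subimages(image_bgr: np.ndarray) -> list[np.ndarray]:
    """Return roughly rectangular edge regions as sub-images to be analyzed."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)  # gray-level processing
    # Horizontal/vertical gradients plus non-maximum suppression, via Canny.
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        # Keep only edges that close into a roughly rectangular region.
        if w * h > 0 and cv2.contourArea(contour) / (w * h) > 0.8:
            regions.append(image_bgr[y:y + h, x:x + w])
    return regions
```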
8. A local area network terminal device connected to a network terminal through a local area network, comprising:
a receiving module, configured to receive screenshot information periodically sent by the network terminal and take the screenshot information as an image to be identified;
an extraction module, configured to extract edge gradient information of the image to be identified, and to extract an image area corresponding to edge gradient information that conforms to an edge threshold range and/or a preset shape characteristic to serve as sub-image information to be analyzed;
an analysis module, configured to input the image to be recognized and/or the sub-image information to be analyzed into a deep learning model for analysis and obtain an analysis result indicating whether the image belongs to a bad category, wherein the deep learning model is obtained by training a convolutional neural network with sample images;
and a control module, configured to send a shutdown instruction to the network terminal and control the network terminal to stop displaying if the analysis result belongs to a bad category.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the bad information identification method of the network terminal according to any one of claims 1 to 7.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the bad information identification method of the network terminal according to any one of claims 1 to 7.
CN202011092717.5A 2020-10-13 2020-10-13 Bad information identification method of network terminal and local area network terminal equipment Active CN112633313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011092717.5A CN112633313B (en) 2020-10-13 2020-10-13 Bad information identification method of network terminal and local area network terminal equipment


Publications (2)

Publication Number Publication Date
CN112633313A (en) 2021-04-09
CN112633313B (en) 2021-12-03

Family

ID=75302785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011092717.5A Active CN112633313B (en) 2020-10-13 2020-10-13 Bad information identification method of network terminal and local area network terminal equipment

Country Status (1)

Country Link
CN (1) CN112633313B (en)



Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030126214A1 (en) * 2001-10-04 2003-07-03 Mike Oliszewski Document management system
US20080256176A1 (en) * 2007-04-13 2008-10-16 Kuo-Hsiu Huang Internet radio system and the broadcasting method thereof
CN102143120A (en) * 2010-02-01 2011-08-03 中兴通讯股份有限公司 Method and device for realizing grading multi-media files
CN103748998B (en) * 2010-06-09 2012-02-08 北京理工大学 The supervisory control system of flame and violation service in large-scale internal network
CN103595723A (en) * 2013-11-20 2014-02-19 吉林大学 Junk mail recognition method and device based on message bodies and safety grading
CN106258010A (en) * 2014-02-04 2016-12-28 弗劳恩霍夫应用研究促进协会 2D image dissector
CN106886757A (en) * 2017-01-19 2017-06-23 华中科技大学 A kind of multiclass traffic lights detection method and system based on prior probability image
CN106803930A (en) * 2017-02-10 2017-06-06 上海斐讯数据通信技术有限公司 A kind of intelligent video monitoring method and intelligent router based on router
CN107403200A (en) * 2017-08-10 2017-11-28 北京亚鸿世纪科技发展有限公司 Improve the multiple imperfect picture sorting technique of image segmentation algorithm combination deep learning
CN107566903A (en) * 2017-09-11 2018-01-09 北京匠数科技有限公司 A kind of video filtering device and method, video display system
CN107613225A (en) * 2017-09-11 2018-01-19 北京匠数科技有限公司 A kind of track traffic displaying device for filtering information and method, information display system
CN109491970A (en) * 2018-10-11 2019-03-19 平安科技(深圳)有限公司 Imperfect picture detection method, device and storage medium towards cloud storage
CN110020651A (en) * 2019-04-19 2019-07-16 福州大学 Car plate detection localization method based on deep learning network
CN110781849A (en) * 2019-10-30 2020-02-11 北京锐安科技有限公司 Image processing method, device, equipment and storage medium
CN111522987A (en) * 2020-04-24 2020-08-11 腾讯科技(深圳)有限公司 Image auditing method and device and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU Zheng: "Design and Implementation of a Network Bad Information Detection System", China Excellent Doctoral and Master's Dissertations Full-text Database (Master's) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113568592A (en) * 2021-06-25 2021-10-29 展讯半导体(南京)有限公司 Display control method of terminal, terminal and storage medium
CN113610120A (en) * 2021-07-21 2021-11-05 燕山大学 App image content safety detection method based on weak supervised learning
CN113610120B (en) * 2021-07-21 2023-09-29 燕山大学 App image content safety detection method based on weak supervision learning
CN113608984A (en) * 2021-07-30 2021-11-05 北京北信源软件股份有限公司 Display content analysis method, device, server and storage medium
CN116419008A (en) * 2023-03-15 2023-07-11 苏州匠数科技有限公司 Real-time detection method and system for operation playing video and electronic equipment
CN116419008B (en) * 2023-03-15 2024-05-10 苏州匠数科技有限公司 Real-time detection method and system for operation playing video and electronic equipment

Also Published As

Publication number Publication date
CN112633313B (en) 2021-12-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant