CN111199172A - Terminal screen recording-based processing method and device and storage medium - Google Patents
- Publication number
- CN111199172A (application number CN201811375939.0A)
- Authority
- CN
- China
- Prior art keywords
- terminal
- screen recording
- processing unit
- acquiring
- target identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Debugging And Monitoring (AREA)
Abstract
The invention discloses a processing method based on terminal screen recording, which comprises: acquiring screen recording data of a terminal; acquiring target identification content according to the screen recording data; and monitoring the terminal according to the target identification content. The invention also discloses a processing device based on terminal screen recording and a computer storage medium.
Description
Technical Field
The invention relates to the field of data processing, in particular to a processing method and device based on terminal screen recording and a storage medium.
Background
Security auditing methods and products are still at an early stage of development, and most auditing is performed on logs in text format. Vendors focus on improving how quickly logs can be collected and queried; this approach produces visible results and is easy to turn into a product, but it amounts to little more than a tool. The essence of security auditing, however, is to help other security systems build a more intelligent and more refined defense system: it should act as the brain, effectively helping users formulate dynamic defense strategies and thereby achieve precise defense.
Log-based auditing methods have two limitations. On the one hand, the integrity of the logs is difficult to guarantee: the auditing system's collection of logs depends on each source system generating and pushing them, and omissions and manual intervention are possible throughout the log life cycle. For example, the amount of log data an application system generates each day, the amount actually sent, and whether any intermediate link loses data cannot be known. On the other hand, comparing and cross-checking log data in a timely manner consumes enormous manpower and material resources; it is time-consuming, labor-intensive, and requires a huge investment.
Disclosure of Invention
The invention provides a terminal screen recording-based processing method and device and a storage medium.
The technical scheme of the invention is realized as follows:
in one aspect, a processing method based on terminal screen recording is provided, which includes:
acquiring screen recording data of a terminal;
acquiring target identification content according to the screen recording data;
and monitoring the terminal according to the target identification content.
Further, the obtaining of the target identification content according to the screen recording data includes:
and acquiring an operation sequence of the terminal according to the screen recording data.
Further, the obtaining of the target identification content according to the screen recording data includes:
performing video frame cutting processing on the screen recording data to obtain a processing unit, wherein the processing unit is 1 frame or N consecutive frames of data, and N is a positive integer not less than 2;
and analyzing the processing unit to obtain the semantic label of the processing unit.
Further, the semantic label of the processing unit and the screen recording time information of the processing unit form an operation field of the processing unit;
the operation fields of a plurality of the processing units constitute an operation sequence of the terminal.
Further, the analyzing the processing unit to obtain the semantic tag of the processing unit specifically includes:
tracking a cursor of the terminal based on the processing unit to obtain a detection result;
identifying characters within a certain range by taking the cursor as the center based on the processing unit, and acquiring an identification result;
and acquiring the semantic label of the processing unit based on the detection result and the identification result.
Further, the cursor includes: a mouse cursor and a keyboard cursor.
Further, the monitoring the terminal according to the target identification content specifically includes:
and performing sensitive data analysis on the operation sequence to acquire potential violation behaviors.
Further, the monitoring the terminal according to the target identification content further includes:
and acquiring a user behavior track based on the operation sequence of the terminal.
The invention also provides a processing device based on the terminal screen recording, which comprises:
the first acquisition unit is used for acquiring screen recording data of the terminal;
the second acquisition unit is used for acquiring target identification content according to the screen recording data;
and the monitoring unit is used for monitoring the terminal according to the target identification content.
The present invention also provides a computer storage medium having computer-executable instructions stored thereon; when the computer-executable instructions are executed, the processing method based on terminal screen recording described above can be implemented.
The invention provides a processing method and device based on terminal screen recording and a storage medium, which acquire screen recording data of a terminal, obtain target identification content from the screen recording data, and monitor the terminal according to the target identification content. Because log collection is subject to omissions and manual intervention, the integrity of logs is difficult to guarantee, so log-based auditing cannot audit user behavior comprehensively. By monitoring the terminal through target identification content obtained from the terminal's screen recording data, the invention can audit user behavior comprehensively, in contrast to log-based auditing. In addition, the invention uses NLP (Natural Language Processing) technology to monitor the target identification content and identify potential violations; compared with methods that require manual comparison and checking of log data, this saves resources and improves efficiency.
Drawings
Fig. 1 is a schematic flowchart of a processing method based on terminal screen recording according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a processing apparatus based on terminal screen recording according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of another processing method based on terminal screen recording according to an embodiment of the present invention;
fig. 4 is a diagram of a video intelligent analysis platform architecture according to an embodiment of the present invention;
fig. 5 is a system framework diagram of a video intelligent analysis platform according to an embodiment of the present invention;
fig. 6 is a processing flow chart of another processing method based on terminal video according to an embodiment of the present invention.
Detailed Description
In various embodiments of the invention, the terminal is monitored through target identification content obtained from the terminal's screen recording data, which reduces the limitations of log-based behavior monitoring caused by the difficulty of guaranteeing the integrity of text-format logs. To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a processing method based on terminal screen recording according to an embodiment of the present invention, and as shown in fig. 1, the processing method based on terminal screen recording includes the following steps:
step 101: acquiring screen recording data of a terminal;
step 102: acquiring target identification content according to the screen recording data;
step 103: and monitoring the terminal according to the target identification content.
The screen recording data is a video file obtained by recording the screen of the terminal. It can be sent to the server on which the method of the invention is deployed either directly by the terminal or by another screen recording tool. The other screen recording tool may be any device with a camera function, such as a camera, a mobile phone or tablet computer with a camera, or other user equipment.
The terminal can be triggered to record its screen and send the screen recording data to the server on which the method of the invention is deployed in the following ways: the terminal user triggers the screen recording manually; the terminal records the screen at set time intervals; the terminal records the screen within a preset period of time; or a specific behavior of the end user triggers the screen recording, such as logging in to a designated system or modifying permissions in that system. The designated system may be one with high requirements on data security, for example a system related to bank fund management or a disease-management system involving personal privacy.
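As an illustration of how such trigger policies could be expressed in code, the following is a minimal sketch; the system names, time window, interval and event fields are hypothetical and not taken from the patent:

```python
from datetime import datetime, time

# Hypothetical policy values -- illustrative only, not specified by the patent.
SENSITIVE_SYSTEMS = {"bank_fund_mgmt", "disease_mgmt"}   # systems with high data-security requirements
RECORDING_WINDOW = (time(9, 0), time(18, 0))             # preset recording period
INTERVAL_SECONDS = 1800                                   # timed recording every 30 minutes

def should_start_recording(event: dict, last_recording: datetime, now: datetime) -> bool:
    """Decide whether a screen recording should be triggered for the terminal."""
    if event.get("type") == "manual_trigger":                          # the user starts recording manually
        return True
    if (now - last_recording).total_seconds() >= INTERVAL_SECONDS:     # set time interval reached
        return True
    if RECORDING_WINDOW[0] <= now.time() <= RECORDING_WINDOW[1]:       # inside the preset period
        return True
    if event.get("type") in {"login", "permission_change"} \
            and event.get("system") in SENSITIVE_SYSTEMS:              # sensitive user behavior
        return True
    return False

# Example: logging in to a designated sensitive system triggers recording.
print(should_start_recording({"type": "login", "system": "bank_fund_mgmt"},
                             last_recording=datetime(2018, 11, 21, 21, 50),
                             now=datetime(2018, 11, 21, 22, 5)))
```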
The target identification content is information of analytical value obtained from the screen recording data, and may take the form of images or text. Image processing techniques, natural language processing techniques and the like may be applied to the screen recording data to obtain the target identification content.
Based on the target identification content, a user behavior profile can be built, and the user's behavior can be further analyzed and predicted from that profile; the target identification content can also be correlated with other feature data, or analyzed for sensitive data, to identify violations and trace the behavior behind them. A violation is an action by the end user that exceeds the user's rights, leaks data, or otherwise breaches the relevant regulations.
The method thus provides a complete processing scheme covering terminal supervision, content identification, behavior tracing and the like.
Further, the obtaining of the target identification content according to the screen recording data includes:
and acquiring an operation sequence of the terminal according to the screen recording data.
The operation sequence is a sequence of the terminal user's operation behaviors obtained by analyzing the terminal's screen recording data, and can be represented as a text sequence. Each element of the sequence corresponds to one operation behavior of the terminal user, such as logging in to a system, sending an e-mail, or querying a report. The operation sequence can be aggregated in various ways, for example by time, by monitored platform, by monitored terminal, or by monitored user.
Further, the obtaining of the target identification content according to the screen recording data includes:
performing video frame cutting processing on the screen recording data to obtain a processing unit, wherein the processing unit is 1 frame or N consecutive frames of data, and N is a positive integer not less than 2;
and analyzing the processing unit to obtain the semantic label of the processing unit.
Video frame cutting splits the terminal's screen recording data into single-frame image data; the frames can be extracted at a set time interval.
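As a minimal sketch of this frame-cutting step, the following uses OpenCV and assumes the screen recording is an ordinary video file; the sampling interval and file name are illustrative:

```python
import cv2  # OpenCV

def cut_frames(video_path: str, interval_seconds: float = 1.0):
    """Split a screen recording into single frames sampled at a set time interval."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0            # fall back if FPS metadata is missing
    step = max(1, int(round(fps * interval_seconds)))  # frames to skip between samples
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            timestamp = index / fps                    # recording time of this frame, in seconds
            frames.append((timestamp, frame))
        index += 1
    cap.release()
    return frames

# Example (illustrative path): one sampled frame per second of screen recording.
# sampled_frames = cut_frames("terminal_recording.mp4", interval_seconds=1.0)
```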
A semantic label is a textual description of an operation behavior of the end user.
A processing unit contains 1 frame or several consecutive frames of image data and corresponds one-to-one to a single operation behavior of the terminal user; the image data contained in the processing unit is analyzed to obtain its semantic label, which describes that operation behavior.
Deep learning has been studied in many fields, including computer vision, speech recognition, natural language processing and game playing. In this method, deep learning is introduced to apply computer vision processing to the terminal video data; the visual analysis service is continuously optimized through model training, so that semantic labels corresponding to user behaviors can be better inferred from the captured image information.
Further, the semantic label of the processing unit and the screen recording time information of the processing unit form an operation field of the processing unit;
the operation fields of a plurality of the processing units constitute an operation sequence of the terminal.
The screen recording time of a processing unit is the recording time of its first frame. Assuming that the semantic label of a processing unit is "send mail attachment" and its screen recording time is "2018.11.21 11:02:28", the corresponding operation field is "2018.11.21 11:02:28 send mail attachment". The operation sequence of the terminal consists of a plurality of such operation fields, and from it the end user's operation track, with time as the main thread, can be seen clearly.
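As a minimal sketch, assuming a simple in-memory representation, an operation field and the terminal's operation sequence could be modelled as follows (the first entry is illustrative; the second reuses the example from the text):

```python
from dataclasses import dataclass

@dataclass
class OperationField:
    """Semantic label of a processing unit plus its screen recording time."""
    recorded_at: str   # recording time of the first frame of the processing unit
    label: str         # semantic label, e.g. "send mail attachment"

    def __str__(self) -> str:
        return f"{self.recorded_at} {self.label}"

# The operation sequence of the terminal is an ordered list of operation fields.
operation_sequence = [
    OperationField("2018.11.21 11:01:47", "log in to mail system"),   # illustrative entry
    OperationField("2018.11.21 11:02:28", "send mail attachment"),    # example from the text
]

for field in sorted(operation_sequence, key=lambda f: f.recorded_at):
    print(field)   # prints e.g. "2018.11.21 11:02:28 send mail attachment"
```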
Further, the analyzing the processing unit to obtain the semantic tag of the processing unit specifically includes:
tracking a cursor of the terminal based on the processing unit to obtain a detection result;
identifying characters within a certain range by taking the cursor as the center based on the processing unit, and acquiring an identification result;
and acquiring the semantic label of the processing unit based on the detection result and the identification result.
Further, the cursor includes: a mouse cursor and a keyboard cursor.
The user's behavior at the terminal is carried out through mouse and keyboard operations, and applications and information are presented in windows. Therefore the mouse cursor and the keyboard cursor in the video are detected and tracked; the text within a certain area around each cursor is recognized, as are window titles, information-prompt pop-up windows and text in floating layers; and on this basis NLP technology is used to obtain the operation sequence of the terminal.
The detection result consists of the coordinates (X1, Y1) of the mouse cursor and the coordinates (X2, Y2) of the keyboard cursor in each frame of image after frame cutting.
Deep learning models are built and trained on large amounts of data to ensure that the mouse cursor coordinates (X1, Y1) and the keyboard cursor coordinates (X2, Y2) can be obtained accurately. The R-FCN algorithm and the Siamese-RPN algorithm can be used to detect and track the mouse cursor and the keyboard cursor.
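The patent names R-FCN and Siamese-RPN for cursor detection and tracking; as a much simpler illustrative stand-in (not the patent's approach), the sketch below locates a cursor in a frame by OpenCV template matching, assuming a cursor template image is available:

```python
import cv2

def locate_cursor(frame_bgr, cursor_template_bgr, min_score: float = 0.7):
    """Return the (x, y) centre of the best template match for the cursor, or None if below threshold."""
    frame_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    template_gray = cv2.cvtColor(cursor_template_bgr, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(result)
    if max_score < min_score:
        return None                                       # cursor not found in this frame
    h, w = template_gray.shape
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)     # centre of the matched region

# Example (illustrative files): locate the mouse cursor in each sampled frame.
# template = cv2.imread("mouse_cursor_template.png")
# detections = [locate_cursor(frame, template) for _, frame in sampled_frames]
```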
Optionally, the recognition result includes: in each frame of image, the text within a certain range centered on the mouse cursor coordinates (X1, Y1), and the text within a certain range centered on the keyboard cursor coordinates (X2, Y2).
Optionally, the recognition result further includes: in each frame of image, the window title and window coordinates where the mouse cursor is located, and the window title and window coordinates where the keyboard cursor is located.
Optionally, the recognition result further includes: in each frame of image, the text and coordinates of information-prompt pop-up windows, and the text and coordinates of floating layers.
Optionally, the recognition result further includes: the name of the system that the terminal has logged in to.
Optionally, the recognition result further includes: the name of the user who logged into the system.
The method of recognizing, based on the processing unit, the text within a certain range centered on the cursor and obtaining the recognition result comprises the following steps:
a certain range centered respectively on the mouse cursor coordinates (X1, Y1) and the keyboard cursor coordinates (X2, Y2) is determined as the cursor recognition range;
the window title region, the floating layer region and the like in the image are recognized by a deep learning model and used as the window recognition range;
the text within the cursor recognition range and the window recognition range is preprocessed by tilt correction, character segmentation and the like, and character recognition within these ranges is performed using OCR (Optical Character Recognition) technology.
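As a minimal sketch of this recognition step, assuming pytesseract as the OCR engine (the patent only specifies "OCR technology") and an illustrative region size; tilt correction and character segmentation are reduced here to simple thresholding:

```python
import cv2
import pytesseract   # example OCR engine; the patent only requires "OCR technology"

def recognize_text_near_cursor(frame_bgr, cursor_xy, half_size: int = 120):
    """OCR the text in a square region centred on the given cursor coordinates."""
    x, y = cursor_xy
    h, w = frame_bgr.shape[:2]
    x0, y0 = max(0, x - half_size), max(0, y - half_size)
    x1, y1 = min(w, x + half_size), min(h, y + half_size)
    region = frame_bgr[y0:y1, x0:x1]
    # Simple preprocessing; the patent additionally mentions tilt correction and character segmentation.
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary, lang="chi_sim+eng").strip()

# Example: text around the mouse cursor located in the previous step.
# text = recognize_text_near_cursor(frame, (mouse_x, mouse_y))
```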
The semantic label of the processing unit is then obtained from the detection result and the recognition result using NLP technology.
Optionally, semantic labels are added to processing units manually, or the labels produced by the system are manually spot-checked and corrected. The manually added and corrected semantic labels are used in the system for model training and updating.
Further, the monitoring the terminal according to the target identification content specifically includes:
and performing sensitive data analysis on the operation sequence to acquire potential violation behaviors.
A sensitive-word lexicon is established, and a sensitive-data recognition engine analyzes the terminal's operation behavior sequence, identifies operation sequences that contain sensitive words, and thereby discovers potential violations by the terminal user.
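As a minimal sketch of the sensitive-data check, assuming the operation sequence is available as (timestamp, semantic label) pairs; the lexicon entries and labels below are illustrative:

```python
# Illustrative sensitive-word lexicon; a real deployment would load a maintained word bank.
SENSITIVE_WORDS = {"customer list", "export database", "salary report", "id number"}

def find_potential_violations(operation_sequence):
    """Return (timestamp, label, matched_words) for operation fields containing sensitive words.

    `operation_sequence` is a list of (timestamp, semantic_label) pairs.
    """
    hits = []
    for recorded_at, label in operation_sequence:
        matched = [w for w in SENSITIVE_WORDS if w in label.lower()]
        if matched:
            hits.append((recorded_at, label, matched))
    return hits

# Example with illustrative labels:
sequence = [("2018.11.21 11:02:28", "send mail attachment customer list"),
            ("2018.11.21 11:05:10", "query monthly report")]
print(find_potential_violations(sequence))
# -> [('2018.11.21 11:02:28', 'send mail attachment customer list', ['customer list'])]
```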
Further, the monitoring the terminal according to the target identification content further includes:
and acquiring a user behavior track based on the operation sequence of the terminal.
If potential violations exist, or the operations of a specific terminal or system over a period of time need to be inspected, a user behavior profile can be built from the terminal's operation sequence and the user's behavior analyzed further through that profile; the operation sequence can also be correlated with other feature data to extract the track of the preceding operations and to trace the IP address of the terminal corresponding to the operation sequence, the identity of the operating user, and so on.
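As a minimal sketch of assembling behavior tracks with tracing data, assuming illustrative record and feature schemas (terminal ID, IP address, user identity):

```python
from collections import defaultdict

def build_behavior_tracks(operation_records, terminal_info):
    """Group operation fields into per-terminal behavior tracks and attach tracing data.

    `operation_records`: list of dicts with "terminal_id", "timestamp", "label" (illustrative schema).
    `terminal_info`: dict mapping terminal_id to correlated feature data such as IP and user identity.
    """
    tracks = defaultdict(list)
    for record in operation_records:
        tracks[record["terminal_id"]].append((record["timestamp"], record["label"]))
    report = {}
    for terminal_id, steps in tracks.items():
        report[terminal_id] = {
            "trace": terminal_info.get(terminal_id, {}),   # e.g. {"ip": ..., "user": ...}
            "track": sorted(steps),                        # operation track ordered by time
        }
    return report

# Example with illustrative data:
records = [{"terminal_id": "T-01", "timestamp": "2018.11.21 11:02:28", "label": "send mail attachment"}]
info = {"T-01": {"ip": "10.0.0.12", "user": "alice"}}
print(build_behavior_tracks(records, info))
```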
Fig. 2 is a schematic structural diagram of a processing apparatus based on terminal screen recording according to an embodiment of the present invention, including:
a first obtaining unit 201, configured to obtain screen recording data of a terminal;
a second obtaining unit 202, configured to obtain target identification content according to the screen recording data;
and the monitoring unit 203 is configured to monitor the terminal according to the target identification content.
Further, the second obtaining unit 202 is specifically configured to:
and acquiring an operation sequence of the terminal according to the screen recording data.
Further, the second obtaining unit 202 is further configured to:
performing video frame cutting processing on the screen recording data to obtain a processing unit, wherein the processing unit is 1 frame or N consecutive frames of data, and N is a positive integer not less than 2;
and analyzing the processing unit to obtain the semantic label of the processing unit.
Further, the semantic label of the processing unit and the screen recording time information of the processing unit form an operation field of the processing unit;
the operation fields of a plurality of the processing units constitute an operation sequence of the terminal.
Further, the second obtaining unit 202 is further configured to:
tracking a cursor of the terminal based on the processing unit to obtain a detection result;
identifying characters within a certain range by taking the cursor as the center based on the processing unit, and acquiring an identification result;
and acquiring the semantic label of the processing unit based on the detection result and the identification result.
Further, the cursor includes: a mouse cursor and a keyboard cursor.
Further, the monitoring unit 203 is configured to perform sensitive data analysis on the operation sequence to obtain a potential violation.
Further, the monitoring unit 203 is configured to obtain a user behavior track based on the operation sequence of the terminal.
The present invention also provides a computer storage medium having computer-executable instructions stored thereon; when the computer-executable instructions are executed, the processing method based on terminal screen recording described above can be implemented.
Fig. 3 is a schematic flowchart of another processing method based on terminal screen recording according to an embodiment of the present invention. As shown in fig. 3, the user's operation behavior is screen-recorded by a screen recording tool on the terminal, the video is transmitted to the intelligent video analysis platform in real time, the platform analyzes it and outputs the user's operation sequence, and the operation sequence is written to a database so that the intelligent audit platform can subsequently analyze and audit the user's operation behavior; an entry in the operation sequence may, for example, be timestamped '2018.02.21 15:21:31'.
In recent years, deep learning has been widely developed and applied in academia and industry, achieving strong results in many fields such as computer vision, speech recognition, natural language processing and game playing, and even surpassing human performance in some areas. In this application, deep learning is introduced to apply computer vision processing to the terminal's screen recording data; based on model training, the visual analysis service is continuously optimized and improved, which addresses the limitations in the field of user behavior auditing.
Fig. 4 is an architecture diagram of a video intelligent analysis platform according to an embodiment of the present invention, divided into a hardware layer, a data layer, an algorithm layer, a service layer and a business layer. The hardware and data layers include the deep learning framework, image and video data, the GPU cluster, the CPU cluster and the storage cluster; the algorithm layer includes image recognition, scene understanding, quality assessment and transcoding analysis; the service layer includes automatic training, intelligent visual analysis and annotation; the business layer includes query, personnel profiles, early warning and behavior tracks.
Supported by machine learning and deep learning technologies, the method has the underlying capabilities of image recognition, scene understanding, quality evaluation and transcoding analysis, enabling an effective closed loop at the service layer of online intelligent visual analysis, annotation, and automatic training with model updating.
The intelligent visual analysis service provides real-time semantic analysis of video to extract the information it contains; the automatic training service updates the model in real time, continuously optimizing and improving the visual analysis service; the annotation service on the one hand receives the online semantic labels produced by the visual analysis service, and on the other hand supports manual annotation by a team of business experts.
The core of this application is to introduce computer vision technology based on artificial intelligence. Computer vision mainly processes images and videos to obtain three-dimensional information about the corresponding scenes, and involves three basic tasks:
image classification: identifying the type of content from the pictures and the videos;
target detection: identifying a target object in a picture or video and determining its position;
image segmentation: mainly semantic segmentation, which is accurate to the pixel level and divides the visual input into different semantically interpretable categories.
On top of these three basic tasks of computer vision, deeper technologies are built, including object tracking in video, topic description of images and videos, semantic understanding, and event detection in video. Above these technologies lie vertical application areas such as information compression, user profiling, search, recommendation systems and human-computer interaction, with upper-level applications including internet multimedia, smart home, driving, security, intelligent finance and medical robots.
The user's recorded screen operations can be converted into a time-ordered sequence of the user's operation actions by tracking the mouse cursor and the keyboard cursor and recognizing the text in a certain area around them. Specifically, the user's behavior at the terminal is carried out through mouse and keyboard operations, and applications and information are presented in windows; therefore the mouse cursor and the keyboard cursor in the video are detected and tracked, the text in a certain area around each cursor is recognized, as are window titles, information-prompt pop-up windows and text in floating layers, and on this basis natural language processing is applied to output the time sequence of the user's behavior.
Fig. 5 is a system framework diagram of a video intelligent analysis platform according to an embodiment of the present invention. As shown in fig. 5, the system framework of the video intelligent analysis platform is composed of three parts: a visual analysis service, a model training service, and a labeling service.
The visual analysis service processes the terminal's screen recording file in real time to obtain the user's recorded operations; the model training service performs model training and model evaluation on the screen recording files to obtain a model that reaches a certain accuracy; and the annotation service applies the model determined by the model training service to visually analyze the identified user's operations, obtains the operation sequence, has it manually spot-checked, and, after the data is corrected, adds it to the annotation platform, where the labels become annotated data used for model training and updating.
Fig. 6 is a processing flow chart of another processing method based on terminal video according to an embodiment of the present invention. As shown in fig. 6, screen recording software is installed, and the terminal's operation behavior is recorded by the terminal's video recording software to obtain a recording file. The recording file, together with existing recording files, is transmitted to a back-end analysis server. The analysis server performs video frame cutting, splitting the video into frame-by-frame pictures, and pushes the processed single-frame data to a video analysis big-data component, which performs deep-learning analysis. The resulting business-scene picture model is trained, picture analysis and model matching are performed, feature analysis is completed, and labels are added. The labeled data is pushed to a feature analysis component, user behavior profiling is performed, and, by matching against other features, the system intelligently recognizes whether the operation behavior involves data leakage. If a risk of data leakage arises, the time of the problem picture is recorded, correlation analysis is performed on the operation behavior, the track of the preceding operations is extracted, the address where the source data was generated is traced, the data is inspected to identify whether sensitive information is present, and the problem is reported. The overall analysis method comprises a complete audit solution covering terminal supervision, content identification, behavior tracing and the like; it is more flexible than traditional solutions and reduces the false-alarm rate of rule-based auditing.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.
Claims (10)
1. A processing method based on terminal screen recording is characterized by comprising the following steps:
acquiring screen recording data of a terminal;
acquiring target identification content according to the screen recording data;
and monitoring the terminal according to the target identification content.
2. The method of claim 1, wherein:
the obtaining of the target identification content according to the screen recording data comprises:
and acquiring an operation sequence of the terminal according to the screen recording data.
3. The method of claim 2, wherein:
the obtaining of the target identification content according to the screen recording data comprises:
performing video frame cutting processing on the screen recording data to obtain a processing unit, wherein the processing unit is 1 frame or N consecutive frames of data, and N is a positive integer not less than 2;
and analyzing the processing unit to obtain the semantic label of the processing unit.
4. The method of claim 3, wherein:
the semantic label of the processing unit and the screen recording time information of the processing unit form an operation field of the processing unit;
the operation fields of a plurality of the processing units constitute an operation sequence of the terminal.
5. The method of claim 3, wherein:
the analyzing the processing unit to obtain the semantic label of the processing unit specifically includes:
tracking a cursor of the terminal based on the processing unit to obtain a detection result;
identifying characters within a certain range by taking the cursor as the center based on the processing unit, and acquiring an identification result;
and acquiring the semantic label of the processing unit based on the detection result and the identification result.
6. The method of claim 5, wherein:
the cursor includes: a mouse cursor and a keyboard cursor.
7. The method of claim 2, wherein:
the monitoring the terminal according to the target identification content specifically includes:
and performing sensitive data analysis on the operation sequence to acquire potential violation behaviors.
8. The method of claim 2, wherein:
the monitoring the terminal according to the target identification content further comprises:
and acquiring a user behavior track based on the operation sequence of the terminal.
9. A processing device based on terminal screen recording is characterized in that the device comprises:
the first acquisition unit is used for acquiring screen recording data of the terminal;
the second acquisition unit is used for acquiring target identification content according to the screen recording data;
and the monitoring unit is used for monitoring the terminal according to the target identification content.
10. A computer storage medium having stored thereon computer-executable instructions; the computer-executable instructions, when executed, enable the method provided by any one of claims 1 to 8 to be carried out.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811375939.0A CN111199172B (en) | 2018-11-19 | 2018-11-19 | Terminal screen recording-based processing method and device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811375939.0A CN111199172B (en) | 2018-11-19 | 2018-11-19 | Terminal screen recording-based processing method and device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111199172A true CN111199172A (en) | 2020-05-26 |
CN111199172B CN111199172B (en) | 2024-08-13 |
Family
ID=70745917
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811375939.0A Active CN111199172B (en) | 2018-11-19 | 2018-11-19 | Terminal screen recording-based processing method and device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111199172B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111885303A (en) * | 2020-07-06 | 2020-11-03 | 雍朝良 | Active tracking recording and shooting visual method |
CN111931571A (en) * | 2020-07-07 | 2020-11-13 | 华中科技大学 | Video character target tracking method based on online enhanced detection and electronic equipment |
CN112866558A (en) * | 2020-11-04 | 2021-05-28 | 苏州臻迪智能科技有限公司 | Operation method of electronic equipment, control method of holder and holder system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008234085A (en) * | 2007-03-19 | 2008-10-02 | Sega Corp | Information display device, information display method, information display program and recording medium |
CN102724554A (en) * | 2012-07-02 | 2012-10-10 | 西南科技大学 | Scene-segmentation-based semantic watermark embedding method for video resource |
CN105049790A (en) * | 2015-06-18 | 2015-11-11 | 中国人民公安大学 | Video monitoring system image acquisition method and apparatus |
CN108024079A (en) * | 2017-11-29 | 2018-05-11 | 广东欧珀移动通信有限公司 | Record screen method, apparatus, terminal and storage medium |
CN108038396A (en) * | 2017-12-05 | 2018-05-15 | 广东欧珀移动通信有限公司 | Record screen method, apparatus and terminal |
-
2018
- 2018-11-19 CN CN201811375939.0A patent/CN111199172B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008234085A (en) * | 2007-03-19 | 2008-10-02 | Sega Corp | Information display device, information display method, information display program and recording medium |
CN102724554A (en) * | 2012-07-02 | 2012-10-10 | 西南科技大学 | Scene-segmentation-based semantic watermark embedding method for video resource |
CN105049790A (en) * | 2015-06-18 | 2015-11-11 | 中国人民公安大学 | Video monitoring system image acquisition method and apparatus |
CN108024079A (en) * | 2017-11-29 | 2018-05-11 | 广东欧珀移动通信有限公司 | Record screen method, apparatus, terminal and storage medium |
CN108038396A (en) * | 2017-12-05 | 2018-05-15 | 广东欧珀移动通信有限公司 | Record screen method, apparatus and terminal |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111885303A (en) * | 2020-07-06 | 2020-11-03 | 雍朝良 | Active tracking recording and shooting visual method |
CN111931571A (en) * | 2020-07-07 | 2020-11-13 | 华中科技大学 | Video character target tracking method based on online enhanced detection and electronic equipment |
CN111931571B (en) * | 2020-07-07 | 2022-05-17 | 华中科技大学 | Video character target tracking method based on online enhanced detection and electronic equipment |
CN112866558A (en) * | 2020-11-04 | 2021-05-28 | 苏州臻迪智能科技有限公司 | Operation method of electronic equipment, control method of holder and holder system |
Also Published As
Publication number | Publication date |
---|---|
CN111199172B (en) | 2024-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108304793B (en) | Online learning analysis system and method | |
CN113673459B (en) | Video-based production and construction site safety inspection method, system and equipment | |
US10963700B2 (en) | Character recognition | |
CN111199172B (en) | Terminal screen recording-based processing method and device and storage medium | |
CN112861673A (en) | False alarm removal early warning method and system for multi-target detection of surveillance video | |
CN115205764B (en) | Online learning concentration monitoring method, system and medium based on machine vision | |
CN117557414B (en) | Cultivated land supervision method, device, equipment and storage medium based on automatic interpretation of remote sensing image | |
US20200026955A1 (en) | Computation of Audience Metrics Focalized on Displayed Content | |
CN111914649A (en) | Face recognition method and device, electronic equipment and storage medium | |
CN110991246A (en) | Video detection method and system | |
CN116419059A (en) | Automatic monitoring method, device, equipment and medium based on behavior label | |
CN117112814A (en) | False media content mining and identification system and identification method thereof | |
US10949705B2 (en) | Focalized behavioral measurements in a video stream | |
CN111191498A (en) | Behavior recognition method and related product | |
CN114067396A (en) | Vision learning-based digital management system and method for live-in project field test | |
CN116824459B (en) | Intelligent monitoring and evaluating method, system and storage medium for real-time examination | |
CN113128414A (en) | Personnel tracking method and device, computer readable storage medium and electronic equipment | |
CN113011300A (en) | Method, system and equipment for AI visual identification of violation behavior | |
US20190243854A1 (en) | Analysis of Operator Behavior Focalized on Machine Events | |
CN113989499B (en) | Intelligent alarm method in bank scene based on artificial intelligence | |
CN116259104A (en) | Intelligent dance action quality assessment method, device and system | |
Berkovskyi et al. | CREATION OF INTELLIGENT SYSTEMS FOR ANALYZING SUPERMARKET VISITORS TO IDENTIFY CRIMINAL ELEMENTS | |
US20230177880A1 (en) | Device and method for inferring interaction relathionship between objects through image recognition | |
CN110674269A (en) | Cable information management and control method and system | |
CN118692028B (en) | Iron tower bird nest monitoring method and system based on multi-mode large model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |