CN115002511B - Live video evidence obtaining method and device, electronic equipment and readable storage medium


Info

Publication number
CN115002511B
CN115002511B (application CN202210915115.8A)
Authority
CN
China
Prior art keywords: video, target, forensics, robot, task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210915115.8A
Other languages
Chinese (zh)
Other versions
CN115002511A (en)
Inventor
廖万里
金卓
欧阳博文
苏鹏
李土茂
李露慧
向链
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsware Information Technology Co Ltd
Original Assignee
Zhuhai Kingsware Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsware Information Technology Co Ltd
Priority to CN202210915115.8A
Publication of CN115002511A
Application granted
Publication of CN115002511B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a live video forensics method and apparatus, an electronic device and a storage medium. The method comprises the following steps: acquiring a first task, and selecting a target robot according to a first instruction in the first task; configuring the target robot according to configuration parameters in the first task, wherein the configuration parameters comprise room parameters, recording duration, timing start, storage position, cycle time and start parameters; obtaining a forensic video through the room parameters in the configuration parameters, and recording the forensic video with the target robot to obtain a target video; and identifying the target video to obtain the illegal part, marking the illegal part and displaying it on a forensics interface, so that automatic forensics of live video is achieved. The method can be widely applied in the field of live video forensics.

Description

Live video evidence obtaining method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the field of live video forensics, in particular to a live video forensics method and device, electronic equipment and a readable storage medium.
Background
The live broadcast industry is currently in a period of explosive growth, and large numbers of new entrants are rushing in. Because the industry develops rapidly and its entry threshold is low, content that violates platform regulations frequently appears, luring viewers into live rooms and prompting them to send large quantities of gifts. In order to keep purifying the online ecological environment and build a healthy network space, the live broadcast industry spends a great deal of manpower on supervision and evidence collection. The common approach still relies on large amounts of manpower to manually operate multiple live broadcast applications across many time periods throughout the day, continuously switch room numbers, record live pictures, and manually judge whether the streamers' content is legal and compliant. This is cumbersome and time-consuming, the manual monitoring hours are irregular, and great trouble is caused for the platforms.
Disclosure of Invention
In view of this, embodiments of the present invention provide a live video forensics method and apparatus, an electronic device, and a readable storage medium, so as to achieve automatic forensics of live video.
The invention provides a live video forensics method in a first aspect, which comprises the following steps: acquiring a first task, and selecting a target robot according to a first instruction in the first task; configuring the target robot according to configuration parameters in the first task, wherein the configuration parameters comprise room parameters, recording duration, timing start, storage positions, cycle time and start parameters; obtaining a forensic video through room parameters in the configuration parameters, and recording the forensic video by the target robot to obtain a target video; and identifying the target video to obtain an illegal part, marking the illegal part and displaying the illegal part to a forensics interface, wherein the forensics interface is used for displaying the forensics condition of the target video.
According to some embodiments of the invention, the acquiring a first task, selecting a target robot according to a first instruction in the first task, comprises: and if the target robot specified by the first instruction does not exist, displaying a customization request to the forensics interface.
According to some embodiments of the present invention, the obtaining a forensic video according to the room parameters in the configuration parameters, and the recording of the forensic video by the target robot to obtain a target video, includes: determining that the target robot has entered the live room of the forensic video, and starting a reverse engineering system to record the forensic video.
According to some embodiments of the present invention, after obtaining a forensic video according to a room parameter in the configuration parameters and recording the forensic video by the target robot to obtain a target video, the method includes: storing the target video to a target location, wherein the target location comprises cloud storage or a local path.
According to some embodiments of the present invention, the identifying the target video to obtain an illegal portion, marking the illegal portion, and displaying the illegal portion on a forensics interface includes: displaying the state of the target video on the forensics interface through fields, wherein the fields comprise: live room name, online status, start recording time, end recording time, play, download, violation flag.
According to some embodiments of the present invention, the identifying the target video to obtain an illegal part, marking the illegal part and displaying the illegal part on a forensics interface includes at least one of: performing target detection and identification on the target video to obtain a target state of a target object in the target video, wherein the target state comprises: location, category, shape, size; or, performing target tracking on the target video, positioning a motion track of the target object, performing time sequence action positioning on the target object, predicting a start-stop time sequence interval and a category of the action of the target object to obtain a segment video, performing action identification on the target video, and determining a specific action category of the target object in the segment video; or carrying out voice recognition on the target video, and converting the voice content in the target video into the corresponding text.
A second aspect of the present invention provides a live video forensics apparatus, comprising: a first module, configured to acquire a first task and select a target robot according to a first instruction in the first task; a second module, configured to configure the target robot according to configuration parameters in the first task, wherein the configuration parameters comprise room parameters, recording duration, timing start, storage position, cycle time and start parameters; a third module, configured to obtain a forensic video through the room parameters in the configuration parameters and have the target robot record the forensic video to obtain a target video; and a fourth module, configured to identify the target video to obtain an illegal part, mark the illegal part and display it on a forensics interface, wherein the forensics interface is used for displaying the forensics status of the target video.
A third aspect of the invention provides an electronic device comprising a processor and a memory; the memory is used for storing programs; the processor executes the program to implement the live video forensics method as described in any one of the above.
The electronic equipment provided by the embodiment of the invention at least has the same beneficial effects as the live video evidence obtaining method.
A fourth aspect of the present invention provides a computer-readable storage medium storing a program for execution by a processor to implement the live video forensics method as described in any one of the above.
The computer-readable storage medium according to the embodiment of the invention has at least the same beneficial effects as the live video forensics method.
The embodiment of the invention also discloses a computer program product or a computer program, which comprises computer instructions, and the computer instructions are stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and the computer instructions executed by the processor cause the computer device to perform the foregoing method.
According to the embodiments of the invention, the target robot is obtained through the first task and configured; the configured target robot records the forensic video to obtain the target video; the illegal part is obtained by identifying the target video, then marked and displayed on the forensics interface. The method realizes automatic forensics of the forensic video, so that evidence no longer needs to be collected manually and labor costs are reduced.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating steps of a live video forensics method according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a live video forensics apparatus provided by an embodiment of the present invention;
fig. 3 is a schematic block diagram of an apparatus of an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to illustrate the present application and not to limit it.
The live broadcast industry is currently in a period of explosive growth; while it enriches the network ecosystem, it also brings many vulgar online phenomena. In order to keep purifying the online ecological environment and build a healthy network space, the live broadcast industry currently has to spend a large amount of manpower on supervision and evidence collection.
However, the currently adopted approach still relies on large amounts of manpower to manually operate multiple live broadcast applications across many time periods throughout the day, continuously switch room numbers, record live pictures, and manually judge whether the streamers' content is legal and compliant. This approach is cumbersome and time-consuming, the manual monitoring hours are irregular, and very large human resources are required. Meanwhile, behavior that violates platform regulations mostly peaks in the evening and in the early morning, precisely when the staff responsible for monitoring and evidence collection struggle to stay alert, which causes great trouble for the platforms.
Therefore, it is necessary to realize automatic video forensics for live broadcast platforms. The method is applicable to live broadcast platforms and to related fields that require video auditing.
Referring to fig. 1, a method of an embodiment of the present invention includes:
and S100, acquiring a first task, and selecting a target robot according to a first instruction in the first task.
Specifically, a first task is obtained. The first task is submitted through a forensics interface, and a user may set the first task through the forensics interface, for example by specifying which live broadcast platform is to be examined and setting the corresponding configuration parameters. Assuming the user selects forensics of "Douyin" through the forensics interface, the first task is to collect evidence from Douyin. The apparatus obtains the first task and selects the target robot according to the first instruction in the first task. It can be understood that different live broadcast platforms correspond to different robots, and each live broadcast platform corresponds one-to-one with a target robot. In addition, the robot may be an RPA (Robotic Process Automation) robot.
In another embodiment, step S100 further comprises the steps of: and if the target robot specified by the first instruction does not exist, displaying a customization request to a forensics interface.
Specifically, if it is determined that the target robot specified by the first instruction does not exist, this means that no robot has yet been set up for the corresponding platform, and a customization request can be sent to the forensics interface at this point. This makes it convenient for developers to add the corresponding robot according to the customization request, improves the user experience, and allows the mechanism to be improved continuously based on feedback.
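As a rough illustration of step S100 together with the customization fallback, the following Python sketch shows one way the platform-to-robot lookup could be organized. It is only a minimal sketch under assumed names: ForensicsTask, ROBOT_REGISTRY and notify_forensics_interface are illustrative placeholders, not part of the patent.

```python
# Minimal sketch of task parsing and robot selection (illustrative only).
from dataclasses import dataclass

@dataclass
class ForensicsTask:
    platform: str   # first instruction: which live platform to investigate
    config: dict    # room parameters, recording duration, timing start, storage, cycle, start parameters

# One robot per live platform (one-to-one mapping described in the embodiment).
ROBOT_REGISTRY = {
    "douyin": "DouyinRPARobot",
    # further platforms would be registered here by developers
}

def notify_forensics_interface(message: str) -> None:
    # Stand-in for pushing a message to the forensics interface.
    print(message)

def select_robot(task: ForensicsTask):
    """Return the robot registered for the task's platform, or raise a
    customization request on the forensics interface if none exists."""
    robot = ROBOT_REGISTRY.get(task.platform.lower())
    if robot is None:
        # No robot configured for this platform: surface a customization request
        # instead of failing silently.
        notify_forensics_interface(f"customization request: no robot for {task.platform}")
        return None
    return robot
```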
And S200, configuring the target robot according to configuration parameters in the first task, wherein the configuration parameters comprise room parameters, recording duration, timing start, storage positions, cycle time and start parameters.
Specifically, the configuration parameters in the first task are obtained. The configuration parameters include room parameters, whose values are the room names or IDs to be recorded; room names or ID numbers can be imported in batches from Excel or text files, and both the room name and the ID number are unique. The recording duration falls back to a default value if it is not actively set. The timing start allows the robot's start time to be customized according to the user's needs. The storage position can be set to cloud storage or a local path according to the user's needs; when it is set to cloud storage, the robot automatically logs in to the corresponding cloud storage. The cycle time specifies whether recording should repeat cyclically after a single recording task completes. The configuration parameters also include start parameters.
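To make the parameter set concrete, the sketch below shows one possible in-memory representation of the configuration described above. The field names and the default values are assumptions introduced purely for illustration and are not the patent's data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RecordingConfig:
    rooms: List[str]                       # room names or IDs (unique), batch-importable from Excel/text
    duration_minutes: int = 30             # recording duration; assumed default when not actively set
    scheduled_start: Optional[str] = None  # timing start, e.g. "00:00"; None means start immediately
    storage: str = "local"                 # "local" or "cloud"
    storage_path: str = "./recordings"     # local path, or bucket/prefix when storage == "cloud"
    cycle: bool = False                    # whether to record again after each task completes
    startup_args: dict = field(default_factory=dict)  # start parameters passed to the robot

# Example configuration matching the worked example later in the description:
config = RecordingConfig(
    rooms=["0010"],
    duration_minutes=5,
    scheduled_start="00:00",
    storage="cloud",
    cycle=True,
)
```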
And step S300, obtaining a forensic video through room parameters in the configuration parameters, and recording the forensic video by the target robot to obtain a target video.
Specifically, according to the room parameters in the configuration parameters, i.e. the room name or ID number, the video to be examined, namely the forensic video, is determined, and the forensic video is recorded to obtain the target video.
In another embodiment, step S300 further includes determining that the target robot has entered the live room of the forensic video, and starting a reverse engineering system to record the forensic video.
Specifically, the configuration parameters also include the start parameters. After the target robot enters the room, it generates a request; in response to this request, the system calls the corresponding cmd command to start the reverse engineering system, and the reverse engineering system records the forensic video in the background. No additional recording software is needed at this point, which improves forensics efficiency, automates the recording, and avoids the tedious steps of manual recording.
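The following sketch illustrates the general idea of launching a background recording through a shell command, in the spirit of the cmd call described above. The patent does not name a specific recording tool, so ffmpeg, the function name and the parameters are assumptions used only for illustration.

```python
import subprocess
from pathlib import Path

def start_background_recording(stream_url: str, out_file: Path, duration_s: int) -> subprocess.Popen:
    """Launch a background recording process via a shell command; ffmpeg is used
    here purely as a stand-in for the patent's unspecified reverse engineering system."""
    out_file.parent.mkdir(parents=True, exist_ok=True)
    cmd = [
        "ffmpeg",
        "-y",                   # overwrite any existing file
        "-i", stream_url,       # live stream captured after the robot enters the room
        "-t", str(duration_s),  # stop after the configured recording duration
        "-c", "copy",           # copy the stream without re-encoding
        str(out_file),
    ]
    # Popen returns immediately, so the recording runs in the background
    # while the robot continues with other forensics steps.
    return subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
```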
In another embodiment, after step S300, the following steps are further included:
and storing the target video to a target position, wherein the target position can be cloud storage or a local path, and the target position is obtained according to the storage position in the configuration parameters.
And step S400, identifying the target video to obtain an illegal part, marking the illegal part and displaying the illegal part to a forensics interface, wherein the forensics interface is used for displaying the forensics condition of the target video.
Specifically, the illegal part of the target video is identified, marked and displayed on the forensics interface. At this point the automatic forensic analysis of the live video is complete: by checking the illegal parts on the forensics interface, the user can confirm which portions are suspected violations and lock onto the specific room number, without having to watch each live broadcast one by one for long periods.
In another embodiment, step S400 further comprises the steps of: displaying the state of the target video on the forensics interface through fields, wherein the fields comprise: live room name, online status, start recording time, end recording time, play, download, violation flag.
Specifically, after the illegal part is marked, it is displayed on the forensics interface as fields, where the fields include the live room name, online status, recording start time, recording end time, play, download and violation flag. The online status shows the state of the live room at recording time, which helps the user confirm the target video; the system supports playing the target video directly on the forensics interface, and supports downloading target videos singly or in batches. It should be noted that when the target video is identified, its content is also classified: once a violation is confirmed, a violation flag is attached, and according to the category the flag may be pornography-related, terrorism-related, drug-related, and so on. All of this is displayed on the forensics interface. Through the forensics interface the user can view the list of recorded videos associated with the current recording task, and the recording status of each video is shown in the list via key fields.
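As an illustration of how one entry of that recorded-video list might be structured, here is a minimal sketch; the field names and the flag categories are assumptions for illustration only.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ViolationFlag(Enum):
    NONE = "none"
    PORNOGRAPHY = "pornography-related"
    TERRORISM = "terrorism-related"
    DRUGS = "drug-related"

@dataclass
class ForensicsRecord:
    live_room_name: str
    online_status: str            # state of the live room at recording time
    start_recording_time: str
    end_recording_time: str
    play_url: str                 # link used to play the target video on the interface
    download_url: str             # single or batch download entry point
    violation_flag: ViolationFlag = ViolationFlag.NONE
    violation_note: Optional[str] = None  # e.g. timestamp range of the marked portion
```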
In another embodiment, the method for identifying the violation portion by identifying the target video in step S400 further includes at least one of:
carrying out target detection and identification on a target video to obtain a target state of a target object in the target video, wherein the target state comprises the following steps: location, category, shape, size;
or, performing target tracking on the target video, positioning the motion track of the target object, performing time sequence action positioning on the target object, predicting the starting and stopping time sequence interval and the type of the action of the target object to obtain a segment video, performing action identification on the target video, and determining the specific action type of the target object in the segment video;
or carrying out voice recognition on the target video, and converting the voice content in the target video into the corresponding text.
Specifically, after recording yields the target video, the target video needs to be identified to determine its illegal parts. One identification method is target detection and recognition: all objects of interest in the video are found and their categories and locations are determined, so that it can be confirmed which category an image or image region belongs to, together with its location, size and shape. This method obtains the target state of the target object in the target video and is mainly aimed at the visual part of the live broadcast, for example whether the target object's clothing is in violation, or whether an illegal picture appears in the live interface projected by a game streamer.
In addition, target tracking can be performed on the target video, so that the motion trajectory of the target object can be located even when the target object keeps changing or is occluded. On top of target tracking, action recognition and time sequence action positioning are combined to judge the actions in the live broadcast. The goal of action recognition is to recognize the actions occurring in the video, typically human actions. A video can be viewed as a data structure consisting of a set of image frames arranged in temporal order, having one more temporal dimension than a single image. Action recognition therefore not only analyzes the content of each frame, but also mines cues from the timing information between frames. Note that the videos used for action recognition are usually already clipped, so action recognition reduces to a pure classification problem: each clip contains one definite action, the duration is short, and there is a uniquely determined action category. In time sequence action positioning, by contrast, the video is generally unclipped and long; an action usually occupies only a short time span, and the video may contain several actions or none at all (i.e. it belongs to the background class). Time sequence action positioning must therefore predict not only which actions the video contains, but also their start and end times. Accordingly, time sequence action positioning is used to obtain the start-stop interval and category of the action, yielding a rough segment video; action recognition then accurately determines the specific category within the segment video, which is compared with the catalogue of illegal action categories to decide whether the action is a violation. This method is mainly aimed at the action part of the live broadcast: it determines whether the streamer performs illegal actions and classifies the violation type.
In addition, speech recognition can be performed on the target video: the speaker's voice content is captured and converted into corresponding text, which is then compared against illegal and sensitive words to confirm whether the voice content complies with the regulations. This identification method is mainly aimed at the audio part of the live video.
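The three recognition routes above can be combined into a single analysis pass over a recorded video. The sketch below only illustrates that dispatch structure: the detector, tracker, action classifier and sensitive-word list are assumed placeholder interfaces, not specific models from the patent.

```python
from typing import List, Dict

SENSITIVE_WORDS = {"example_banned_word"}   # placeholder word list

def analyse_target_video(frames: List, audio_text: str,
                         detector, tracker, action_classifier) -> List[Dict]:
    """Run the three recognition routes described above and collect violation marks.
    detector/tracker/action_classifier are injected components with assumed interfaces."""
    violations = []

    # 1. Target detection and recognition: per-frame object category, location, shape, size.
    for idx, frame in enumerate(frames):
        for obj in detector(frame):                   # assumed: yields objects with a .category attribute
            if obj.category in {"prohibited_clothing", "prohibited_image"}:
                violations.append({"type": "visual", "frame": idx, "category": obj.category})

    # 2. Target tracking + time sequence action positioning + action recognition.
    for segment in tracker(frames):                   # assumed: yields segments with .start, .end, .clip
        action = action_classifier(segment.clip)      # assumed: returns an action label
        if action in {"prohibited_action"}:
            violations.append({"type": "action", "start": segment.start,
                               "end": segment.end, "category": action})

    # 3. Speech recognition: compare transcribed text against sensitive words.
    hits = [w for w in SENSITIVE_WORDS if w in audio_text]
    if hits:
        violations.append({"type": "speech", "words": hits})

    return violations
```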
The application can also maintain the robots periodically through automatic updates, so that they adapt to the continual version updates of the various live video apps. To prevent an ongoing forensics task from being affected by an automatic update, the system checks whether a new robot template is available each time it starts, which keeps the robots running smoothly during forensics.
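A startup-time template check of that kind could look roughly like the following sketch; the version file layout and the function name are assumptions.

```python
import json
from pathlib import Path

TEMPLATE_DIR = Path("./robot_templates")        # assumed local template store
VERSION_FILE = TEMPLATE_DIR / "version.json"

def check_for_template_update(latest_remote_version: str) -> bool:
    """Compare the locally installed robot template version with the latest
    available one at startup; return True if an update should be applied
    before any new forensics task starts."""
    if not VERSION_FILE.exists():
        return True
    local_version = json.loads(VERSION_FILE.read_text()).get("version", "")
    return local_version != latest_remote_version
```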
A specific example can thus be given: through the forensics interface, the first task is set to collect evidence from "Douyin Live", the room number is set to 0010, the recording duration is set to 5 minutes, the start is scheduled for 0:00 a.m., the storage is set to cloud storage, and cyclic recording is enabled after each recording completes. The target robot (the Douyin robot) is obtained according to the first instruction of the first task and configured according to the configuration parameters of the first task. The Douyin robot obtains the forensic video of room 0010 according to the room number and records it to obtain the recorded video, which is stored at the target location and retrieved from there when needed. The recorded video is then identified to confirm whether it contains an illegal part; the illegal part is marked, and the final marking result is displayed on the forensics interface.
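Tying the earlier sketches together, an end-to-end run of the example above might look like the following. This is purely illustrative and reuses the hypothetical helpers defined in the previous sketches; the stream URL and the component wiring are likewise assumptions.

```python
from pathlib import Path

def run_forensics_task(task, detector, tracker, action_classifier):
    """Illustrative end-to-end flow: select robot, record, store, analyse, display."""
    robot = select_robot(task)                      # step S100 (sketch above)
    if robot is None:
        return                                      # customization request already raised
    # (in a real system the selected robot would now drive the live app; omitted here)

    cfg = RecordingConfig(**task.config)            # step S200 (sketch above)
    for room in cfg.rooms:
        out_file = Path(cfg.storage_path) / f"{room}.mp4"
        stream_url = f"https://live.example.com/{room}"   # hypothetical stream URL
        proc = start_background_recording(stream_url, out_file,
                                          cfg.duration_minutes * 60)   # step S300
        proc.wait()                                 # wait until the recording finishes

        frames, audio_text = [], ""                 # frame decoding / ASR omitted in this sketch
        marks = analyse_target_video(frames, audio_text,
                                     detector, tracker, action_classifier)  # step S400
        print(f"room {room}: {len(marks)} violation mark(s)")
```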
In one aspect, referring to fig. 2, this embodiment provides a live video forensics apparatus, which at least includes: a first module 610, a second module 620, a third module 630, and a fourth module 640.
Specifically, the first module acquires a first task and selects the corresponding target robot according to a first instruction in the first task; the second module is connected to the first module, obtains the first task and the target robot from the first module, and configures the target robot according to the configuration parameters in the first task; the third module is connected to the second module, obtains the room parameters configured in the second module, obtains the forensic video according to the room parameters, and has the target robot record the forensic video to obtain the target video; the fourth module is connected to the third module, obtains the target video recorded in the third module, identifies the target video to obtain the illegal part, marks the illegal part and displays it on the forensics interface.
In another embodiment, the live video forensics apparatus is developed with RPA in combination with C#, realizing automated workflow operation of the forensics service. The application develops a robot that can automatically operate the specific live broadcast platform that needs to be used in a network live broadcast investigation. The forensics apparatus runs on a computer and is configured on the same network segment as the RPA agent. Depending on the scenario in which the live video forensics apparatus is used: when it is applied on a mobile phone, an RPA agent robot is installed on the investigation phone and configured on the same network segment as the platform; when it is applied on a computer, an Android emulator is installed on the investigation computer, the RPA agent robot is installed inside the emulator, and the same network segment as the platform is configured.
The invention realizes automatic recording of live broadcast content by combining a video forensics platform with robotic process automation. Platform staff can automate the entire network live-broadcast recording service simply by operating the video forensics platform, which reduces manpower input and task load and relieves the fatigue caused by overly long monitoring hours. AI data analysis is used to analyze the recorded live videos, achieving a truly unattended network robot. At the same time, by combining emulators with agents, more than one emulator can be run on a single computer and several live broadcast platforms can be handled simultaneously in the same time period, which saves both time and the cost of purchasing additional mobile phones.
Referring to fig. 3, the present embodiment provides an electronic device, which includes a processor 610 and a memory 620 coupled to the processor 610, where the memory 620 stores program instructions executable by the processor 610, and when the processor 610 executes the program instructions stored in the memory 620, the live video forensics method is implemented. The processor 610 may also be referred to as a Central Processing Unit (CPU). The processor 610 may be an integrated circuit chip having signal processing capabilities. The processor 610 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The general purpose processor may be a microprocessor, but in the alternative, the general purpose processor may be any conventional processor or the like. Memory 620 may include various components (e.g., machine-readable media) including, but not limited to, random access memory components, read-only components, and any combination thereof. The memory 620 may also include: instructions (e.g., software) (e.g., stored on one or more machine-readable media); the instruction implements the live video forensics method in the above embodiment. The electronic device has a function of loading and operating a software system for live video forensics provided by the embodiment of the present invention, for example, a Personal Computer (PC), a mobile phone, a smart phone, a Personal Digital Assistant (PDA), a wearable device, a Pocket PC (Pocket PC), a tablet Computer, and the like.
The present embodiment provides a computer-readable storage medium storing a program executed by a processor to implement the live video forensics method described above.
The embodiment of the invention also discloses a computer program product or a computer program, which comprises computer instructions, and the computer instructions are stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and executed by the processor to cause the computer device to perform the method illustrated in fig. 1.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more comprehensive understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be understood that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer given the nature, function, and interrelationships of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is to be determined from the appended claims along with their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part thereof which substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description of the specification, reference to the description of "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A live video forensics method is characterized by comprising the following steps:
acquiring a first task, and selecting a target robot according to a first instruction in the first task, wherein live broadcast platforms and target robots are in one-to-one correspondence, the target robot is maintained periodically through automatic updating so as to adapt to version updates of the live broadcast platform, and the target robot is an RPA robot;
configuring the target robot according to configuration parameters in the first task, wherein the configuration parameters comprise room parameters, recording duration, timing start, storage positions, cycle time and start parameters;
obtaining a forensic video through room parameters in the configuration parameters, and recording the forensic video by the target robot to obtain a target video;
and identifying the target video to obtain an illegal part, classifying and marking an illegal mark, marking the illegal part and displaying the illegal part to a forensics interface, wherein the forensics interface is used for displaying the forensics condition of the target video.
2. The live video forensics method according to claim 1, wherein the obtaining a first task, selecting a target robot according to a first instruction in the first task, and including:
and if the target robot specified by the first instruction does not exist, displaying a customization request to the forensics interface.
3. The live video forensics method according to claim 1, wherein the obtaining of the forensics video through the room parameters in the configuration parameters, and the recording of the forensics video by the target robot to obtain the target video comprise:
and acquiring a forensic video based on room parameters through the target robot, and starting a reverse engineering system to record the forensic video.
4. The live video forensics method according to claim 1, wherein after obtaining forensics video through room parameters in the configuration parameters and recording the forensics video by the target robot to obtain target video, the method includes:
storing the target video to a target location, wherein the target location comprises cloud storage or a local path.
5. The live video forensics method according to claim 1, wherein the identifying the target video to obtain an illegal part, classifying and marking an illegal mark, marking the illegal part and displaying the illegal part on a forensics interface comprises:
displaying the state of the target video on the forensics interface through fields, wherein the fields comprise: live room name, online status, start recording time, end recording time, play, download, or violation flag.
6. The live video forensics method according to claim 1, wherein the identifying of the target video to obtain an illegal part, classifying and marking an illegal mark, marking the illegal part and displaying the illegal part on a forensics interface comprises at least one of the following steps:
performing target detection and identification on the target video to obtain a target state of a target object in the target video, wherein the target state comprises: location, category, shape and size;
or, performing target tracking on the target video, positioning a motion track of the target object, performing time sequence action positioning on the target object, predicting a start-stop time sequence interval and a category of the action of the target object to obtain a segment video, performing action identification on the target video, and determining a specific action category of the target object in the segment video;
or carrying out voice recognition on the target video, and converting the voice content in the target video into the corresponding text.
7. A live video forensics device, comprising:
a first module, configured to acquire a first task and select a target robot according to a first instruction in the first task, wherein live broadcast platforms and target robots are in one-to-one correspondence, the target robot is maintained periodically through automatic updating so as to adapt to version updates of the live broadcast platform, and the target robot is an RPA robot;
the second module is used for configuring the target robot according to configuration parameters in the first task, wherein the configuration parameters comprise room parameters, recording duration, timing start, storage position, cycle time and start parameters;
the third module is used for acquiring a forensic video through room parameters in the configuration parameters, and the target robot records the forensic video to obtain a target video;
and the fourth module is used for identifying the target video to obtain an illegal part, classifying and marking an illegal mark, marking the illegal part and displaying the illegal part on a forensics interface, wherein the forensics interface is used for displaying the forensics condition of the target video.
8. An electronic device comprising a processor and a memory;
the memory is used for storing programs;
the processor executing the program implements the method of any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the storage medium stores a program, which is executed by a processor to implement the method according to any one of claims 1 to 6.
CN202210915115.8A 2022-08-01 2022-08-01 Live video evidence obtaining method and device, electronic equipment and readable storage medium Active CN115002511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210915115.8A CN115002511B (en) 2022-08-01 2022-08-01 Live video evidence obtaining method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN115002511A CN115002511A (en) 2022-09-02
CN115002511B true CN115002511B (en) 2023-02-28

Family

ID=83022443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210915115.8A Active CN115002511B (en) 2022-08-01 2022-08-01 Live video evidence obtaining method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115002511B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110099282A (en) * 2019-05-06 2019-08-06 海马云(天津)信息技术有限公司 The method and system that content in a kind of pair of live streaming type application is monitored
CN112533010A (en) * 2020-11-23 2021-03-19 北京北笛科技有限公司 Automatic evidence obtaining method and device for audio and video data in network live broadcast service
CN114786038A (en) * 2022-03-29 2022-07-22 慧之安信息技术股份有限公司 Low-custom live broadcast behavior monitoring method based on deep learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021179315A1 (en) * 2020-03-13 2021-09-16 深圳市大疆创新科技有限公司 Video live streaming method and system, and computer storage medium
CN112711741A (en) * 2021-01-05 2021-04-27 天津证好在数据科技有限公司 Method for solidifying infringement evidence of live broadcasting
CN113132746B (en) * 2021-04-16 2023-03-10 北京北笛科技有限公司 Automatic evidence obtaining method and device for audio and video data in network live broadcast service

Also Published As

Publication number Publication date
CN115002511A (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN110458591A (en) Advertising information detection method, device and computer equipment
CN109669798B (en) Crash analysis method, crash analysis device, electronic equipment and storage medium
CN104486649A (en) Video content rating method and device
CN113377637A (en) Performance capacity diagnostic method and device
CN112511818B (en) Video playing quality detection method and device
CN112399252B (en) Soft and hard decoding control method and device and electronic equipment
CN114419502A (en) Data analysis method and device and storage medium
CN115002511B (en) Live video evidence obtaining method and device, electronic equipment and readable storage medium
CN113824987A (en) Method, medium, device and computing equipment for determining time consumption of first frame of live broadcast room
CN111897737B (en) Missing detection method and device for program test of micro-service system
CN117412070A (en) Merchant live time confidence policy operating system
CN111428806A (en) Image tag determination method and device, electronic equipment and storage medium
CN113923443A (en) Network video recorder testing method and device and computer readable storage medium
US11398091B1 (en) Repairing missing frames in recorded video with machine learning
CN115550638A (en) Camera state detection system and method
CN112560809A (en) Method and device for displaying recognition effect in real time
CN111930608A (en) Automatic testing device and method based on process control
CN114579908A (en) Content distribution method and device, electronic equipment and storage medium
CN114513686A (en) Method and device for determining video information and storage medium
Nawała et al. Software package for measurement of quality indicators working in no-reference model
CN112729319A (en) Automatic data acquisition and analysis system and method
CN112418215A (en) Video classification identification method and device, storage medium and equipment
CN117668298B (en) Artificial intelligence method and system for application data analysis
CN110740347B (en) Video content detection system, method, device, server and storage medium
CN111866428B (en) Historical video data processing method and device

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant