CN111246244A - Method and device for rapidly analyzing and processing audio and video in cluster and electronic equipment - Google Patents


Info

Publication number
CN111246244A
CN111246244A
Authority
CN
China
Prior art keywords
audio
video
unit
processing
units
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010079527.3A
Other languages
Chinese (zh)
Other versions
CN111246244B (en)
Inventor
王家万 (Wang Jiawan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Beisike Technology Co ltd
Original Assignee
Beijing Beisike Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Beisike Technology Co., Ltd.
Priority to CN202010079527.3A
Publication of CN111246244A
Application granted
Publication of CN111246244B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/233 Processing of audio elementary streams
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/23406 Processing of video elementary streams involving management of server-side video buffer
    • H04N 21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/439 Processing of audio elementary streams
    • H04N 21/4392 Processing of audio elementary streams involving audio buffer management
    • H04N 21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44004 Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

An embodiment of the invention provides a method, an apparatus and an electronic device for rapidly analyzing and processing audio and video in a cluster, wherein the method comprises the following steps: acquiring an audio/video clip; dividing the acquired clip into a plurality of audio/video units; performing first processing on the plurality of units, the first processing comprising calculating the degree of difference between each unit and a reference unit and marking the unit if the difference is greater than a preset threshold; and performing second processing on the marked units. In the embodiment of the invention, the audio/video units are preprocessed: each unit is compared with the reference unit so that valid units with a large difference are screened out, and only those units are further processed. This avoids the resource waste of processing invalid units and improves the processing efficiency of the audio/video units.

Description

Method and device for rapidly analyzing and processing audio and video in cluster and electronic equipment
Technical Field
The application relates to a method and a device for rapidly analyzing and processing audio and video in a cluster and electronic equipment, and belongs to the technical field of computers.
Background
In the field of audio/video processing, audio and video captured in real time contain a certain number of invalid audio/video units. When such audio/video is processed, valid and invalid units are treated alike, which wastes processing resources and reduces processing efficiency.
Disclosure of Invention
The embodiment of the invention provides a method, an apparatus and an electronic device for rapidly analyzing and processing audio and video in a cluster, which screen audio/video units so as to improve processing efficiency.
In order to achieve the above object, an embodiment of the present invention provides a method for rapidly analyzing and processing audio and video in a cluster, including:
acquiring an audio and video clip;
dividing the acquired audio and video clips into a plurality of audio and video units;
performing first processing on the plurality of audio and video units, wherein the first processing comprises calculating the difference degree between each audio and video unit in the plurality of audio and video units and a reference unit, and the reference unit is an audio and video unit generated based on the audio and video unit before the currently-processed audio and video unit;
marking the audio and video units with the difference degree larger than a preset threshold value;
and carrying out second processing on the marked audio and video unit.
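The claimed steps can be sketched end to end as follows. This is a hypothetical illustration only: representing units as NumPy frame arrays, the function names, the threshold value, and the use of a mean absolute pixel difference are all assumptions (the 10%/90% mixing ratio for the reference unit is taken from the detailed description).

```python
import numpy as np

DIFF_THRESHOLD = 0.2  # hypothetical preset threshold (normalised difference)
ALPHA = 0.10          # mixing ratio: 10% previous unit, 90% previous reference

def split_into_units(clip, unit_len):
    """Divide a clip (an array of frames) into fixed-length audio/video units."""
    return [clip[i:i + unit_len] for i in range(0, len(clip), unit_len)]

def difference(frame, reference):
    """Mean absolute pixel difference, normalised to [0, 1] for 8-bit frames."""
    return float(np.mean(np.abs(frame.astype(float) - reference.astype(float))) / 255.0)

def first_processing(units, initial_reference):
    """Screen units: mark those whose difference from the running reference
    exceeds the threshold; units at or below the threshold are dropped."""
    reference = initial_reference.astype(float)
    marked = []
    for unit in units:
        frame = unit.mean(axis=0)  # summarise the unit by its mean frame
        if difference(frame, reference) > DIFF_THRESHOLD:
            marked.append(unit)    # valid unit: kept for the second processing
        # reference for the next unit: 10% of this unit + 90% of current reference
        reference = ALPHA * frame + (1 - ALPHA) * reference
    return marked
```

The second processing (e.g. face recognition) would then run only on the returned `marked` list.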
The embodiment of the invention also provides a device for rapidly analyzing and processing the audio and video in the cluster, which comprises the following components:
the audio and video clip acquisition module is used for acquiring audio and video clips;
the audio and video unit dividing module is used for dividing the acquired audio and video fragments into a plurality of audio and video units;
a first processing module, configured to perform first processing on the plurality of audio/video units, wherein the first processing comprises calculating the degree of difference between each of the plurality of audio/video units and a reference unit, the reference unit being an audio/video unit generated from the unit preceding the unit currently undergoing the first processing;
the marking module is used for marking the audio and video units with the difference degree larger than a preset threshold value;
and the second processing module is used for carrying out second processing on the marked audio and video unit.
An embodiment of the present invention further provides an electronic device, including:
a memory for storing a program;
and a processor, configured to run the program stored in the memory so as to execute the above method for rapidly analyzing and processing audio and video in a cluster.
According to the embodiment of the invention, the audio/video units are preprocessed: each unit is compared with a reference unit so that valid units showing a large difference are screened out, and only those units are further processed. This avoids the resource waste of processing invalid units and improves the processing efficiency of the audio/video units.
The foregoing is merely an overview of the technical solutions of the present invention. To make the technical means of the invention clearer, and to make the above and other objects, features and advantages more readily understandable, embodiments of the invention are described below.
Drawings
Fig. 1 is a schematic view of an application scenario of a method for rapidly analyzing and processing audio and video in a cluster according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method for rapidly analyzing and processing audio and video in a cluster according to an embodiment of the present invention;
fig. 3 is a second schematic flowchart of a method for rapidly analyzing and processing audio/video in a cluster according to an embodiment of the present invention;
fig. 4 is a third schematic flowchart of a method for rapidly analyzing and processing audio and video in a cluster according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an apparatus for rapidly analyzing and processing audio and video in a cluster according to an embodiment of the present invention;
fig. 6 is a second schematic structural diagram of an apparatus for fast analyzing and processing audio/video in a cluster according to an embodiment of the present invention;
fig. 7 is a third schematic structural diagram of an apparatus for fast analyzing and processing audio and video in a cluster according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the field of audio/video processing, audio and video captured in real time contain a certain number of invalid audio/video units. When such audio/video is processed, valid and invalid units are treated alike, which wastes processing resources and reduces processing efficiency.
According to the embodiment of the invention, the effective audio and video units are screened out by comparing the audio and video units with the reference unit, and then the effective audio and video units are further processed.
In the present application, an audio/video clip and an audio/video unit may contain audio, video, or a combination of the two.
In a typical scenario, an audio/video capture device captures an audio/video clip, which is then divided into a plurality of consecutive audio/video units. Some of these units contain valid information and can be treated as valid units, while others contain no valid information and can be treated as invalid units. For example, in video monitoring the gate of a residential community, foot traffic is heavy during working hours, so the video contains many units with large image changes reflecting residents entering and leaving; these can serve as valid units. At night, few residents pass through, the video shows a static gate with no one passing, and such units with essentially unchanged images can be treated as invalid units.
Therefore, for the plurality of audio/video units obtained by dividing the clip, first processing can be performed: each unit is compared with a reference unit and the degree of difference between them is calculated, so that valid units whose difference exceeds a preset threshold are screened out. It should be noted that the difference between a unit and the reference unit is computed by extracting image features from both and then calculating the difference between those features.
Specifically, in the embodiment of the invention, the reference unit serves as the comparison baseline during the screening (first processing) of the audio/video units. For example, fig. 1 shows an application scenario of the method for rapidly analyzing and processing audio and video in a cluster according to the embodiment of the invention. When the current unit (for convenience of description, the unit currently undergoing first processing may be called the current unit) is unit 1 of the clip, i.e. the first unit, the reference unit may be a preset audio/video unit or a unit generated from a preset unit.
When the current unit comes after unit 1, the reference unit may be an audio/video unit generated by proportionally combining the unit preceding the current unit with that preceding unit's reference unit. For example, as shown in fig. 1, the reference unit of unit 2 is generated from 10% of unit 1 and 90% of the reference unit of unit 1; the reference unit of unit 3 is generated from 10% of unit 2 and 90% of the reference unit of unit 2; and so on, the reference unit of unit n is generated from 10% of unit n-1 and 90% of the reference unit of unit n-1.
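The 10%/90% proportional synthesis described above amounts to an exponential moving average of the units. A minimal sketch, assuming units are represented as numeric arrays (the representation and the function name are illustrative, not from the patent):

```python
import numpy as np

def next_reference(prev_unit, prev_reference, alpha=0.10):
    """Reference unit for unit n, generated from 10% of unit n-1
    and 90% of the reference unit of unit n-1."""
    return alpha * prev_unit.astype(float) + (1 - alpha) * prev_reference.astype(float)
```

With this update, older units decay geometrically, so a slowly changing background dominates the reference while brief changes fade out.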
In addition, in another embodiment of the present invention, the reference unit of the current unit may likewise be an audio/video unit generated by proportionally combining the unit preceding the current unit with that unit's reference unit.
Further, if the current unit is compared with the reference unit and the calculated difference is greater than the preset threshold, the unit is regarded as a valid audio/video unit and is marked. Marking may take various forms used to single out valid units: a special identifier may be added to the unit, the unit may be stored separately, and so on. Conversely, if the calculated difference is less than or equal to the preset threshold, the unit can be regarded as invalid and may be deleted.
The above describes the first processing, in which valid units are screened out by comparing each unit with the reference unit and calculating the difference. The screened valid units can then undergo further second processing, such as analysis like face recognition.
In addition, on the basis of the above scheme, to prevent a backlog of units awaiting the first processing (caused by a large number of units or by low first-processing throughput), the units may be sent to a temporary storage area after the clip is divided, and a corresponding number of units may then be fetched from the temporary storage area for first processing according to the processing state of the first processing.
Similarly, after valid units are screened out by the first processing, they too can be sent to a temporary storage area to avoid a backlog of units awaiting the second processing, and a corresponding number of valid units are then fetched from that area for second processing according to the processing state of the second processing.
According to the embodiment of the invention, the audio/video units are preprocessed: each unit is compared with a reference unit so that valid units showing a large difference are screened out, and only those units are further processed. This avoids the resource waste of processing invalid units and improves the processing efficiency of the audio/video units.
The technical solution of the present invention is further illustrated by some specific examples.
Example one
As shown in fig. 2, which is a schematic flow chart of a method for rapidly analyzing and processing audio and video in a cluster according to an embodiment of the present invention, the method includes the following steps:
s201: and acquiring audio and video clips.
Specifically, the audio/video clip may be acquired by a video capture device such as a camera.
S202: and dividing the acquired audio and video clips into a plurality of audio and video units.
S203: and carrying out first processing on the plurality of audio and video units.
The first processing may include calculating the degree of difference between each of the plurality of audio/video units and a reference unit. In an embodiment of the present invention, the reference unit may be an audio/video unit generated from units preceding the unit currently undergoing the first processing.
For example, the image characteristics of each audio/video unit and the reference unit in the plurality of audio/video units can be obtained, and the difference degree can be calculated according to the image characteristics of each audio/video unit and the reference unit in the plurality of audio/video units. The image features of the audio/video unit may include color features, shape features, spatial relationship features, texture features, and the like.
For example, take video capture in a scenario where a security system monitors a room: the capture device records an audio/video clip of a person stealing in the room, and in preprocessing the clip is divided into a plurality of audio/video units.
Then the reference unit of each audio/video unit is obtained. In the embodiment of the present invention, the reference unit serves as the comparison baseline during the first processing. When the current unit (for convenience of description, the unit currently undergoing first processing may be called the current unit) is unit 1 of the clip, i.e. the first unit, the reference unit may be a preset audio/video unit or a unit generated from a preset unit. For example, in the above scenario, the reference unit of unit 1 may be a previously captured picture of the empty room, or an image generated from such a picture by a predetermined algorithm. When the current unit comes after unit 1, the reference unit may be generated by proportionally combining the unit preceding the current unit with that preceding unit's reference unit. Of course, in the embodiment of the present application, the difference between each unit and the reference unit may also be calculated from other information about the unit.
For example, fig. 1 (in which the construction of the reference units is shown schematically) illustrates the application scenario of the method for rapidly analyzing and processing audio and video in a cluster according to an embodiment of the present invention. Taking unit 2 as an example, its reference unit may be generated proportionally from unit 1 and the reference unit of unit 1: in the above scenario, unit 1 may be a picture in which the room door has been pushed open by a small gap, and the reference unit of unit 1 may be a picture in which the door is closed and the room is empty. The reference unit of unit 2 may thus be generated from, for example, 10% of unit 1 and 90% of the reference unit of unit 1. Likewise, the reference unit of unit 3 can be generated from 10% of unit 2 and 90% of the reference unit of unit 2. In general, after unit 1, the reference unit of each audio/video unit may be generated proportionally from the previous unit and the previous unit's reference unit; for example, the reference unit of unit n is generated from 10% of unit n-1 and 90% of the reference unit of unit n-1. In addition, in another embodiment of the present invention, the reference unit of the current unit may likewise be generated by proportionally combining the unit preceding the current unit with that unit's reference unit.
Further, the image features of the audio/video units and their reference units are extracted. For example, unit 1 is an image in which a thief has not yet entered the room and has pushed the door open by only a small gap, while the reference unit of unit 1 is a picture in which the door is closed and the room is empty. The extracted image features of unit 1 and its reference unit may then include color features, shape features, spatial-relationship features and so on, such as the color of the sofa, the shape of the table, and the positional relationships between objects. The difference between the image features of unit 1 and its reference unit can therefore be calculated as the difference between the door being closed and being pushed open by a small gap: the change in the relative positional relationship between the door, the walls and other objects in the room, or the color difference between the background visible through the gap and the door itself. For example, the door is brown, while the background of the gap is the color of the wall, i.e. white. The degree of difference between unit 1 and its reference unit can thus be calculated from these image features.
S204: and marking the audio and video units with the difference degree larger than a preset threshold value.
If the calculated difference between the image features of an audio/video unit and the reference unit is greater than the preset threshold, the unit can be regarded as one whose image features have changed significantly (in the embodiment of the present invention, a valid audio/video unit) and is marked. For example, in the embodiment of the present application, the preset threshold may be set to 20%. In the above scenario of a thief entering a room, the brown area in unit 1 is computed and compared with the brown area in the reference unit; when the brown area of the unit has shrunk by more than 20% relative to that of the reference unit, unit 1 can be judged to be worth the user's attention, i.e. a valid unit, and can therefore be marked. Marking may take various forms used to single out valid units, for example adding a special identifier to the unit or storing it separately.
In addition, in the embodiment of the present invention, if the calculated difference between the image features of the unit and the reference unit is less than or equal to the preset threshold, the unit can be regarded as invalid and may be deleted. In the same scenario, when the brown area in unit 1 has shrunk by no more than 20% relative to the reference unit (possibly due to an incidental event, such as a pet running past or briefly blocking the view), unit 1 can be judged to be a unit that does not require the user's particular attention, i.e. an invalid audio/video unit, and in the embodiment of the application it may be deleted.
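The door-colour example above can be sketched as follows. The RGB range used for "brown" and the helper names are illustrative assumptions; only the 20% threshold and the shrinking-brown-area criterion come from the text.

```python
import numpy as np

THRESHOLD = 0.20  # the 20% preset threshold from the example

def brown_area(image, lo=(90, 40, 0), hi=(180, 120, 80)):
    """Fraction of pixels falling inside a rough 'brown' RGB range
    (the range itself is an assumption chosen for illustration)."""
    mask = np.all((image >= lo) & (image <= hi), axis=-1)
    return mask.mean()

def is_valid_unit(unit_frame, reference_frame):
    """Valid if the brown area shrank by more than THRESHOLD
    relative to the reference frame."""
    ref_area = brown_area(reference_frame)
    if ref_area == 0:
        return False  # no brown in the reference: criterion does not apply
    reduction = (ref_area - brown_area(unit_frame)) / ref_area
    return bool(reduction > THRESHOLD)
```

A production system would combine several such feature differences (color, shape, spatial relationships) rather than a single color mask.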
S205: and carrying out second processing on the marked audio and video unit.
Specifically, the second processing may comprise further analysis of the marked audio/video units. For example, in the room-monitoring scenario above, after the first processing screens out the units in which a thief appears in the room, those units may undergo further processing such as face recognition.
According to the embodiment of the invention, the audio/video units are preprocessed: each unit is compared with a reference unit so that valid units showing a large difference are screened out, and only those units are further processed. This avoids the resource waste of processing invalid units and improves the processing efficiency of the audio/video units.
Example two
As shown in fig. 3, which is a second schematic flowchart of a method for rapidly analyzing and processing audio and video in a cluster according to an embodiment of the present invention, the method may include the following steps:
On the basis of the first embodiment, step S303 may be added after step S202.
S301: and acquiring audio and video clips.
S302: and dividing the acquired audio and video clips into a plurality of audio and video units.
S303: a plurality of audio-video units are sent to a first storage area,
and acquiring at least one part of the plurality of audio/video units stored in the first storage area according to the processing state of the first processing.
Specifically, on the basis of the above scheme, in order to avoid a backlog of unprocessed units when many audio/video units await the first processing or when the first processing is slow, the audio/video units may, after division, first be sent to a temporary storage area (i.e., the first storage area) for temporary storage. A corresponding number of units is then fetched from the temporary storage area for the first processing, according to the processing state of the first processing.
It should be noted that the processing state of the first processing may be evaluated from the utilization of the first processing's resources, its processing efficiency, the queuing condition of units awaiting the first processing, and so on, either individually or in combination. The number of units fetched is therefore determined by this state. For example, when 50% of the first processing's resources remain free, the number of units that 50% of the resources can handle may be fetched; the processing efficiency may further be taken into account by adding, on top of that number, the units that can be handled by resources about to be released as existing tasks complete.
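The batch-size calculation described above can be sketched as a small function. The function name, the "full capacity" figure, and the way imminent releases are counted are illustrative assumptions; the paragraph only fixes the idea of sizing the fetch to free resources plus resources about to be released.

```python
# Illustrative batch-size calculation for pulling units from the temporary
# storage area based on the first processing's state.
def units_to_fetch(free_resource_fraction, units_per_full_capacity,
                   expected_releases=0):
    """Units to request: what the currently free resources can absorb,
    plus units whose resources will be freed as in-flight tasks finish."""
    base = int(free_resource_fraction * units_per_full_capacity)
    return base + expected_releases

# 50% of processing resources free, full capacity is 40 units, and
# completing tasks are expected to free room for 5 more.
print(units_to_fetch(0.5, 40, expected_releases=5))   # -> 25
```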
In addition, the temporary storage area may be a different physical node in a cluster formed by multiple nodes. For example, suppose the cluster has 10 nodes numbered 1 to 10, i.e., nodes 1 to 10. Node 1 is a preprocessing node (in the embodiment of the present invention, the preprocessing node may also be a node outside the cluster), configured to receive audio/video segments and divide them into audio/video units. Node 2 is a node that processes tasks, and one or more of nodes 3 to 10 may serve as the temporary storage area for node 2. For example, after node 1 preprocesses a segment into 80 audio/video units, the 80 units may be sent to the temporary storage area, i.e., nodes 3 to 10. When no node has yet received any units, that is, on the first allocation, the units may be distributed evenly across nodes 3 to 10, 10 units per node. In subsequent processing, units may be allocated according to the state each node broadcasts; for example, if node 4 reports that it can store 8 more units, 8 units may be sent to node 4.
After nodes 3 to 10 receive the audio/video units, they may store them temporarily and forward them to node 2 according to node 2's processing state. For example, after node 2 finishes the first processing of 10 units, it may obtain 10 more from one or more of nodes 3 to 10: node 2 may receive and read the task lists of nodes 3 to 10 and select which nodes to pull from and how many units to take from each, e.g., 10 units stored on node 3, or 8 units from node 3 plus 2 units from node 4. Node 2 then sends a task request to each selected node. The task request may include node 2's network identifier (for example, the number 2, or another identifier denoting node 2 compiled according to a predetermined rule) and the number of units requested, so that on receiving the task request the selected node sends the corresponding number of audio/video units to node 2.
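The pull protocol just described can be sketched as follows. All names here are illustrative assumptions; the patent fixes only the flow: read the storage nodes' task lists, pick enough units to fill a quota, and send each chosen node a request carrying the requester's identifier and a unit count.

```python
# Minimal sketch of the processing node's pull protocol: plan which
# storage nodes to draw from, then build the per-node task requests.
def plan_requests(task_lists, quota):
    """task_lists: {node_id: units_available}. Returns [(node_id, take)]
    pairs covering up to `quota` units, drawing from nodes in order."""
    plan, remaining = [], quota
    for node, available in sorted(task_lists.items()):
        if remaining <= 0:
            break
        take = min(available, remaining)
        if take:
            plan.append((node, take))
            remaining -= take
    return plan

def make_request(requester_id, count):
    # Task request: the requester's network identifier plus the number of
    # units requested, so the storage node knows where to send how many.
    return {"requester": requester_id, "units": count}

# Node 2 needs 10 units; node 3 holds 8 and node 4 holds 6.
plan = plan_requests({3: 8, 4: 6}, quota=10)
print(plan)                                  # -> [(3, 8), (4, 2)]
print([make_request(2, n) for _, n in plan])
```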
In addition, for a single node, the temporary storage area may also be a buffer area on that same node. During some time periods a node may have many audio/video units to process; the receiving node may first send units to its buffer area and process them later, during periods when fewer units are pending. For example, consider a scenario monitoring the entrance gate of a residential community. In the daytime many people enter and leave, so the captured audio/video unit images of the gate contain different people; the variation between captured units is large, and many of the captured units are valid. At night few people pass, most captured unit images show the gate with no one passing, the difference between images is small, and most units captured at night are invalid. In this case more valid units need processing in the daytime and fewer at night, so valid units left unprocessed during the day can be sent to the processing node's buffer area and processed at night, when fewer valid units need processing.
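The single-node day/night buffering above can be sketched with a simple FIFO buffer. The class and its interface are assumptions for illustration; the text fixes only the behavior of parking units during busy periods and draining them when load drops.

```python
from collections import deque

# Illustrative single-node buffer: units arriving while the node is busy
# are parked in its own buffer area and drained later when load is low.
class NodeBuffer:
    def __init__(self):
        self.pending = deque()

    def receive(self, unit, busy):
        if busy:
            self.pending.append(unit)   # daytime: park the unit for later
            return None
        return unit                     # idle: hand the unit straight on

    def drain(self, capacity):
        # Night-time: pull up to `capacity` buffered units for processing.
        return [self.pending.popleft()
                for _ in range(min(capacity, len(self.pending)))]

buf = NodeBuffer()
for u in ["u1", "u2", "u3"]:
    buf.receive(u, busy=True)
print(buf.drain(2))   # -> ['u1', 'u2']
```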
S304: and carrying out first processing on the plurality of audio and video units.
The first processing may include calculating a difference between each of the plurality of audio/video units and a reference unit, and in an embodiment of the present invention, the reference unit may be an audio/video unit generated based on an audio/video unit before the audio/video unit currently performing the first processing.
S305: and marking the audio and video units with the difference degree larger than a preset threshold value.
And if the difference degree is greater than a preset threshold value, marking the audio and video unit.
In addition, in the embodiment of the present invention, if the calculated difference between the image features of the audio/video unit and the reference unit is less than or equal to the preset threshold, the audio/video unit may be regarded as an invalid audio/video unit, and the audio/video unit may be deleted.
S306: and carrying out second processing on the marked audio and video unit.
Specifically, steps S301, S302, and S304 to S306 are the same as steps S201 to S205 in the first embodiment, and are not repeated here.
According to the embodiment of the invention, the audio and video units are preprocessed, namely, the audio and video units are compared with the reference unit to screen out the effective audio and video units with larger difference, and then the effective audio and video units are further processed, so that the resource waste caused by processing the ineffective audio and video units is avoided, and the processing efficiency of the audio and video units is improved.
EXAMPLE III
As shown in fig. 4, it is a third schematic flow chart of a method for rapidly analyzing and processing audio and video in a cluster according to an embodiment of the present invention, and the method may include the following steps:
On the basis of the first embodiment, step S405 may be added after step S204.
S401: and acquiring audio and video clips.
S402: and dividing the acquired audio and video clips into a plurality of audio and video units.
S403: and carrying out first processing on the plurality of audio and video units.
The first processing may include calculating a difference between each of the plurality of audio/video units and a reference unit, and in an embodiment of the present invention, the reference unit may be an audio/video unit generated based on an audio/video unit before the audio/video unit currently performing the first processing.
S404: and marking the audio and video units with the difference degree larger than a preset threshold value.
And if the difference degree is greater than a preset threshold value, marking the audio and video unit.
In addition, in the embodiment of the present invention, if the calculated difference between the image features of the audio/video unit and the reference unit is less than or equal to the preset threshold, the audio/video unit may be regarded as an invalid audio/video unit, and the audio/video unit may be deleted.
S405: sending the marked audio-video unit to a second storage area,
and acquiring at least one part of marked audio and video units stored in the second storage area according to the processing state of the second processing.
Specifically, the concept of the second storage area is the same as that of the first storage area in the second embodiment, and the operation of sending the marked audio/video units to the second storage area is the same as that of sending the plurality of audio/video units to the first storage area in the second embodiment, which is not described herein again.
S406: and carrying out second processing on the marked audio and video unit.
Specifically, steps S401 to S404 and S406 are the same as steps S201 to S205 in the first embodiment, and are not repeated here.
According to the embodiment of the invention, the audio and video units are preprocessed, namely, the audio and video units are compared with the reference unit to screen out the effective audio and video units with larger difference, and then the effective audio and video units are further processed, so that the resource waste caused by processing the ineffective audio and video units is avoided, and the processing efficiency of the audio and video units is improved.
Example four
As shown in fig. 5, which is a schematic structural diagram of an apparatus for rapidly analyzing and processing audio and video in a cluster according to an embodiment of the present invention, the apparatus includes:
and an audio/video clip obtaining module 501, configured to obtain audio/video clips.
An audio/video unit dividing module 502, configured to divide the acquired audio/video clips into a plurality of audio/video units.
A first processing module 503, configured to perform a first processing on the multiple audio/video units,
the first processing comprises calculating the difference degree between each audio/video unit in the plurality of audio/video units and a reference unit, wherein the reference unit is an audio/video unit generated based on the audio/video unit before the currently first-processed audio/video unit.
Specifically, the image characteristics of each of the plurality of audio/video units and the reference unit may be obtained, and the difference degree may be calculated according to the image characteristics of each of the plurality of audio/video units and the reference unit. The image features of the audio/video unit may include color features, shape features, spatial relationship features, texture features, and the like.
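One concrete reading of "calculate the difference degree according to the image features" is to compare normalized color histograms of the two units. The patent does not fix a metric, so the histogram representation and the L1-style distance below are assumptions chosen for illustration.

```python
from collections import Counter

# Illustrative difference degree between two frames, using color features:
# build a normalized color histogram per frame and take half the L1
# distance, which is 0.0 for identical histograms and 1.0 for disjoint ones.
def color_histogram(frame):
    pixels = [p for row in frame for p in row]
    counts = Counter(pixels)
    return {c: n / len(pixels) for c, n in counts.items()}

def difference_degree(unit_frame, ref_frame):
    h1, h2 = color_histogram(unit_frame), color_histogram(ref_frame)
    colors = set(h1) | set(h2)
    return 0.5 * sum(abs(h1.get(c, 0) - h2.get(c, 0)) for c in colors)

ref  = [["brown", "brown"], ["white", "white"]]
unit = [["white", "white"], ["white", "white"]]
print(difference_degree(unit, ref))   # -> 0.5
```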
In addition, in the embodiment of the present invention, the reference unit serves as the comparison baseline in the screening (first processing) of audio/video units. When the current unit (for convenience of description, the audio/video unit currently undergoing the first processing may be referred to as the current unit) is the first unit of the audio/video segment, i.e., the first audio/video unit, the reference unit may be a preset audio/video unit or an audio/video unit generated from a preset audio/video unit. When the current unit follows the first unit, the reference unit may be generated by proportionally synthesizing the audio/video unit preceding the current unit with that preceding unit's own reference unit.
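The "proportional synthesis" of the preceding unit with its reference unit amounts to a running blend, sketched below on simple per-pixel gray levels. The blend weight of 0.25 and the flat-list frame representation are assumed examples; the patent fixes only that each new reference is a fixed-ratio combination of the previous unit and the previous reference.

```python
# Illustrative reference-unit update: blend the previous unit into the
# previous reference at a fixed ratio, so the reference tracks gradual
# scene changes while damping one-off fluctuations.
def update_reference(prev_unit, prev_reference, weight=0.25):
    """Per-pixel blend: weight * previous unit + (1 - weight) * previous
    reference. Frames are flat lists of gray levels for simplicity."""
    return [weight * u + (1 - weight) * r
            for u, r in zip(prev_unit, prev_reference)]

reference = [100.0, 100.0]        # preset reference used for the first unit
for unit in ([100.0, 100.0], [100.0, 180.0]):
    reference = update_reference(unit, reference)
print(reference)   # -> [100.0, 120.0]
```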
The marking module 504 is configured to mark the audio/video unit with the difference degree greater than the preset threshold.
If the calculated difference between the image characteristics of the audio/video unit and the reference unit is greater than the preset threshold, the audio/video unit can be regarded as an audio/video unit with a large image characteristic change (in the embodiment of the present invention, the audio/video unit can be referred to as an effective audio/video unit), and is marked. The marking of the audio/video unit may include various operations for screening out the valid audio/video unit, for example, a special identifier may be added to the valid audio/video unit, and the valid audio/video unit may be separately stored.
In addition, in the embodiment of the present invention, if the calculated difference degree between the image features of the audio/video unit and the reference unit is less than or equal to the preset threshold, the embodiment of the present invention may further include a deletion module, where the deletion module is configured to delete the audio/video unit whose difference degree is less than or equal to the preset threshold.
And the second processing module 505 is configured to perform second processing on the marked audio/video unit.
Specifically, for a specific process of implementing the function of each module in the apparatus for rapidly analyzing and processing audio and video in a cluster according to the embodiment of the present invention, reference may be made to the related description in the method embodiment shown in the first embodiment, and details are not described here again.
According to the embodiment of the invention, the audio and video units are preprocessed, namely, the audio and video units are compared with the reference unit to screen out the effective audio and video units with larger difference, and then the effective audio and video units are further processed, so that the resource waste caused by processing the ineffective audio and video units is avoided, and the processing efficiency of the audio and video units is improved.
EXAMPLE five
As shown in fig. 6, which is a second schematic structural diagram of the apparatus for rapidly analyzing and processing audio and video in a cluster according to the second embodiment of the present invention, the apparatus for rapidly analyzing and processing audio and video in a cluster according to the second embodiment of the present invention may further include a first buffer module 506 on the basis of the fourth embodiment.
The first cache module 506 is configured to send the multiple audio/video units to the first storage area, and obtain at least a part of the multiple audio/video units stored in the first storage area according to the processing state of the first processing.
Specifically, the processing performed by the module may be performed after the processing performed by the audio/video unit dividing module in the above embodiment.
It should be noted that the processing state of the first processing here may be evaluated from the utilization of the processing resources used by the first processing module, the efficiency of the first processing, the queuing condition of units awaiting the first processing, and so on, either individually or in combination. The number of audio/video units acquired is therefore determined by the processing state of the first processing performed by the first processing module. For example, when 50% of the first processing's resources remain free, the number of units that 50% of the resources can handle may be acquired; the processing efficiency may further be taken into account by adding, on top of that number, the units that can be handled by resources released as existing tasks complete.
Specifically, for a specific process of implementing the functions of each module in the apparatus for rapidly analyzing and processing audio and video in a cluster according to the embodiment of the present invention, reference may be made to the related description in the method embodiment shown in the second embodiment, and details are not described here again.
According to the embodiment of the invention, the audio and video units are preprocessed, namely, the audio and video units are compared with the reference unit to screen out the effective audio and video units with larger difference, and then the effective audio and video units are further processed, so that the resource waste caused by processing the ineffective audio and video units is avoided, and the processing efficiency of the audio and video units is improved.
EXAMPLE six
As shown in fig. 7, which is a third schematic structural diagram of the apparatus for rapidly analyzing and processing audio and video in a cluster according to the embodiment of the present invention, on the basis of the fourth embodiment, the apparatus for rapidly analyzing and processing audio and video in a cluster according to the embodiment of the present invention may further include a second buffer module 507.
And the second cache module 507 is configured to send the marked audio/video unit to the second storage area, and obtain at least a part of the marked audio/video unit stored in the second storage area according to the processing state of the second processing.
Specifically, the processing performed by this module may be performed after the processing performed by the first processing module in the above-described embodiment.
It should be noted that the processing state of the second processing here may be evaluated from the utilization of the processing resources used by the second processing module, the efficiency of the second processing, the queuing condition of units awaiting the second processing, and so on, either individually or in combination. The number of marked audio/video units acquired is thus determined according to the processing state of the second processing performed by the second processing module.
Specifically, for a specific process of implementing the function of each module in the apparatus for rapidly analyzing and processing audio and video in a cluster according to the embodiment of the present invention, reference may be made to the related description in the method embodiment shown in the third embodiment, and details are not described here again.
According to the embodiment of the invention, the audio and video units are preprocessed, namely, the audio and video units are compared with the reference unit to screen out the effective audio and video units with larger difference, and then the effective audio and video units are further processed, so that the resource waste caused by processing the ineffective audio and video units is avoided, and the processing efficiency of the audio and video units is improved.
EXAMPLE seven
The foregoing embodiments describe the method flows and device structures according to embodiments of the present invention. The functions of the method and device can be implemented by an electronic device, as shown in fig. 8, which is a schematic structural diagram of the electronic device according to an embodiment of the present invention, specifically including: a memory 810 and a processor 820.
A memory 810 for storing a program.
In addition to the programs described above, the memory 810 may also be configured to store other various data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 810 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
And a processor 820, coupled to the memory 810, for executing the program in the memory 810 to perform the operation steps of the method for fast analyzing and processing audio and video in a cluster described in the foregoing embodiments.
Furthermore, the processor 820 may also include various modules described in the foregoing embodiments to perform intra-cluster fast analysis processing of audio and video, and the memory 810 may be used, for example, to store data required by the modules to perform operations and/or data output.
Further, as shown, the electronic device may further include: communication components 830, power components 840, audio components 850, a display 860, and the like. Only some of the components are schematically shown in the figure and it is not meant that the electronic device comprises only the components shown in the figure.
The communication component 830 is configured to facilitate wired or wireless communication between the electronic device and other devices. The electronic device may access a wireless network based on a communication standard, such as WiFi, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component 830 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 830 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
A power supply assembly 840 to provide power to the various components of the electronic device. The power components 840 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for an electronic device.
Audio component 850 is configured to output and/or input audio signals. For example, the audio component 850 includes a Microphone (MIC) configured to receive external audio signals when the electronic device is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 810 or transmitted via the communication component 830. In some embodiments, audio component 850 also includes a speaker for outputting audio signals.
The display 860 includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware under the control of program instructions. The program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (13)

1. A method for rapidly analyzing and processing audio and video in a cluster comprises the following steps:
acquiring an audio and video clip;
dividing the acquired audio and video clips into a plurality of audio and video units;
performing first processing on the plurality of audio and video units, wherein the first processing comprises calculating the difference degree between each audio and video unit in the plurality of audio and video units and a reference unit, and the reference unit is an audio and video unit generated based on the audio and video unit before the currently-processed audio and video unit;
marking the audio and video units with the difference degree larger than a preset threshold value;
and carrying out second processing on the marked audio and video unit.
2. The method of claim 1, wherein after dividing the acquired audio-video clips into a plurality of audio-video elements, further comprising:
sending the plurality of audio and video units to a first storage area,
and acquiring at least one part of the plurality of audio/video units stored in the first storage area according to the processing state of the first processing.
3. The method of claim 1, wherein said calculating a degree of difference of each of said plurality of audio-visual elements from a reference element comprises:
acquiring image characteristics of each audio/video unit in the plurality of audio/video units and the reference unit; and
and calculating the difference degree according to the image characteristics of each audio and video unit in the plurality of audio and video units and the reference unit.
4. The method of claim 1, wherein after the marking of the audiovisual units with the difference degree greater than the preset threshold, the method further comprises:
sending the marked audio and video unit to a second storage area,
and acquiring at least one part of marked audio and video units stored in the second storage area according to the processing state of the second processing.
5. The method of claim 1, further comprising:
and deleting the audio and video units with the difference degree smaller than or equal to a preset threshold value.
6. The method of claim 1, wherein,
and when the current audio/video unit for the first processing is the first audio/video unit in the audio/video clip, the reference unit is an audio/video unit generated based on a preset audio/video unit.
7. An apparatus for fast analyzing and processing audio and video in a cluster, comprising:
the audio and video clip acquisition module is used for acquiring audio and video clips;
the audio and video unit dividing module is used for dividing the acquired audio and video fragments into a plurality of audio and video units;
a first processing module for performing a first processing on the plurality of audio/video units,
the first processing comprises calculating the difference degree between each audio/video unit in the plurality of audio/video units and a reference unit, wherein the reference unit is an audio/video unit generated on the basis of the audio/video unit before the currently first-processed audio/video unit;
the marking module is used for marking the audio and video units with the difference degree larger than a preset threshold value;
and the second processing module is used for carrying out second processing on the marked audio and video unit.
8. The apparatus of claim 7, further comprising:
the first cache module is used for sending the audio and video units to a first storage area,
and acquiring at least one part of the plurality of audio/video units stored in the first storage area according to the processing state of the first processing.
9. The apparatus of claim 7, wherein the calculating the degree of difference between each of the plurality of audio-visual units and a reference unit comprises:
acquiring image characteristics of each audio/video unit in the plurality of audio/video units and the reference unit; and
and calculating the difference degree according to the image characteristics of each audio and video unit in the plurality of audio and video units and the reference unit.
10. The apparatus of claim 7, further comprising:
the second cache module is used for sending the marked audio and video unit to a second storage area,
and acquiring at least one part of marked audio and video units stored in the second storage area according to the processing state of the second processing.
11. The apparatus of claim 7, further comprising:
and the deleting module is used for deleting the audio and video units with the difference degree smaller than or equal to a preset threshold value.
12. The apparatus of claim 7, wherein,
and when the current audio/video unit for the first processing is the first audio/video unit in the audio/video clip, the reference unit is an audio/video unit generated based on a preset audio/video unit.
13. An electronic device, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory to execute the method for fast analyzing and processing audio and video in the cluster according to any one of claims 1 to 6.
CN202010079527.3A 2020-02-04 2020-02-04 Method and device for rapidly analyzing and processing audio and video in cluster and electronic equipment Active CN111246244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010079527.3A CN111246244B (en) 2020-02-04 2020-02-04 Method and device for rapidly analyzing and processing audio and video in cluster and electronic equipment

Publications (2)

Publication Number Publication Date
CN111246244A true CN111246244A (en) 2020-06-05
CN111246244B CN111246244B (en) 2023-05-23

Family

ID=70865019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010079527.3A Active CN111246244B (en) 2020-02-04 2020-02-04 Method and device for rapidly analyzing and processing audio and video in cluster and electronic equipment

Country Status (1)

Country Link
CN (1) CN111246244B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995613A (en) * 2021-05-20 2021-06-18 武汉中科通达高新技术股份有限公司 Analysis resource management method and device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101599179A (en) * 2009-07-17 2009-12-09 北京邮电大学 Method for automatically generating field motion wonderful scene highlights
WO2010055627A1 (en) * 2008-11-14 2010-05-20 Panasonic Corporation Imaging device and digest playback method
US20120076357A1 (en) * 2010-09-24 2012-03-29 Kabushiki Kaisha Toshiba Video processing apparatus, method and system
CN103077203A (en) * 2012-12-28 2013-05-01 青岛爱维互动信息技术有限公司 Method for detecting repetitive audio/video clips
JP2014039137A (en) * 2012-08-15 2014-02-27 Nippon Telegraph &amp; Telephone Corp (NTT) Video analysis device, video analysis method, and video analysis program
CN103905742A (en) * 2014-04-10 2014-07-02 北京数码视讯科技股份有限公司 Video file segmentation method and device
CN104519401A (en) * 2013-09-30 2015-04-15 华为技术有限公司 Video division point acquiring method and equipment
CN105681894A (en) * 2016-01-04 2016-06-15 努比亚技术有限公司 Device and method for displaying video file
KR101873675B1 (en) * 2017-01-24 2018-07-03 전자부품연구원 Method for Transmitting Audio Content Information in Network-based Audio Video Bridging System
WO2019019403A1 (en) * 2017-07-25 2019-01-31 深圳市鹰硕技术有限公司 Interactive situational teaching system for use in k12 stage
CN110312162A (en) * 2019-06-27 2019-10-08 北京字节跳动网络技术有限公司 Selected stage treatment method, device, electronic equipment and readable medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010055627A1 (en) * 2008-11-14 2010-05-20 Panasonic Corporation Imaging device and digest playback method
CN101599179A (en) * 2009-07-17 2009-12-09 北京邮电大学 Method for automatically generating field motion wonderful scene highlights
US20120076357A1 (en) * 2010-09-24 2012-03-29 Kabushiki Kaisha Toshiba Video processing apparatus, method and system
JP2014039137A (en) * 2012-08-15 2014-02-27 Nippon Telegraph &amp; Telephone Corp (NTT) Video analysis device, video analysis method, and video analysis program
CN103077203A (en) * 2012-12-28 2013-05-01 青岛爱维互动信息技术有限公司 Method for detecting repetitive audio/video clips
CN104519401A (en) * 2013-09-30 2015-04-15 华为技术有限公司 Video division point acquiring method and equipment
CN103905742A (en) * 2014-04-10 2014-07-02 北京数码视讯科技股份有限公司 Video file segmentation method and device
CN105681894A (en) * 2016-01-04 2016-06-15 努比亚技术有限公司 Device and method for displaying video file
WO2017118353A1 (en) * 2016-01-04 2017-07-13 努比亚技术有限公司 Device and method for displaying video file
KR101873675B1 (en) * 2017-01-24 2018-07-03 전자부품연구원 Method for Transmitting Audio Content Information in Network-based Audio Video Bridging System
WO2019019403A1 (en) * 2017-07-25 2019-01-31 深圳市鹰硕技术有限公司 Interactive situational teaching system for use in k12 stage
CN110312162A (en) * 2019-06-27 2019-10-08 北京字节跳动网络技术有限公司 Selected stage treatment method, device, electronic equipment and readable medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995613A (en) * 2021-05-20 2021-06-18 武汉中科通达高新技术股份有限公司 Analysis resource management method and device
CN112995613B (en) * 2021-05-20 2021-08-06 武汉中科通达高新技术股份有限公司 Analysis resource management method and device

Also Published As

Publication number Publication date
CN111246244B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
US9811748B2 (en) Adaptive camera setting modification based on analytics data
US20140071273A1 (en) Recognition Based Security
EP3382527B1 (en) Method and apparatus for managing a shared storage system
US10373015B2 (en) System and method of detecting moving objects
US11240542B2 (en) System and method for multiple video playback
US9992443B2 (en) System and methods for time lapse video acquisition and compression
CN114679607B (en) Video frame rate control method and device, electronic equipment and storage medium
CN112584083B (en) Video playing method, system, electronic equipment and storage medium
CN116204308A (en) Dynamic adjusting method and device for audio and video computing power and electronic equipment
US20060062430A1 (en) Feed-customized processing of multiple video streams in a pipeline architecture
CN112530205A (en) Airport parking apron airplane state detection method and device
CN111246244B (en) Method and device for rapidly analyzing and processing audio and video in cluster and electronic equipment
EP3989227A1 (en) Systems and methods for producing a privacy-protected video clip
CN112181144B (en) Terminal control method and system
US10805527B2 (en) Image processing device and method
CN112799773A (en) Data visualization method, terminal device, system and storage medium
CN115422203A (en) Data management method, device, equipment and medium for block chain distributed system
US7667732B1 (en) Event generation and camera cluster analysis of multiple video streams in a pipeline architecture
CN111489276B (en) Personnel management method and related device
CN116132623A (en) Intelligent analysis method, system and equipment based on video monitoring
CN111405355B (en) Processing method and device for dynamically generating audio and video clips and electronic equipment
KR102201241B1 (en) Apaptive Object Recognizing Apparatus and Method for Processing Data Real Time In Multi Channel Video
EP4358508A1 (en) Methods and systems for privacy protecting a live video stream with an archived video system
US20160134842A1 (en) Mobile device capable of being associated with security equipment using widget
CN117170857A (en) Resource management and control method, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Method, device, and electronic equipment for rapid analysis and processing of audio and video within a cluster

Effective date of registration: 20230808

Granted publication date: 20230523

Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee

Pledgor: BEIJING BEISIKE TECHNOLOGY Co.,Ltd.

Registration number: Y2023990000393
