US20220030225A1 - Video data burst control for remote towers - Google Patents


Info

Publication number
US20220030225A1
Authority
US
United States
Prior art keywords
global
gop sequence
distribution
subsections
frames
Prior art date
Legal status
Abandoned
Application number
US17/291,511
Inventor
Henrik ENGSTRÖM
Current Assignee
Saab AB
Original Assignee
Saab AB
Application filed by Saab AB filed Critical Saab AB
Assigned to SAAB AB. Assignor: ENGSTRÖM, Henrik
Publication of US20220030225A1

Classifications

    • H04N19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/114 Adapting the group of pictures [GOP] structure, e.g. number of B-frames between two anchor frames
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/172 Adaptive coding characterised by the coding unit, the unit being a picture, frame or field
    • H04N19/177 Adaptive coding characterised by the coding unit, the unit being a group of pictures [GOP]
    • G08G5/0026 Arrangements for implementing traffic-related aircraft activities, e.g. generating, displaying, acquiring or managing traffic information, located on the ground
    • G08G5/0043 Traffic management of multiple aircrafts from the ground
    • G08G5/0082 Surveillance aids for monitoring traffic from a ground station
    • H04N21/2187 Live feed
    • H04N21/23655 Statistical multiplexing, e.g. by controlling the encoder to alter its bitrate to optimize the bandwidth utilization
    • H04N21/26275 Content or additional data distribution scheduling for distributing content in a staggered manner, e.g. in a near video on demand system

Definitions

  • the present invention relates to a data management method for remote towers of a remote Air Traffic Control (ATC) system. More specifically, the present invention relates to a method for managing data output from multiple cameras of a remote tower via a common transmission media to a central entity.
  • Remote and digital tower is a new concept where the Air Traffic Service (ATS) at an airport is performed somewhere else than in the local control tower.
  • in these systems, multiple cameras are used to monitor the air and ground traffic at an airport.
  • the camera output is transmitted in real-time to a central location where air traffic controllers view the camera video and control the traffic at the airport.
  • the transmission media from the airport to the central location often has limited capacity, while at the same time being subject to very strict requirements in terms of data loss and delay.
  • moreover, the data output from each camera varies considerably over time. If the cameras are allowed to transmit their data without any control, the capacity of the transmission media may easily be exceeded.
  • an object of the present invention is to manage the data output from the cameras so that momentary bandwidth peaks, and the data loss they may cause, are avoided. This object is achieved by means of a method for managing data output from multiple cameras of a remote tower via a common transmission media to a central entity, wherein each camera is configured to periodically output an I-frame (Intra frame) in a Group of Pictures (GOP) sequence.
  • the method comprises: determining a global GOP sequence for the multiple cameras, the global GOP sequence comprising the I-frame of each camera of the multiple cameras; determining a first distribution of I-frames within the global GOP sequence; and forming a second distribution of I-frames within the global GOP sequence by moving the I-frame of at least one of the multiple cameras within the global GOP sequence so that the second distribution is different from the first distribution.
  • the proposed method allows for transmitting data from a remote tower to a control centre with reduced data peaks. More specifically, the video data can be transmitted without additional buffering while at the same time keeping the end-to-end transmission delay at a low level. Stated differently, the method allows for effectively controlling the data output of the cameras in such a way that it does not form momentary large peaks in the total bandwidth, which may cause data loss.
  • the video output (the data output from the cameras) is encoded by a method which generates an image output comprising an I-frame (Intra frame) in a Group of Pictures (GOP) sequence.
  • the step of moving an I-frame may for example include shortening a single GOP sequence for a specific camera so that future I-frames are generated and transmitted at their designated positions within the global GOP sequence.
  • the shortening can be done by e.g. requesting a new I-frame at a specific point in time within the global GOP sequence, which will consequently move the start of the individual GOP sequence to that point in time.
  • the step of moving an I-frame may alternatively be done by extending a single GOP sequence for a specific camera so that future I-frames are generated and transmitted at their designated positions within the global GOP sequence.
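The two options above (shortening or extending a single camera's GOP) can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the function name, the fixed nominal GOP length, and the representation of I-frame positions as frame indices ("phases") are all assumptions.

```python
def frames_until_next_i_frame(current_phase: int, target_phase: int,
                              gop_length: int, shorten: bool) -> int:
    """Return how many frames after the camera's last I-frame the next I-frame
    should be emitted so that future I-frames land on target_phase within the
    global GOP sequence.

    shorten=True  -> request an early I-frame (one shortened GOP),
    shorten=False -> delay the next I-frame (one extended GOP).
    """
    offset = (target_phase - current_phase) % gop_length
    if offset == 0:
        return gop_length  # already at its designated position: no change
    if shorten:
        return offset                # shortened GOP of `offset` frames
    return gop_length + offset       # extended GOP past its nominal end
```

For example, a camera emitting its I-frame at frame index 10 of a 60-frame global GOP sequence can be moved to index 30 either by one shortened GOP of 20 frames or by one extended GOP of 80 frames; after that single adjusted GOP, the camera resumes its nominal 60-frame cadence at the new position.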
  • the data output (i.e. the video output) is digitally encoded by a method that uses I-frames together with Predicted Frames (P-frames) and possibly Bi-directional Frames (B-frames).
  • I-frames are larger than the other frames (P-frames/B-frames) and can be said to form peaks in the output bandwidth from the cameras.
  • the I-frames can further be said to contain the full image and therefore do not require any additional information to reconstruct them.
  • encoders use GOP structures that cause each I-frame to be a “clean random access point,” such that decoding can start cleanly on an I-frame.
  • a GOP may include an I-frame, one or more P-frames, and one or more B-frames.
  • the I-frame (intra coded picture) is coded independently of all other pictures. Each GOP begins (in decoding order) with this type of picture.
  • the I-frame is a complete image, like a JPEG image file.
  • the P-frame (predictive coded picture) contains only the changes relative to the preceding frame, and therefore requires less data than an I-frame. P-frames may also be known as delta-frames.
  • the B-frame (bi-predictive coded picture) saves even more space by using differences between the current frame and both the preceding and following frames to specify its content.
  • the terms “frame” and “picture” are used interchangeably, and even though the term “frame” is mainly used in the present disclosure, it is considered to encompass both a complete image and a partial image (i.e. a field).
  • the term remote tower should in the present context be interpreted broadly, and may for example include embodiments where the cameras are spatially distributed at an airport instead of gathered in a tower-like structure.
  • the present invention is at least partly based on the realization that if the data output from the cameras is left uncontrolled, there is a relatively high probability that I-frames will be clustered together in time and cause large peaks in the total bandwidth when summed together.
  • the present inventor realized that the capacity of the common transmission media is limited, such that if the output bandwidth from a plurality of cameras peaks closely in time, the system will “drop” video data in such a way that it will not be received at the receiving side. In other words, if the bandwidth limit is exceeded, data loss will occur, which is unacceptable in Air Traffic Control applications.
  • in some embodiments, the method further comprises identifying, in the first distribution of I-frames, at least one time slot comprising at least two I-frames, the time slot being of a limited length shorter than the global GOP sequence; the step of forming the second distribution then comprises moving at least one I-frame of the at least two I-frames from the at least one time slot to a different time slot within the global GOP sequence.
  • a time slot may in the present context be understood as a limited subsection or part of the “time-axis” of the global GOP sequence. Accordingly, the I-frames are moved from time slots (in the global GOP sequence) which are considered as running a relatively high risk of momentarily exceeding bandwidth limits, to a different time slot. Preferably, the I-frames are moved to arbitrary time slots which contain no I-frames.
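This time-slot handling can be sketched as a minimal Python routine. The data layout (I-frame time stamps in seconds within one global GOP sequence) and the function name are assumptions made for illustration, not details from the patent.

```python
from collections import defaultdict

def redistribute_by_time_slots(i_frame_times, slot_length, gop_length):
    """Identify time slots (of limited length, shorter than the global GOP
    sequence) holding two or more I-frames, and move the surplus I-frames to
    arbitrary slots holding none. Returns a list of (old_time, new_time)
    moves, where the new time is the centre of the receiving empty slot."""
    n_slots = int(gop_length // slot_length)
    slots = defaultdict(list)
    for t in i_frame_times:
        slots[int(t // slot_length) % n_slots].append(t)
    empty = [k for k in range(n_slots) if not slots[k]]  # slots with no I-frames
    moves = []
    for k in range(n_slots):
        while len(slots[k]) > 1 and empty:   # crowded slot: risk of a data peak
            surplus = slots[k].pop()
            destination = empty.pop(0)       # an arbitrary free slot
            moves.append((surplus, (destination + 0.5) * slot_length))
    return moves
```

With I-frames at 0.1 s, 0.2 s and 2.1 s in a 4 s global GOP divided into 1 s slots, the first slot holds two I-frames, so one of them is moved to the centre of an empty slot.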
  • in some embodiments, the step of determining a first distribution of I-frames within the global GOP sequence comprises dividing the global GOP sequence into a plurality of subsections and identifying high-density subsections and low-density subsections, wherein a high-density subsection comprises a number of I-frames above a first predefined threshold and a low-density subsection comprises a number of I-frames below a second predefined threshold; the step of forming the second distribution within the global GOP sequence then comprises moving at least one I-frame from each high-density subsection to a low-density subsection.
  • the global GOP sequence is arranged in a plurality of defined subsections or parts. This may for example be done by forming a timeline and recording every time an I-frame occurs in the global GOP sequence. This timeline is then divided into a plurality of segments forming the subsections. The subsections are then classified based on how many “recordings” they contain. Subsequently, the I-frames are moved from the subsections that contain a relatively high number of I-frames to subsections that contain a relatively low number (e.g. zero) of I-frames, in order to even out the spread of I-frames within the global GOP sequence, thereby reducing the risk of momentary peaks in bandwidth in the data output.
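The classification step described above can be sketched in a few lines. The thresholds below (two or more I-frames for high density, zero for low density) are illustrative defaults taken from the examples elsewhere in this disclosure, not fixed values; the function name is an assumption.

```python
def classify_subsections(i_frame_times, n_subsections, gop_length,
                         high_threshold=2, low_threshold=1):
    """Divide the global GOP timeline into equal subsections, count the
    recorded I-frames in each, and classify every subsection as 'high'
    (count >= high_threshold), 'low' (count < low_threshold) or 'normal'."""
    length = gop_length / n_subsections
    counts = [0] * n_subsections
    for t in i_frame_times:
        counts[min(int(t // length), n_subsections - 1)] += 1
    labels = ['high' if c >= high_threshold
              else 'low' if c < low_threshold
              else 'normal'
              for c in counts]
    return counts, labels
```

For four I-frames recorded at 1 s, 2 s, 3 s and 31 s in a 60 s global GOP divided into four subsections, the first subsection is classified as high-density, the third as normal, and the rest as low-density.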
  • in some embodiments, the global GOP sequence comprises N I-frames, wherein the step of dividing the global GOP sequence into a plurality of subsections comprises dividing the global GOP sequence into N subsections; the step of forming the second distribution within the global GOP sequence then comprises moving at least one I-frame from each high-density subsection to a low-density subsection.
  • the step of dividing the global GOP sequence into N subsections may comprise forming a group of subsections comprising the N subsections, and aligning the group of subsections with the global GOP sequence based on a predefined metric.
  • the step of forming the second distribution of I-frames within the global GOP sequence may further comprise changing a status of each low-density subsection to a normal-density subsection when the I-frame is moved thereto.
  • a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a remote tower system, the one or more programs comprising instructions for performing the method according to any one of the above discussed embodiments in reference to the first aspect of the present invention.
  • according to another aspect, there is provided a remote tower system for air traffic control comprising: a plurality of cameras, each camera being configured to periodically output an I-frame in a Group of Pictures (GOP) sequence; a common transmission media; and a controller arranged to monitor and control the data output of each camera onto the common transmission media, the controller being configured to: determine a global GOP sequence for the plurality of cameras, the global GOP sequence comprising the I-frame of each camera of the plurality of cameras; determine a first distribution of I-frames within the global GOP sequence; and form a second distribution of I-frames within the global GOP sequence by moving the I-frame of at least one of the plurality of cameras within the global GOP sequence so that the second distribution is different from the first distribution.
  • the controller (may also be referred to as a control unit) can be provided by means of appropriate software, hardware or a combination thereof. Moreover, the controller may be referred to as a “supervisor unit” which is configured to monitor the data output on the common transmission media directly or indirectly by monitoring the output from each individual camera.
  • the controller is further configured to identify, in the first distribution of I-frames, at least one time slot comprising at least two I-frames, the time slot being of a limited length shorter than the global GOP sequence; and form the second distribution of I-frames within the global GOP sequence by moving at least one I-frame of the at least two I-frames from the at least one time slot to a different time slot within the global GOP sequence.
  • a time slot may in the present context be understood as a defined subsection of the time-axis of the global GOP sequence.
  • the controller is further configured to arrange the global GOP sequence into N subsections by forming a group of subsections comprising the N subsections, and aligning the group of subsections with said global GOP sequence based on a predefined metric.
  • the group of subsections may be construed as a series of subsections, each subsection having a defined length in time.
  • the controller is further configured to change a status of each low-density subsection to a normal-density subsection when the I-frame is moved thereto.
  • FIG. 1 is a schematic perspective view illustration of a remote tower system for air traffic control in accordance with an embodiment of the present invention
  • FIG. 2 a is a schematic plot illustrating the data output from one camera in a remote tower system in accordance with an embodiment of the present invention
  • FIG. 2 b is a schematic plot illustrating a first distribution of the data output from a plurality of cameras in a remote tower system in accordance with an embodiment of the present invention
  • FIG. 2 c is a schematic plot illustrating a second distribution of the data output from a plurality of cameras in a remote tower system in accordance with an embodiment of the present invention
  • FIG. 3 is a flow chart representation of a method for managing data output from multiple cameras of a remote tower in accordance with an embodiment of the present invention
  • FIG. 4 is a block chart representation of a remote tower system in accordance with an embodiment of the present invention.
  • FIG. 1 illustrates a schematic overview of a remote tower system 1 in an Air Traffic Control (ATC) application, according to an exemplary embodiment of the present invention.
  • FIG. 1 shows a remote tower 4 arranged at an airport 20 .
  • the remote tower 4 may also be referred to as a remote and digital tower and can generally be explained as a solution where the Air Traffic Service (ATS) is performed somewhere else 3 than in a local control tower.
  • a general remote tower system has a remote ATC control room 3 with video-sensor 9 based surveillance instead of the conventional “out-of-the-window” view from a real tower.
  • the optical sensors 9 , e.g. cameras, supply the Air Traffic Control Officer(s) 7 at the remote tower centre 3 with a high-quality real-time image feed of the runway, the movement area, and the nearby airspace.
  • the real-time images are displayed at large monitors 6 providing up to a 360-degree view of the area around the remote tower 4 .
  • the remote tower 4 may for example comprise more than ten cameras (such as e.g. twelve or fourteen cameras) arranged in a camera house.
  • the camera house is arranged to protect the cameras from weather (rain, snow, hail, moisture, etc.), high and low temperatures, sunlight, insects, birds, etc.
  • the cameras 9 may be any type of suitable optical sensors such as e.g. high definition video cameras, infrared cameras, night vision cameras, etc.
  • the remote tower may also be realized as a plurality of cameras or groups of cameras spatially distributed at different locations (not shown) overlooking the airport, instead of being gathered in a single tower-like structure as illustrated in the drawing.
  • the remote tower 4 may further be provided with radar systems, microphones, and any other suitable arrangements used in general ATC applications. These aspects are however considered obvious for the skilled reader and will, for the sake of brevity and conciseness, not be further discussed in any detail.
  • the plurality of cameras 9 are arranged to transmit a data output (i.e. an image feed or video feed) via a common transmission media 2 (e.g. wide area network) to a central entity 3 (e.g. control centre).
  • the term common with respect to the common transmission media is to be understood as “shared by” and not as “ordinary” or “conventional”.
  • the common transmission media may be realised by a wired connection, wireless connection, or a combination thereof.
  • the data output, i.e. the video output, is digitally encoded by a method that uses I-frames together with Predicted Frames (P-frames) and possibly Bi-directional Frames (B-frames).
  • each camera 9 is configured to generate an image output (video output) comprising an I-frame (Intra frame) in a Group of Pictures (GOP) sequence.
  • I-frames are larger than the other frames (P-frames/B-frames) and can be said to form peaks in the output bandwidth from the cameras 9 .
  • the I-frames can be said to contain the full image and therefore do not require any additional information to reconstruct them.
  • encoders use GOP structures that cause each I-frame to be a “clean random access point,” such that decoding can start cleanly on an I-frame.
  • the time interval for I-frames varies (e.g. multiple times per second, or once every second, once every 10 seconds, etc.).
  • the more I-frames that are to be transmitted, the larger the video stream will be, which requires more bandwidth capacity.
  • an example of data output from a camera 9 can be seen in FIG. 2 a , where the large peaks are I-frames and the P-frames are barely visible between the I-frames.
  • the GOP is 60 frames in the illustrated example in FIG. 2 a .
  • the plots in FIGS. 2 a -2 c show the amount of data, e.g. in bytes, (y-axis) over the output frame index (x-axis).
  • as described above, a GOP may include I-frames, P-frames, and B-frames.
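The effect illustrated by FIGS. 2 b and 2 c can be reproduced numerically with a toy model. The frame sizes, camera count and GOP length below are illustrative round numbers chosen for this sketch, not measurements from the patent.

```python
def aggregate_peak(n_cameras, gop_length, i_size, p_size, phases):
    """Largest per-frame-index sum of data output over all cameras, where each
    camera emits one I-frame of i_size at its phase (frame index) and P-frames
    of p_size at every other frame index."""
    return max(
        sum(i_size if idx == phases[cam] else p_size
            for cam in range(n_cameras))
        for idx in range(gop_length)
    )

# Ten cameras, 60-frame GOP, 100 kB I-frames, 5 kB P-frames (all illustrative).
clustered = aggregate_peak(10, 60, 100, 5, phases=[0] * 10)
staggered = aggregate_peak(10, 60, 100, 5, phases=list(range(0, 60, 6)))
```

With all I-frames colliding, the aggregate peak is 10 x 100 = 1000 kB per frame interval; with the I-frames staggered every six frames it drops to 100 + 9 x 5 = 145 kB, illustrating why the second, evened-out distribution is far less likely to exceed the capacity of the common transmission media.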
  • the remote tower system 1 has a controller 10 (may also be referred to as a control unit), here as a part of a remote server/controller, such as e.g. a cloud-based system 5 , arranged to monitor and control the data output of each camera 9 onto the common transmission media 2 .
  • the controller 10 is arranged to monitor the data output from each camera in real-time, and is capable of moving the GOP sequence for each individual camera by e.g. an external command.
  • the controller 10 is configured to determine a global GOP sequence for the plurality of cameras 9 , where the global GOP sequence comprises the I-frame of each camera of the plurality of cameras 9 . Thereafter, the controller 10 analyses the global GOP sequence and determines or observes a first distribution of I-frames within the global GOP sequence.
  • FIG. 2 b shows data output from 10 individual cameras 9 , where the peaks represent I-frames from each of the cameras.
  • the plot shows four full global GOP sequences.
  • the global GOP sequences in FIG. 2 b comprise output from ten individual cameras. It should be noted that this is merely an example of a first (un-ordered) distribution chosen for illustrative purposes.
  • the data peaks may be partly or completely overlapping in some areas, which would further elucidate the problems in terms of exceeding bandwidth limits, particularly when considering the aggregated data output over time (not shown).
  • the controller 10 is configured to form a second distribution of I-frames within the global GOP sequence by moving the I-frame of at least one of the plurality of cameras 9 within the global GOP sequence so that the second distribution is different from the first distribution.
  • the controller 10 may be configured to form a second distribution of I-frames within the global GOP sequence by moving the I-frame of at least one of the plurality of cameras 9 within the global GOP sequence so that the I-frames are more evenly distributed within the global GOP sequence as compared to the first distribution.
  • the step of moving an I-frame may for example include shortening a single GOP sequence for a specific camera so that future I-frames are generated and transmitted at their designated positions within the global GOP sequence.
  • the shortening can be done by e.g. requesting a new I-frame at a specific point in time within the global GOP sequence, which will consequently move the start of the individual GOP sequence to that point in time.
  • the step of moving an I-frame may alternatively be done by extending a single GOP sequence for a specific camera so that future I-frames are generated and transmitted at their designated positions within the global GOP sequence.
  • an example of a resulting second distribution is illustrated in FIG. 2 c , showing the sequence from FIG. 2 b but where the I-frames are re-ordered in time (x-axis).
  • the risk of exceeding a bandwidth limit of a transmission media is relatively high in the high-density sections around frame index 60, 120, 180, 240 in the first distribution illustrated in FIG. 2 b .
  • the peaks defined by the I-frames are more evenly distributed in time, which reduces the risk of temporary aggregated transmission peaks.
  • FIG. 3 is a schematic flow chart representation of a method 100 for managing data output from multiple cameras of a remote tower via a common transmission media to a central entity.
  • Each camera is here configured to periodically output an I-frame 50 a in a GOP sequence.
  • the method 100 comprises a step of determining 101 a global GOP sequence 51 for the plurality of cameras.
  • the global GOP sequence 51 includes the I-frame 50 a of each camera of the multiple cameras.
  • a first distribution of I-frames 50 a within the global GOP sequence 51 is determined 102 .
  • This may for example be construed as forming a timeline 52 and recording, on the timeline, when each camera emits an I-frame 50 a.
  • Each I-frame 50 a is accordingly marked with a representation 50 b on the timeline 52 .
  • the start 53 of the global GOP sequence 51 and the end of the global GOP sequence 54 accordingly serve as a start and stop of the “recording”.
  • the global GOP sequence 51 is divided 104 into a plurality of subsections 55 .
  • the global GOP sequence 51 is divided into as many subsections 55 as there are I-frames 50 a within the global GOP 51 .
  • the global GOP sequence 51 comprises the data output from four cameras, and accordingly includes four I-frames 50 a resulting in four subsections 55 .
  • the method 100 may comprise a step of identifying subsections 55 in the formed global GOP sequence timeline 52 .
  • the subsections 55 are then arranged in a group of subsections 55 .
  • the group includes the individual subsections arranged in a sequential manner.
  • the total length (in time) of the group of subsections 55 is equal to the length (in time) of the global GOP sequence 51 .
  • high-density subsections 55 a and low-density subsections 55 b are identified 106 .
  • a high-density subsection 55 a can be construed as a subsection having a relatively high density of I-frames, e.g. two or more I-frames 50 a.
  • alternatively, a high-density subsection 55 a can be understood as a subsection of the global GOP sequence that is more likely to exceed available bandwidth on the common transmission media than other subsections (low and normal-density subsections).
  • a low-density subsection 55 b can be construed as a subsection having a relatively low density of I-frames 50 a, e.g. zero I-frames.
  • alternatively, a low-density subsection 55 b can be understood as a subsection of the global GOP sequence into which one can insert additional I-frames without exceeding the bandwidth of the common transmission media.
  • the amount of data that has to be transmitted per second from the plurality of cameras is referred to as bandwidth. It is generally measured in bit/s, but may also be measured in byte/s, which equals 1/8 of the corresponding bit rate.
  • the method 100 may further comprise identifying or classifying the remaining subsections as normal subsections 55 c. Accordingly, the step of classifying 106 subsections 55 in the group of subsections 55 may for example be such that subsections containing two or more I-frames 50 a are classified as high-density subsections 55 a, subsections containing zero I-frames are classified as low-density subsections 55 b, and the remaining subsections are classified as normal subsections 55 c.
  • the method may include a step of aligning 105 the group of subsections 55 by some predefined metric before identifying/classifying the subsections as high, low and normal-density. More specifically, the alignment 105 or adjustment may be understood as optimizing the placement of the group of subsections 55 on the formed timeline 52 representation of the global GOP sequence 51 .
  • the predefined metric may for example be to minimize the sum of absolute differences between the time stamp of an I-frame 50 a and the closest central point of a subsection 55 .
  • the step of aligning 105 the group of subsections may comprise adjusting the position in time of the group of subsections 55 such that a sum of an absolute difference between a time stamp of an I-frame 50 a and a closest central point of a subsection is below a first predefined threshold.
  • a predefined metric is to minimize the sum of the squared differences between the time stamp of an I-frame 50 a and the closest central point of a section 55 .
  • the step of aligning 105 the group of subsections may comprise adjusting the position in time of the group of subsections 55 such that a sum of a squared difference between a time stamp of an I-frame 50 a and a closest central point of a section 55 is below a second predefined threshold.
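  • the alignment step 105 described in the bullets above can, purely by way of illustration, be sketched as follows (Python; the function names and the grid search over candidate offsets are illustrative assumptions and not part of the claimed method):

```python
def alignment_cost(timestamps, offset, sub_len, n_subs, squared=False):
    """Sum of absolute (or squared) differences between each I-frame
    time stamp and the central point of its closest subsection, for a
    group of subsections shifted by `offset` along the timeline."""
    centres = [offset + (k + 0.5) * sub_len for k in range(n_subs)]
    power = 2 if squared else 1
    return sum(min(abs(t - c) ** power for c in centres) for t in timestamps)

def align_group(timestamps, sub_len, n_subs, steps=100):
    """Try `steps` candidate offsets within one subsection length and
    return the offset that minimises the alignment cost."""
    candidates = [i * sub_len / steps for i in range(steps)]
    return min(candidates,
               key=lambda o: alignment_cost(timestamps, o, sub_len, n_subs))
```

  • the cost returned by `alignment_cost` would then be compared against the first (absolute) or second (squared) predefined threshold mentioned above.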
  • the method 100 includes a step of forming 103 a second distribution of I-frames 50 within the global GOP sequence 51 by moving 107 the I-frame 50 a of one or more cameras within the global GOP sequence 51 such that the second distribution is different from the first distribution.
  • one of the I-frames 50 a in the high-density subsection 55 a is moved to the low-density subsection 55 b. More specifically, the I-frame 50 a is moved to a different point in time within the interval defined by the low-density subsection 55 b. Preferably, the I-frame 50 a is moved to a center point of the interval defined by the low-density subsection 55 b.
  • the method may further comprise a step of changing 108 the status of the subsections from a high/low-density subsection to a normal subsection 55 c.
  • the forming 103 of the second distribution may include populating a list ⁇ A ⁇ with I-frames 50 a that are out of order by taking I-frames 50 a from high-density subsections 55 a until there are no more high-density subsections left.
  • the method 100 may also include making a note, for each I-frame in the list {A}, of where it shall be moved within the global GOP sequence (i.e. to a free low-density subsection 55 b ).
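  • the populating of the list {A} and the designation of new positions described above can be sketched as follows (Python; the data layout — a mapping from subsection index to the I-frames it contains — is an illustrative assumption):

```python
def plan_moves(subsections):
    """Build the list {A} of out-of-order I-frames by taking surplus
    I-frames from high-density subsections (two or more I-frames) and
    designating a free low-density subsection (zero I-frames) for each.
    Returns a list of (frame_id, from_idx, to_idx) moves and updates
    `subsections` in place so no high-density subsections remain."""
    high = [i for i, f in subsections.items() if len(f) >= 2]
    low = [i for i, f in subsections.items() if len(f) == 0]
    moves = []  # the list {A}, each entry with its designated position
    for i in high:
        while len(subsections[i]) >= 2 and low:
            frame = subsections[i].pop()    # take a surplus I-frame
            target = low.pop(0)             # a free low-density subsection
            subsections[target].append(frame)
            moves.append((frame, i, target))
    return moves
```

  • after the loop, every subsection holds exactly one I-frame (provided the number of I-frames equals the number of subsections), which corresponds to all subsections being re-classified as normal-density.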
  • FIG. 4 is a schematic block diagram illustrating a remote tower system 1 according to an exemplary embodiment of the present invention.
  • the remote tower system 1 has a plurality of cameras 9 arranged to transmit data output via a common transmission media 2 to a central entity 3 .
  • the data output is generally in the form of images or parts of images of the surrounding environment of the cameras 9 .
  • Each camera 9 is configured to periodically output an Intra Frame in a GOP sequence.
  • the system 1 further has a controller 10 connected to each camera 9 .
  • the controller 10 is arranged to monitor 8 and control the data output (in real-time) of each camera 9 onto the common transmission media 2 .
  • the controller 10 may for example be manifested as a general-purpose processor, an application specific processor, a circuit containing processing components, a group of distributed processing components, a group of distributed computers configured for processing, a field programmable gate array (FPGA), etc.
  • the controller 10 may for example be in the form of a circuit having a processor 11 such as e.g. a microprocessor, microcontroller, programmable digital signal processor or another programmable device.
  • the controller 10 may also, or instead, include an application-specific integrated circuit (ASIC), a programmable gate array or programmable array logic, a programmable logic device, or a digital signal processor.
  • the processor 11 or an associated memory 12 may further include computer executable code that controls operation of the programmable device.
  • the controller 10 may comprise a digital signal processor arranged and configured for digital communication with an off-site server or cloud-based server.
  • the digital signal processor may be configured for digital communication with one or more local control units associated with the cameras 9 .
  • data may be sent to and from the controller 10 , as readily understood by the skilled reader.
  • parts of the described solution may be implemented either in the controller 10 , in a system located external to the controller 10 , or in a combination internal and external to the controller 10 ; for instance in a server in communication with the controller 10 , a so-called cloud solution.
  • a communication signal may be sent to an external system, which performs the steps of determining how to move the I-frames so as to form the second distribution, and sends back information indicating the moves and other relevant parameters used in forming the second distribution.
  • the processor 11 may be or include any number of hardware components for conducting data or signal processing or for executing computer code stored in memory 12 .
  • the controller 10 may have an associated memory 12 .
  • the memory 12 may be one or more devices for storing data and/or computer code for completing or facilitating the various methods described in the present description.
  • the memory 12 may include volatile memory or non-volatile memory.
  • the memory 12 may include database components, object code components, script components, or any other type of information structure for supporting the various activities of the present description.
  • any distributed or local memory device may be utilized with the systems and methods of this description.
  • the memory 12 is communicably connected to the processor 11 (e.g., via a circuit or any other wired, wireless, or network connection) and includes computer code for executing one or more processes described herein.
  • one or more communication interfaces 13 , 14 and/or one or more antenna interfaces may be provided and furthermore, also one or more sensor interfaces (not shown) may be provided for acquiring data from sensors associated with the system.
  • the controller 10 is configured to determine a global GOP sequence for the plurality of cameras 9 .
  • the global GOP sequence comprises one I-frame of each camera in the plurality of cameras.
  • the global GOP sequence is a continuous or discrete time series; it can thus be construed as the smallest interval in time of the data output on the common transmission media 2 in which one can fit the I-frame from each camera 9 .
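  • one plausible way to determine the length of such a global GOP sequence — an illustrative assumption, since the disclosure does not prescribe a particular computation — is the least common multiple of the individual camera GOP lengths, i.e. the smallest interval containing an I-frame from every camera at a repeating position:

```python
from functools import reduce
from math import gcd

def global_gop_length(camera_gop_lengths):
    """Least common multiple of the individual GOP lengths (in frames).
    Assumes integer frame counts; e.g. cameras with GOPs of 30 and 60
    frames share a global GOP of 60 frames."""
    return reduce(lambda a, b: a * b // gcd(a, b), camera_gop_lengths)
```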
  • the controller is configured to determine a first distribution of I-frames in the global GOP sequence. This step may also be construed as detecting or observing a first distribution (i.e. unordered distribution) of I-frames within the determined global GOP sequence.
  • the control unit 10 is configured to form a second distribution (i.e. ordered distribution) of I-frames within the global GOP sequence by moving the I-frame of at least one camera 9 within the global GOP sequence so that the second distribution is different from the first distribution.
  • the controller 10 may be configured to perform any one of the method steps discussed in the foregoing in reference to FIG. 3 .
  • the controller 10 may be configured to arrange the global GOP sequence into a plurality of sequential subsections together forming a group of subsections.
  • the controller 10 is configured to identify at least one high-density subsection and at least one low-density subsection in the group of subsections.
  • the controller 10 may for example review the number of I-frames within each subsection in the group of subsections and classify them based on the number of identified I-frames within each subsection.
  • each subsection having two or more I-frames can be classified as a high-density subsection, while each subsection having zero I-frames can be classified as a low-density subsection, and the remaining subsections can be classified as normal-density subsections.
  • the controller 10 can be configured to control the output of each camera 9 so that one or more I-frames from the high-density subsections are moved to one or more low-density subsections so that all subsections of the group of subsections can be re-classified as normal-density subsections.
  • thereby, the second distribution is formed, having a more evenly spread bandwidth requirement within the global GOP sequence as compared to the first distribution.
  • the controller 10 is configured to ensure that no high-density subsections are formed in the second distribution.
  • the inventive method may be construed as, on a periodic basis, analysing a current distribution of I-frames within the global GOP, identifying a list ⁇ A ⁇ containing I-frames that are out of order, designating a new position for each I-frame in the list ⁇ A ⁇ within the global GOP sequence, and moving one or more I-frames from the list ⁇ A ⁇ to their new designated positions within the global GOP sequence.
  • the event that invokes periodic reordering can be based on a periodic timer event, on the global GOP sequence, or an external command (e.g. the system controller noticing an increasing risk of congestion in the data traffic).
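  • the analyse/identify/designate/move cycle described above can be sketched as a single reordering pass (Python; the callbacks `plan` and `apply_move` are hypothetical stand-ins for the analysis step and for the per-camera control interface, whichever event — timer, global GOP boundary or external command — triggers the pass):

```python
def reorder_step(distribution, plan, apply_move):
    """One periodic reordering pass: analyse the current distribution,
    build the list {A} of out-of-order I-frames together with their
    designated new positions, and move each of them."""
    moves = plan(distribution)          # list {A}: (frame, from, to)
    for frame_id, src, dst in moves:
        apply_move(frame_id, src, dst)  # e.g. request an early I-frame
    return len(moves)
```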
  • the present disclosure contemplates methods, devices and program products on any machine-readable media for accomplishing various operations.
  • the embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system.
  • Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon.
  • Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor.
  • machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor.
  • when information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired and wireless), any such connection is properly termed a machine-readable medium.
  • Machine-executable instructions include, for example, instructions and data that cause a general-purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions. As already exemplified, some parts or all of the functions may be realized as a “cloud-based” solution.

Abstract

The present invention relates to a method for managing data output from multiple cameras of a remote tower via a common transmission media to a central entity, a non-transitory computer-readable storage medium, and a corresponding remote tower system. The method comprises determining a global GOP sequence for the multiple cameras, determining a first distribution of I-frames within the global GOP sequence, and forming a second distribution of I-frames within that global GOP sequence by moving the I-frame of at least one camera. The proposed method allows for transmitting data from a remote virtual tower to a control centre with reduced data peaks, whereby the data can be transmitted without additional buffering and the end-to-end transmission delays can be kept at a low level.

Description

    TECHNICAL FIELD OF THE INVENTION
  • The present invention relates to a data management method for remote towers of a remote Air Traffic Control (ATC) system. More specifically, the present invention relates to a method for managing data output from multiple cameras of a remote tower via a common transmission media to a central entity.
  • BACKGROUND
  • Remote and digital tower is a new concept where the Air Traffic Service (ATS) at an airport is performed somewhere else than in the local control tower. In these systems one uses multiple cameras to monitor the air and ground traffic at an airport. The camera output is transmitted in real-time to a central location where air traffic controllers view the camera video and control the traffic at the airport.
  • The transmission media from the airport to the central location often has limited capacity, and at the same time very strict requirements in terms of data loss and delay. In general, the data output from each camera varies a lot. If the cameras are allowed to transmit their data without any control, the capacity of the transmission media may easily be exceeded.
  • Accordingly, there is a need for an active bandwidth control function; otherwise, one runs a high risk of frequent data loss, which is unacceptable in ATS environments.
  • To this end, a conventional solution to this problem is to buffer the data and send it out at a constant pace (i.e. a “leaky-bucket” technique). However, such methods suffer from a general drawback of introducing relatively large end-to-end transmission delays. Alternatively, one could contemplate increasing the maximum available bandwidth; this is, however, generally not viable from a cost perspective.
  • There is therefore a need in the art for a new method which allows the data to be transmitted without additional buffering, and which allows the end-to-end transmission delays to stay low in a cost-effective manner.
  • SUMMARY OF THE INVENTION
  • It is therefore an object of the present invention to provide a method for managing data output from multiple cameras of a remote tower via a common transmission media to a central entity, a non-transitory computer-readable storage medium, and a remote tower system, which alleviate all or at least some of the above-discussed drawbacks of presently known systems.
  • The term exemplary is in the present context to be understood as serving as an example, instance or illustration.
  • This object is achieved by means of a method for managing data output from multiple cameras of a remote tower via a common transmission media to a central entity, wherein each camera is configured to periodically output an I-frame (Intra frame) in a Group of Pictures (GOP) sequence. The method comprises:
  • determining a global GOP sequence for the multiple cameras, the global GOP sequence comprising the I-frame of each camera of the multiple cameras;
  • determining a first distribution of I-frames within the global GOP sequence;
  • forming a second distribution of I-frames within the global GOP sequence by moving the I-frame of at least one of the multiple cameras within the global GOP sequence so that the second distribution is different from the first distribution.
  • The proposed method allows for transmitting data from a remote tower to a control centre with reduced data peaks. More specifically, the video data can be transmitted without additional buffering while at the same time keeping the end-to-end transmission delay at a low level. Stated differently, the method allows for effectively controlling the data output of the cameras in such a way that they do not form momentary large peaks in the total bandwidth which may cause data loss.
  • The video output (data output from the cameras) is encoded by a method which generates an image output (data output) comprising an I-frame (Intra frame) in a Group of Pictures (GOP) sequence.
  • The step of moving an I-frame may for example include shortening a single GOP sequence for a specific camera so that the future I-frames are being generated and transmitted at their designated positions within the global GOP sequence. The shortening can be done by e.g. requesting a new I-frame at a specific point in time within the global GOP sequence, which will consequently move the start of individual GOP sequence to that point in time. However, the step of moving an I-frame may alternatively be done by extending a single GOP sequence for a specific camera so that the future I-frames are being generated and transmitted at their designated positions within the global GOP sequence.
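  • the GOP-shortening described above — requesting a new I-frame at a specific point in time so that subsequent I-frames recur at the designated position — can be sketched as follows (Python; the function name and the phase arithmetic are an illustrative assumption):

```python
def next_iframe_request_time(last_iframe_time, target_offset, global_gop_len):
    """Earliest time at which to request a new I-frame from a camera so
    that its GOP sequence restarts there and future I-frames land at
    `target_offset` within the global GOP sequence."""
    # Current phase of the camera's I-frames within the global GOP.
    current_offset = last_iframe_time % global_gop_len
    # Forward shift (always non-negative) to reach the designated slot.
    shift = (target_offset - current_offset) % global_gop_len
    return last_iframe_time + shift
```

  • requesting the new I-frame at the returned time shortens the camera's current GOP; an equivalent move could instead extend the GOP, as noted above.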
  • Further, in reference to the encoding, the data output (i.e. the video output) is digitally encoded by a method that uses I-frames together with Predicted Frames (P-frames) and possibly Bi-directional Frames (B-frames). I-frames are larger than the other frames (P-frames/B-frames) and can be said to form peaks in the output bandwidth from the cameras. The I-frames can further be said to contain the full image and do therefore not require any additional information to reconstruct them. Typically, encoders use GOP structures that cause each I-frame to be a “clean random access point,” such that decoding can start cleanly on an I-frame.
  • In more detail, a GOP may include an I-frame, one or more P-frames, and one or more B-frames. The I-frame (intra coded picture) is a picture that is coded independently of all other pictures. Each GOP begins (in decoding order) with this type of picture. As mentioned, the I-frame is a complete image, like a jpeg image file. The P-frame (predictive coded picture) holds only the changes in the image from the previous frame. For example, in a scene where an object moves across a stationary background, only the object's relative movements are encoded. Accordingly, the encoder does not need to store the unchanging background pixels in the P-frame, thereby saving space. P-frames may also be known as delta-frames. The B-frame (bi-predictive coded picture) saves even more space by using differences between the current frame and both the preceding and following frames to specify its content.
  • In the present context, the terms “frame” and “picture” are used interchangeably, and even though the term “frame” is mainly used in the present disclosure, it is considered to encompass both a complete image and a partial image (i.e. a field). Moreover, the term remote tower should in the present context be interpreted broadly, and may for example include embodiments where the cameras are spatially distributed at an airport instead of gathered in a tower-like structure.
  • The present invention is at least partly based on the realization that if the data output from the cameras is left un-controlled there is a relatively high probability that I-frames will be clustered together in time and cause large peaks in the total bandwidth when summed together. In particular, the present inventor realized that the common transmission media is limited such that if the output bandwidth from a plurality of cameras peaks closely in time, the system will “drop” video data in such way that it will not be received at the receiving side. In other words, if the bandwidth limit is exceeded, data loss will occur, which is for obvious reasons unacceptable in Air Traffic Control applications.
  • Thus, by actively controlling the data output from each camera and moving specific individual GOP sequences in order to form a new “ordered” distribution where the I-frames are evenly spread in time, these adverse effects may be mitigated. In more detail, the “ordered” situation will reduce the risk of forming large data peaks in the total bandwidth and thereby increase the chances of sending the data over the common transmission media without data loss.
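  • the effect of spreading the I-frames can be illustrated with a small summation over time slots (Python; the frame sizes of 100 units per I-frame and 5 units per P-frame are purely hypothetical, chosen only to show the relative peak reduction):

```python
def peak_bandwidth(iframe_slots, n_slots, i_size=100, p_size=5):
    """Peak per-slot data for a set of cameras, one entry in
    `iframe_slots` per camera giving the slot holding its I-frame;
    every other slot of that camera carries a P-frame."""
    totals = [0] * n_slots
    for slot in iframe_slots:
        for t in range(n_slots):
            totals[t] += i_size if t == slot else p_size
    return max(totals)

# First (unordered) distribution: all four cameras emit I-frames in slot 0.
clustered = peak_bandwidth([0, 0, 0, 0], n_slots=4)  # -> 400
# Second (ordered) distribution: one I-frame per slot.
spread = peak_bandwidth([0, 1, 2, 3], n_slots=4)     # -> 115
```

  • with these assumed sizes, evenly spreading the I-frames lowers the momentary peak from 400 to 115 units while the total amount of data per global GOP is unchanged.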
  • Further, in accordance with an exemplary embodiment of the present invention, the method further comprises identifying, in the first distribution of I-frames, at least one time slot comprising at least two I-frames, the time slot being of a limited length shorter than the global GOP sequence;
  • wherein the step of forming the second distribution comprises:
      • moving at least one I-frame of the at least two I-frames from the at least one time slot to a different time slot within the global GOP sequence.
  • A time slot may in the present context be understood as a limited subsection or part of the “time-axis” of the global GOP sequence. Accordingly, the I-frames are moved from time slots (in the global GOP sequence) which are considered as running a relatively high risk of momentarily exceeding bandwidth limits, to a different time slot. Preferably, the I-frames are moved to arbitrary time slots which contain no I-frames.
  • Thus, according to another exemplary embodiment of the present invention, the step of determining a first distribution of I-frames within the global GOP sequence comprises:
  • dividing the global GOP sequence into a plurality of subsections;
  • identifying at least one high-density subsection, wherein the high-density subsection comprises a number of I-frames above a first predefined threshold;
  • identifying at least one low-density subsection, wherein the low-density subsection comprises a number of I-frames below a second predefined threshold;
  • wherein the step of forming the second distribution within the global GOP sequence comprises:
      • moving an I-frame from each identified high-density subsection to a corresponding low-density subsection. The first predefined threshold and the second predefined threshold may be 1.0, so that any subsection having two or more I-frames is classified as a high-density subsection and any subsection having zero I-frames is classified as a low-density subsection. The remaining subsections (the subsections containing only one I-frame) may be classified/identified as normal-density subsections. The subsections are preferably of equal length. However, in some exemplary embodiments the first predefined threshold may be 2.0, 3.0 or 4.0, and the second predefined threshold may be 2.0, 3.0 or 4.0. In particular, the first predefined threshold may be 3.0 and the second predefined threshold may be 2.0.
  • Stated differently, the global GOP sequence is arranged in a plurality of defined subsections or parts. This may for example be done by forming a timeline and recording every time an I-frame occurs in the global GOP sequence. This timeline is then divided into a plurality of segments forming the subsections. The subsections are then classified based on how many “recordings” they contain. Subsequently, the I-frames are moved from the subsections that contain a relatively high number of I-frames to subsections that contain a relatively low number (e.g. zero) of I-frames in order to even out the spread of I-frames within the global GOP sequence, thereby reducing the risk of momentary peaks in bandwidth in the data output.
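  • the timeline division and classification just described can be sketched as follows (Python; the thresholds follow the example above — two or more I-frames for high-density, zero for low-density — and the function name is an illustrative assumption):

```python
def classify_subsections(timestamps, global_gop_len, n_subsections):
    """Divide the global GOP timeline into equal subsections, count the
    I-frame 'recordings' in each, and classify every subsection as
    'high', 'low' or 'normal' density."""
    sub_len = global_gop_len / n_subsections
    counts = [0] * n_subsections
    for t in timestamps:
        idx = min(int(t // sub_len), n_subsections - 1)
        counts[idx] += 1
    return ['high' if c >= 2 else 'low' if c == 0 else 'normal'
            for c in counts]
```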
  • Further, in accordance with another exemplary embodiment of the present invention, the global GOP sequence comprises N I-frames, wherein the step of dividing the global GOP sequence into a plurality of subsections comprises:
  • dividing the global GOP sequence into N subsections; and
  • wherein the step of forming the second distribution within the global GOP sequence comprises:
      • moving an I-frame from each identified high-density subsection to a corresponding low-density subsection such that each subsection out of the N subsections comprises one I-frame. In other words, the second distribution is formed such that it contains N subsections and each subsection contains one I-frame.
  • Still further, in accordance with yet another exemplary embodiment of the present invention, the step of dividing the global GOP sequence into N subsections comprises:
  • forming a group of subsections comprising a series of the N subsections;
  • aligning the group of subsections with said global GOP sequence based on a predefined metric. One example of such a predefined metric is to minimize the sum of absolute differences between the time stamp of an I-frame and the closest central point of a section. Another example of such a predefined metric is to minimize the sum of the squared differences between the time stamp of an I-frame and the closest central point of a section. Moreover, the step of forming the second distribution of I-frames within the global GOP sequence may further comprise changing a status of each low-density subsection to a normal-density subsection when the I-frame is moved thereto.
  • Further, in accordance with another aspect of the present invention, there is provided a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a remote tower system, the one or more programs comprising instructions for performing the method according to any one of the above discussed embodiments in reference to the first aspect of the present invention. Thus, with this aspect of the invention, similar embodiments and advantages are present as in the previously discussed first aspect of the invention.
  • Yet further, in accordance with another aspect of the present invention, there is provided a remote tower system for air traffic control comprising:
  • a plurality of cameras arranged to transmit data output via a common transmission media to a central entity, each camera being configured to periodically output an I-frame in a Group of Pictures, GOP, sequence;
  • a controller arranged to monitor and control the data output of each camera onto the common transmission media, the controller being configured to:
  • determine a global GOP sequence for the plurality of cameras, the global GOP sequence comprising the I-frame of each camera of the plurality of cameras;
  • determine a first distribution of I-frames within the global GOP sequence; form a second distribution of I-frames within the global GOP sequence by moving the I-frame of at least one of the plurality of cameras within the global GOP sequence so that the second distribution is different from the first distribution. With this aspect of the invention, similar embodiments and advantages are present as in the previously discussed first aspect of the invention, and vice versa.
  • The controller (may also be referred to as a control unit) can be provided by means of appropriate software, hardware or a combination thereof. Moreover, the controller may be referred to as a “supervisor unit” which is configured to monitor the data output on the common transmission media directly or indirectly by monitoring the output from each individual camera.
  • In accordance with an exemplary embodiment of the present invention, the controller is further configured to identify, in the first distribution of I-frames, at least one time slot comprising at least two I-frames, the time slot being of a limited length shorter than the global GOP sequence; and form the second distribution of I-frames within the global GOP sequence by moving at least one I-frame of the at least two I-frames from the at least one time slot to a different time slot within the global GOP sequence. As mentioned, a time slot may in the present context be understood as a defined subsection of the time-axis of the global GOP sequence.
  • Further, according to yet another exemplary embodiment of the present invention, the controller is further configured to arrange the global GOP sequence into N subsections by forming a group of subsections comprising the N subsections, and aligning the group of subsections with said global GOP sequence based on a predefined metric. The group of subsections may be construed as a series of subsections, each subsection having a defined length in time.
  • Moreover, in accordance with another exemplary embodiment of the present invention, the controller is further configured to change a status of each low-density subsection to a normal-density subsection when the I-frame is moved thereto. Thus, once all of the subsections are labelled or marked as normal-density subsections the rearranging of the I-frames may be considered to be completed.
  • These and other features and advantages of the present invention will in the following be further clarified with reference to the embodiments described hereinafter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For exemplifying purposes, the invention will be described in closer detail in the following with reference to embodiments thereof illustrated in the attached drawings, wherein:
  • FIG. 1 is a schematic perspective view illustration of a remote tower system for air traffic control in accordance with an embodiment of the present invention;
  • FIG. 2a is a schematic plot illustrating the data output from one camera in a remote tower system in accordance with an embodiment of the present invention;
  • FIG. 2b is a schematic plot illustrating a first distribution of the data output from a plurality of cameras in a remote tower system in accordance with an embodiment of the present invention;
  • FIG. 2c is a schematic plot illustrating a second distribution of the data output from a plurality of cameras in a remote tower system in accordance with an embodiment of the present invention;
  • FIG. 3 is a flow chart representation of a method for managing data output from multiple cameras of a remote tower in accordance with an embodiment of the present invention;
  • FIG. 4 is a block chart representation of a remote tower system in accordance with an embodiment of the present invention.
  • DETAILED DESCRIPTION
  • In the following detailed description, some embodiments of the present invention will be described. However, it is to be understood that features of the different embodiments are exchangeable between the embodiments and may be combined in different ways, unless anything else is specifically indicated. Even though in the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well known constructions or functions are not described in detail, so as not to obscure the present invention.
  • FIG. 1 illustrates a schematic overview of a remote tower system 1 in an Air Traffic Control (ATC) application, according to an exemplary embodiment of the present invention. In more detail, FIG. 1 shows a remote tower 4 arranged at an airport 20. The remote tower 4 may also be referred to as a remote and digital tower and can generally be explained as a solution where the Air Traffic Service (ATS) is performed somewhere else 3 than in a local control tower. In more detail, a general remote tower system has a remote ATC control room 3 with a video-sensor 9 based surveillance instead of the conventional “out-of-the-window” view from a real tower. The optical sensors 9 (e.g. cameras) supply the Air Traffic Control Officer(s) 7 at the remote tower centre 3 with a high-quality real-time image feed of the runway, the movement area, and the nearby airspace. The real-time images are displayed at large monitors 6 providing up to a 360-degree view of the area around the remote tower 4.
  • The remote tower 4 may for example comprise more than ten cameras (such as e.g. twelve or fourteen cameras) arranged in a camera house. The camera house is arranged to protect the cameras from weather (rain, snow, hail, moisture, etc.), high and low temperatures, sunlight, insects, birds, etc. The cameras 9 may be any type of suitable optical sensors such as e.g. high definition video cameras, infrared cameras, night vision cameras, etc. Moreover, the remote tower may also be realized as a plurality of cameras or groups of cameras spatially distributed at different locations (not shown) overlooking the airport, instead of being gathered in a single tower-like structure as illustrated in the drawing.
  • As the skilled artisan realizes, the remote tower 4 may further be provided with radar systems, microphones, and any other suitable arrangements used in general ATC applications. These aspects are however considered obvious for the skilled reader and will, for the sake of brevity and conciseness, not be further discussed in any detail.
  • Moving on, the plurality of cameras 9 are arranged to transmit a data output (i.e. an image feed or video feed) via a common transmission media 2 (e.g. a wide area network) to a central entity 3 (e.g. a control centre). The term common with respect to the common transmission media is to be understood as "shared by" and not as "ordinary" or "conventional". Moreover, the common transmission media may be realised by a wired connection, a wireless connection, or a combination thereof. The data output, i.e. the video output, is digitally encoded by a method that uses I-frames together with Predicted frames (P-frames) and possibly Bi-directional frames (B-frames). Accordingly, each camera 9 is configured to generate an image output (video output) comprising an I-frame (Intra frame) in a Group of Pictures (GOP) sequence.
  • I-frames are larger than the other frames (P-frames/B-frames) and can be said to form peaks in the output bandwidth from the cameras 9. The I-frames can be said to contain the full image and therefore do not require any additional information to be reconstructed. Typically, encoders use GOP structures that cause each I-frame to be a "clean random access point," such that decoding can start cleanly on an I-frame.
  • In these compression standards (video coding designs), the full frame (the I-frame) is stored only at specific intervals, e.g. once per second, and the rest of the frames (P-frames and B-frames) encode only the differences caused by motion in the video. The time interval between I-frames varies (e.g. multiple times per second, once every second, once every 10 seconds, etc.). The more I-frames that are to be transmitted, the larger the video stream will be, which requires more bandwidth capacity. However, the more I-frames the video stream has, the more editable it is.
  • An example of data output from a camera 9 can be seen in FIG. 2a, where the large peaks are I-frames and the P-frames are barely visible between the I-frames. The GOP length is 60 frames in the example illustrated in FIG. 2a. The plots in FIGS. 2a-2c show the amount of data, e.g. in bytes, (y-axis) over the output frame index (x-axis).
  • In more detail, a GOP may include the following picture types:
      • I-frame (intra coded picture). An I-frame is a picture that is coded independently of all other pictures. Each GOP begins (in decoding order) with this type of picture.
      • P-frame (predictive coded picture). The P-frame contains motion-compensated difference information relative to previously decoded pictures.
      • B-frame (bi-predictive coded picture). A B-frame also contains motion-compensated difference information, but relative to two (or more) previously decoded reference pictures, typically one preceding and one following it in display order.
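  • As a purely illustrative sketch (not part of the disclosure), the GOP structure above can be modelled in a few lines of Python; the frame sizes below are invented placeholders chosen only to reflect that an I-frame is far larger than the P-frames that follow it:

```python
# Minimal illustrative model of a GOP. Sizes are made-up placeholders;
# the point is only that the single I-frame dominates the total output.
ILLUSTRATIVE_SIZES = {"I": 40_000, "P": 2_000, "B": 1_200}

def make_gop(length=60):
    """One GOP: a leading I-frame followed by P-frames (no B-frames here)."""
    return ["I"] + ["P"] * (length - 1)

def gop_bytes(gop):
    """Total illustrative size of the GOP in bytes."""
    return sum(ILLUSTRATIVE_SIZES[t] for t in gop)

gop = make_gop(60)
assert gop[0] == "I"      # each GOP begins (in decoding order) with an I-frame
print(gop_bytes(gop))     # -> 158000, dominated by the 40 000-byte I-frame
```

With these placeholder numbers, one 60-frame GOP carries 158 000 bytes, of which roughly a quarter sits in the single I-frame peak.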
  • Further, the remote tower system 1 has a controller 10 (may also be referred to as a control unit), here as a part of a remote server/controller, such as e.g. a cloud-based system 5, arranged to monitor and control the data output of each camera 9 onto the common transmission media 2. More specifically, the controller 10 is arranged to monitor the data output from each camera in real-time, and is capable of moving the GOP sequence for each individual camera by e.g. an external command.
  • The controller 10 is configured to determine a global GOP sequence for the plurality of cameras 9, where the global GOP sequence comprises the I-frame of each camera of the plurality of cameras 9. Thereafter, the controller 10 analyses the global GOP sequence and determines or observes a first distribution of I-frames within the global GOP sequence. An example of this can be seen in FIG. 2b, showing data output from ten individual cameras 9, where the peaks represent I-frames from each of the cameras. The plot shows four full global GOP sequences. It should be noted that this is merely an example of a first (un-ordered) distribution chosen for illustrative purposes. As the skilled reader realizes, the data peaks may be partly or completely overlapping in some areas, which would further elucidate the problems in terms of exceeding bandwidth limits, particularly when considering the aggregated data output over time (not shown).
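  • The effect of such an un-ordered first distribution can be illustrated numerically with a small sketch (all frame sizes, the GOP length, and the camera phases below are invented for illustration): when the I-frame phases of several cameras coincide, the aggregate output per frame slot spikes far above the P-frame-only floor.

```python
# Illustrative only: aggregate per-slot output of ten cameras whose GOP
# phases happen to cluster (an un-ordered "first distribution").
I_SIZE, P_SIZE, GOP_LEN = 40_000, 2_000, 60   # invented placeholder sizes

def camera_output(phase, n_frames):
    """Bytes emitted per frame slot by a camera whose I-frame lands at `phase`."""
    return [I_SIZE if (i - phase) % GOP_LEN == 0 else P_SIZE
            for i in range(n_frames)]

phases = [0, 0, 1, 1, 2, 30, 30, 31, 45, 59]  # two pairs of coinciding I-frames
aggregate = [sum(col) for col in zip(*(camera_output(p, 240) for p in phases))]
print(max(aggregate), min(aggregate))  # burst where I-frames coincide vs P-only floor
```

With these placeholder numbers, slots where two I-frames coincide carry 96 000 bytes while P-frame-only slots carry 20 000 bytes, i.e. it is the bursts, not the average rate, that dictate the required bandwidth headroom on the common transmission media.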
  • Further, the controller 10 is configured to form a second distribution of I-frames within the global GOP sequence by moving the I-frame of at least one of the plurality of cameras 9 within the global GOP sequence so that the second distribution is different from the first distribution. For example, the controller 10 may be configured to form a second distribution of I-frames within the global GOP sequence by moving the I-frame of at least one of the plurality of cameras 9 within the global GOP sequence so that the I-frames are more evenly distributed within the global GOP sequence as compared to the first distribution.
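  • One straightforward way to realise a "more evenly distributed" second distribution (a sketch under my own assumptions, not necessarily the claimed algorithm) is to designate, for camera i out of N, the I-frame offset i·L/N within a global GOP sequence of length L frame slots:

```python
# Hypothetical helper: evenly spaced designated I-frame positions for
# n_cameras within a global GOP of gop_len frame slots.
def even_offsets(n_cameras, gop_len):
    return [round(i * gop_len / n_cameras) % gop_len for i in range(n_cameras)]

print(even_offsets(10, 60))  # -> [0, 6, 12, 18, 24, 30, 36, 42, 48, 54]
```

The controller would then only need to move the I-frame of each camera whose current offset deviates from its designated one.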
  • The step of moving an I-frame may for example include shortening a single GOP sequence for a specific camera so that future I-frames are generated and transmitted at their designated positions within the global GOP sequence. The shortening can be done by e.g. requesting a new I-frame at a specific point in time within the global GOP sequence, which will consequently move the start of that camera's individual GOP sequence to that point in time. Alternatively, the step of moving an I-frame may be done by extending a single GOP sequence for a specific camera so that future I-frames are generated and transmitted at their designated positions within the global GOP sequence.
  • An example of a resulting second distribution is illustrated in FIG. 2c , showing the sequence from FIG. 2b but where the I-frames are re-ordered in time (x-axis). As can be understood from the two plots in FIGS. 2b and 2c , the risk of exceeding a bandwidth limit of a transmission media is relatively high in the high-density sections around frame index 60, 120, 180, 240 in the first distribution illustrated in FIG. 2b . However, in the second distribution, the peaks defined by the I-frames are more evenly distributed in time, which reduces the risk of temporary aggregated transmission peaks.
  • FIG. 3 is a schematic flow chart representation of a method 100 for managing data output from multiple cameras of a remote tower via a common transmission media to a central entity. Each camera is here configured to periodically output an I-frame 50a in a GOP sequence. The method 100 comprises a step of determining 101 a global GOP sequence 51 for the plurality of cameras. The global GOP sequence 51 includes the I-frame 50a of each camera of the multiple cameras.
  • Further, a first distribution of I-frames 50a within the global GOP sequence 51 is determined 102. This may for example be construed as forming a timeline 52 and recording, on the timeline, when each camera emits an I-frame 50a. Each I-frame 50a is accordingly marked with a representation 50b on the timeline 52. The start 53 of the global GOP sequence 51 and the end 54 of the global GOP sequence accordingly serve as the start and stop of the "recording". Next, the global GOP sequence 51 is divided 104 into a plurality of subsections 55. Preferably, the global GOP sequence 51 is divided into as many subsections 55 as there are I-frames 50a within the global GOP sequence 51. In the illustrated example, the global GOP sequence 51 comprises the data output from four cameras, and accordingly includes four I-frames 50a, resulting in four subsections 55. Alternatively, the method 100 may comprise a step of identifying subsections 55 in the formed global GOP sequence timeline 52.
  • The subsections 55 are then arranged in a group of subsections 55. The group includes the individual subsections arranged in a sequential manner. The total length (in time) of the group of subsections 55 is equal to the length (in time) of the global GOP sequence 51.
  • Moving on, high-density subsections 55a and low-density subsections 55b are identified 106. A high-density subsection 55a can be construed as a subsection having a relatively high density of I-frames, e.g. two or more I-frames 50a. In more detail, a high-density subsection 55a can be understood as a subsection of the global GOP sequence that is more likely to exceed the available bandwidth on the common transmission media than other subsections (low- and normal-density subsections). A low-density subsection 55b can be construed as a subsection having a relatively low density of I-frames 50a, e.g. zero I-frames. In more detail, a low-density subsection 55b can be understood as a subsection of the global GOP sequence into which one can insert additional I-frames without exceeding the bandwidth of the common transmission media. The amount of data that has to be transmitted per second from the plurality of cameras is referred to as bandwidth. It is generally measured in bit/s, but may also be measured in byte/s, which equals one eighth of the corresponding value in bit/s.
  • Moreover, the method 100 may further comprise identifying or classifying the remaining subsections as normal subsections 55c. Accordingly, the step of classifying 106 subsections 55 in the group of subsections 55 may for example be such that:
      • If the subsection contains zero I-frames, that section is classified/identified as a low-density subsection.
      • If the subsection contains two or more I-frames, that section is classified/identified as a high-density subsection.
      • If the subsection contains one and only one I-frame, that section is classified/identified as a normal-density subsection.
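  • The three classification rules above translate directly into code; representing each subsection simply by its I-frame count is an assumption made for illustration:

```python
# Classify a subsection by its number of I-frames, per the rules above.
def classify(i_frame_count):
    if i_frame_count == 0:
        return "low"          # room for an additional I-frame
    if i_frame_count >= 2:
        return "high"         # likely to exceed available bandwidth
    return "normal"           # exactly one I-frame

counts = [0, 1, 3, 1]         # e.g. four subsections of one global GOP sequence
print([classify(c) for c in counts])  # -> ['low', 'normal', 'high', 'normal']
```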
  • As an option, the method may include a step of aligning 105 the group of subsections 55 by some predefined metric before identifying/classifying the subsections as high-, low- and normal-density. More specifically, the alignment 105 or adjustment may be understood as optimizing the placement of the group of subsections 55 on the formed timeline 52 representation of the global GOP sequence 51. The predefined metric may for example be to minimize the sum of absolute differences between the time stamp of an I-frame 50a and the closest central point of a subsection 55. In other words, the step of aligning 105 the group of subsections may comprise adjusting the position in time of the group of subsections 55 such that a sum of absolute differences between a time stamp of an I-frame 50a and a closest central point of a subsection is below a first predefined threshold.
  • Another example of a predefined metric is to minimize the sum of the squared differences between the time stamp of an I-frame 50a and the closest central point of a subsection 55. In other words, the step of aligning 105 the group of subsections may comprise adjusting the position in time of the group of subsections 55 such that a sum of squared differences between a time stamp of an I-frame 50a and a closest central point of a subsection 55 is below a second predefined threshold.
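  • For a discrete timeline, the optional alignment step can be sketched as a brute-force search over integer shifts (the function name, the discretisation, and the single-period search range are my own assumptions):

```python
# Slide the group of equal-length subsections along the global GOP and keep
# the shift minimising the sum of absolute differences between each I-frame
# time stamp and the centre of its nearest subsection.
def best_alignment(i_frame_times, gop_len, n_subsections):
    sub_len = gop_len / n_subsections
    def cost(shift):
        centres = [(shift + (k + 0.5) * sub_len) % gop_len
                   for k in range(n_subsections)]
        return sum(min(abs(t - c) for c in centres) for t in i_frame_times)
    # the pattern of centres repeats every sub_len slots, so one period suffices
    return min(range(int(sub_len)), key=cost)

print(best_alignment([5, 20, 35, 50], gop_len=60, n_subsections=4))  # -> 12
```

Using the squared-difference metric instead only requires replacing `abs(t - c)` with `(t - c) ** 2` in the cost function.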
  • Further, the method 100 includes a step of forming 103 a second distribution of I-frames 50a within the global GOP sequence 51 by moving 107 the I-frame 50a of one or more cameras within the global GOP sequence 51 such that the second distribution is different from the first distribution. In more detail, one of the I-frames 50a in the high-density subsection 55a is moved to the low-density subsection 55b. More specifically, the I-frame 50a is moved to a different point in time within the interval defined by the low-density subsection 55b. Preferably, the I-frame 50a is moved to a center point of the interval defined by the low-density subsection 55b. Once all of the relevant I-frames 50a have been moved, the second distribution is formed, as indicated by the timeline 56 showing representations (marked with an x) of the position of each I-frame 50a in the second distribution of I-frames 50a within the global GOP sequence 51. The method may further comprise a step of changing 108 the status of the subsections from high/low-density subsections to normal subsections 55c.
  • Thus, on a general level, the forming 103 of the second distribution may include populating a list {A} with I-frames 50a that are out of order by taking I-frames 50a from high-density subsections 55a until there are no more high-density subsections left. For each selected I-frame 50a, the method 100 may also include making a note of where it shall be moved within the global GOP sequence (i.e. to a free low-density subsection 55b).
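  • A minimal sketch of that bookkeeping (the counts-based subsection representation and the first-free assignment policy are illustrative assumptions, not the claimed procedure):

```python
# Drain surplus I-frames from high-density subsections into a list of moves,
# assigning each to the first free low-density subsection.
def plan_moves(counts):
    """counts[k] = number of I-frames in subsection k; returns (src, dst) moves."""
    high = [k for k, c in enumerate(counts) if c >= 2]
    low = [k for k, c in enumerate(counts) if c == 0]
    moves = []
    for src in high:
        while counts[src] > 1 and low:
            dst = low.pop(0)          # a free low-density subsection
            moves.append((src, dst))
            counts[src] -= 1
            counts[dst] += 1          # dst now holds one I-frame (normal-density)
    return moves

counts = [3, 0, 1, 0]
print(plan_moves(counts), counts)     # -> [(0, 1), (0, 3)] [1, 1, 1, 1]
```

After the pass, every subsection holds exactly one I-frame, i.e. all subsections can be re-classified as normal-density.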
  • FIG. 4 is a schematic block diagram illustrating a remote tower system 1 according to an exemplary embodiment of the present invention. The remote tower system 1 has a plurality of cameras 9 arranged to transmit data output via a common transmission media 2 to a central entity 3. The data output is generally in the form of images or parts of images of the surrounding environment of the cameras 9. Each camera 9 is configured to periodically output an Intra Frame in a GOP sequence.
  • The system 1 further has a controller 10 connected to each camera 9. The controller 10 is arranged to monitor 8 and control the data output (in real-time) of each camera 9 onto the common transmission media 2. Even though only three cameras 9 are illustrated in FIG. 4, the skilled reader readily realizes that the system 1 may comprise any suitable number of cameras to provide an adequate overview of the surrounding area around the vantage point formed by the remote tower at the airport.
  • It should be noted that the controller 10 may for example be manifested as a general-purpose processor, an application specific processor, a circuit containing processing components, a group of distributed processing components, a group of distributed computers configured for processing, a field programmable gate array (FPGA), etc. The controller 10 may for example be in the form of a circuit having a processor 11 such as e.g. a microprocessor, microcontroller, programmable digital signal processor or another programmable device. The controller 10 may also, or instead, include an application-specific integrated circuit (ASIC), a programmable gate array or programmable array logic, a programmable logic device, or a digital signal processor. Where the controller 10 includes a programmable device such as the microprocessor, microcontroller or programmable digital signal processor mentioned above, the processor 11 or an associated memory 12 may further include computer executable code that controls operation of the programmable device.
  • It should be understood that the controller 10 may comprise a digital signal processor arranged and configured for digital communication with an off-site server or cloud-based server. Analogously, if the controller 10 is part of a cloud-based system (as depicted in FIG. 1), the digital signal processor may be configured for digital communication with one or more local control units associated with the cameras 9. Thus, data may be sent to and from the controller 10, as readily understood by the skilled reader.
  • Further, it should be understood that parts of the described solution may be implemented either in the controller 10, in a system located external to the controller 10, or in a combination internal and external to the controller 10; for instance in a server in communication with the controller 10, a so-called cloud solution. For instance, a communication signal may be sent to an external system, and that external system performs the steps to determine how to move the I-frames so as to form the second distribution, and sends back information indicating the moving process and other relevant parameters used in forming the second distribution.
  • The processor 11 (of the controller 10) may be or include any number of hardware components for conducting data or signal processing or for executing computer code stored in memory 12. Accordingly, controller 10 may have an associated memory 12, and the memory 12 may be one or more devices for storing data and/or computer code for completing or facilitating the various methods described in the present description. The memory 12 may include volatile memory or non-volatile memory. The memory 12 may include database components, object code components, script components, or any other type of information structure for supporting the various activities of the present description. According to an exemplary embodiment, any distributed or local memory device may be utilized with the systems and methods of this description. According to an exemplary embodiment the memory 12 is communicably connected to the processor 11 (e.g., via a circuit or any other wired, wireless, or network connection) and includes computer code for executing one or more processes described herein.
  • Moreover, depending on the functionality provided in the controller 10, one or more communication interfaces 13, 14 and/or one or more antenna interfaces (not shown) may be provided; furthermore, one or more sensor interfaces (not shown) may also be provided for acquiring data from sensors associated with the system.
  • Moving on, the controller 10 is configured to determine a global GOP sequence for the plurality of cameras 9. The global GOP sequence comprises one I-frame of each camera in the plurality of cameras. The global GOP sequence is a continuous or discrete time series, and can thus be construed as the smallest interval in time of the data output on the common transmission media 2 into which one can fit the I-frame from each camera 9.
  • Further, the controller is configured to determine a first distribution of I-frames in the global GOP sequence. This step may also be construed as detecting or observing a first distribution (i.e. an un-ordered distribution) of I-frames within the determined global GOP sequence. Next, the control unit 10 is configured to form a second distribution (i.e. an ordered distribution) of I-frames within the global GOP sequence by moving the I-frame of at least one camera 9 within the global GOP sequence so that the second distribution is different from the first distribution.
  • The controller 10 may be configured to perform any one of the method steps discussed in the foregoing in reference to FIG. 3. For example, the controller 10 may be configured to arrange the global GOP sequence into a plurality of sequential subsections together forming a group of subsections. Further, the controller 10 is configured to identify at least one high-density subsection and at least one low-density subsection in the group of subsections. In more detail, the controller 10 may for example review the number of I-frames within each subsection in the group of subsections and classify the subsections based on the number of identified I-frames within each of them. For example, each subsection having two or more I-frames can be classified as a high-density subsection, each subsection having zero I-frames can be classified as a low-density subsection, and the remaining subsections can be classified as normal-density subsections. Based on this classification, the controller 10 can be configured to control the output of each camera 9 so that one or more I-frames from the high-density subsections are moved to one or more low-density subsections, so that all subsections of the group of subsections can be re-classified as normal-density subsections. Thereby, the second distribution is formed, the second distribution having a more evenly spread bandwidth requirement within the global GOP sequence, as compared to the first distribution.
  • Preferably, the controller 10 is configured to ensure that no high-density subsections are formed in the second distribution.
  • In summary, the inventive method may be construed as, on a periodic basis, analysing a current distribution of I-frames within the global GOP sequence, identifying a list {A} containing I-frames that are out of order, designating a new position for each I-frame in the list {A} within the global GOP sequence, and moving one or more I-frames from the list {A} to their new designated positions within the global GOP sequence. The event that invokes the periodic reordering can be based on a periodic timer event, on the global GOP sequence, or on an external command (e.g. the system controller noticing an increasing risk of congestion in the data traffic).
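  • Tying the summary together, one periodic reordering pass could look as follows (a compact hypothetical sketch under my own assumptions: equal-length subsections, center-point target positions, and first-free assignment):

```python
# One hypothetical reordering pass over a global GOP: bucket the current
# I-frame time stamps into N equal subsections, collect the out-of-order
# frames (list {A}), and designate each the centre of a free subsection.
def reorder(i_frame_times, gop_len):
    n = len(i_frame_times)
    sub = gop_len / n
    buckets = [[] for _ in range(n)]
    for t in sorted(i_frame_times):
        buckets[min(int(t // sub), n - 1)].append(t)
    surplus = [t for b in buckets for t in b[1:]]          # list {A}
    free = [k for k, b in enumerate(buckets) if not b]     # low-density slots
    return {t: (k + 0.5) * sub for t, k in zip(surplus, free)}

print(reorder([2, 3, 4, 50], gop_len=60))  # -> {3: 22.5, 4: 37.5}
```

An empty result means every subsection already holds exactly one I-frame, so no camera needs to be commanded to move its I-frame.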
  • The present disclosure contemplates methods, devices and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor.
  • By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a machine, the machine properly views the connection as a machine-readable medium. Thus, any such connection is properly termed a machine-readable medium. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data that cause a general-purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions. As already exemplified, some parts or all of the functions may be realized as a “cloud-based” solution.
  • Although the figures may show a specific order of method steps, the order of the steps may differ from what is depicted. In addition, two or more steps may be performed concurrently or with partial concurrence. Such variation will depend on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations could be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps and decision steps. Additionally, even though the disclosure has been described with reference to specific exemplifying embodiments thereof, many different alterations, modifications and the like will become apparent for those skilled in the art.
  • It should be noted that the word “comprising” does not exclude the presence of other elements or steps than those listed and the words “a” or “an” preceding an element do not exclude the presence of a plurality of such elements. It should further be noted that any reference signs do not limit the scope of the claims, that the invention may be at least in part implemented by means of both hardware and software, and that several “means” or “units” may be represented by the same item of hardware.
  • The above-mentioned and described embodiments are only given as examples and should not be limiting to the present invention. Other solutions, uses, objectives, and functions within the scope of the invention as claimed should be apparent to the person skilled in the art.

Claims (16)

1-15. (canceled)
16. A method for managing data output from multiple cameras of a remote tower via a common transmission media to a central entity, wherein each camera is configured to periodically output an Intra frame, in a Group of Pictures, GOP, sequence, the method comprising:
determining a global GOP sequence for the multiple cameras, said global GOP sequence comprising the Intra frame of each camera of the multiple cameras;
determining a first distribution of Intra frames within the global GOP sequence; and
forming a second distribution of Intra frames within the global GOP sequence by moving the Intra frame of at least one of said multiple cameras within the global GOP sequence so that the second distribution is different from the first distribution.
17. The method according to claim 16, wherein:
the method further comprises identifying, in the first distribution of Intra frames, at least one time slot comprising at least two Intra frames, said time slot being of a limited length shorter than the global GOP sequence; and
the step of forming the second distribution comprises moving at least one Intra frame of the at least two Intra frames from said at least one time slot to a different time slot within the global GOP sequence.
18. The method according to claim 16, wherein:
the step of determining a first distribution of Intra frames within the global GOP sequence comprises:
dividing the global GOP sequence into a plurality of subsections;
identifying at least one high-density subsection, wherein the high-density subsection comprises a number of Intra frames above a first predefined threshold; and
identifying at least one low-density subsection, wherein the low-density subsection comprises a number of Intra frames below a second predefined threshold; and
the step of forming the second distribution within the global GOP sequence comprises moving an Intra frame from each identified high-density subsection to a corresponding low-density subsection.
19. The method according to claim 18, wherein the first predefined threshold and the second predefined threshold are equal to one.
20. The method according to claim 18, wherein:
the global GOP sequence comprises N Intra frames, wherein the step of dividing the global GOP sequence into a plurality of subsections comprises dividing the global GOP sequence into N subsections; and
the step of forming the second distribution within the global GOP sequence comprises moving an Intra frame from each identified high-density subsection to a corresponding low-density subsection such that each subsection out of the N subsections comprises one Intra frame.
21. The method according to claim 20, wherein the step of dividing the global GOP sequence into N subsections comprises:
forming a group of subsections comprising a series of the N subsections; and
aligning the group of subsections with said global GOP sequence based on a predefined metric.
22. The method according to claim 18, wherein the step of forming the second distribution within the global GOP sequence further comprises changing a status of each low-density subsection to a normal-density subsection when the Intra frame is moved thereto.
23. The method according to claim 18, wherein the subsections are of equal length.
24. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a remote tower system, the one or more programs comprising instructions for performing the method according to claim 16.
25. A remote tower system for air traffic control comprising:
a plurality of cameras arranged to transmit data output via a common transmission media to a central entity, each camera being configured to periodically output an Intra frame in a Group of Pictures, GOP, sequence; and
a controller arranged to monitor and control the data output of each camera onto the common transmission media,
wherein the controller is configured to:
determine a global GOP sequence for the plurality of cameras, the global GOP sequence comprising the Intra frame of each camera of the plurality of cameras;
determine a first distribution of Intra frames within the global GOP sequence; and
form a second distribution of Intra frames within the global GOP sequence by moving the Intra frame of at least one of the plurality of cameras within the global GOP sequence so that the second distribution is different from the first distribution.
26. The remote tower system according to claim 25, wherein the controller is further configured to:
identify, in the first distribution of Intra frames, at least one time slot comprising at least two Intra frames, said time slot being of a limited length shorter than the global GOP sequence; and
form the second distribution of Intra frames within the global GOP sequence by moving at least one Intra frame of the at least two Intra frames from said at least one time slot to a different time slot within the global GOP sequence.
27. The remote tower system according to claim 25, wherein the controller is further configured to:
arrange the global GOP sequence into a plurality of subsections;
identify at least one high-density subsection, wherein the high-density subsection comprises a number of Intra frames above a first predefined threshold;
identify at least one low-density subsection, wherein the low-density subsection comprises a number of Intra frames below a second predefined threshold; and
form the second distribution of Intra frames within the global GOP sequence by moving an Intra frame from each identified high-density subsection to a corresponding low-density subsection.
28. The remote tower system according to claim 27, wherein the global GOP sequence comprises N Intra frames, and wherein the controller is configured to:
arrange the global GOP sequence into N subsections; and
form the second distribution of Intra frames within the global GOP sequence by moving an Intra frame from each identified high-density subsection to a corresponding low-density subsection such that each subsection out of the N subsections comprises one Intra frame.
29. The remote tower system according to claim 28, wherein the controller is configured to arrange the global GOP sequence into N subsections by forming a group of subsections comprising the N subsections, and aligning the group of subsections with said global GOP sequence based on a predefined metric.
30. The remote tower system according to claim 27, wherein the controller is further configured to change a status of each low-density subsection to a normal-density subsection when the Intra frame is moved thereto.
US17/291,511 2018-11-14 2018-11-14 Video data burst control for remote towers Abandoned US20220030225A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SE2018/051170 WO2020101547A1 (en) 2018-11-14 2018-11-14 Video data burst control for remote towers

Publications (1)

Publication Number Publication Date
US20220030225A1 true US20220030225A1 (en) 2022-01-27

Family

ID=70731886

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/291,511 Abandoned US20220030225A1 (en) 2018-11-14 2018-11-14 Video data burst control for remote towers

Country Status (3)

Country Link
US (1) US20220030225A1 (en)
EP (1) EP3881306A4 (en)
WO (1) WO2020101547A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4205386A4 (en) * 2020-10-16 2024-01-24 Zhejiang Dahua Technology Co Systems and methods for data transmission

Citations (1)

Publication number Priority date Publication date Assignee Title
US20150085132A1 (en) * 2013-09-24 2015-03-26 Motorola Solutions, Inc Apparatus for and method of identifying video streams transmitted over a shared network link, and for identifying and time-offsetting intra-frames generated substantially simultaneously in such streams

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
JP2009504036A (en) * 2005-07-28 2009-01-29 トムソン ライセンシング Method and apparatus for transmitting multiple video streams over a video channel
US8296813B2 (en) * 2006-06-22 2012-10-23 Sony Computer Entertainment Inc. Predictive frame dropping to enhance quality of service in streaming data
JP5351040B2 (en) * 2006-12-12 2013-11-27 ヴァントリックス コーポレーション Improved video rate control for video coding standards
JP4934524B2 (en) * 2007-06-25 2012-05-16 パナソニック株式会社 Data communication apparatus and data communication method
WO2010069427A1 (en) * 2008-12-19 2010-06-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method and encoder for providing a tune- in stream for an encoded video stream and method and decoder for tuning into an encoded video stream
US20150312651A1 (en) * 2014-04-28 2015-10-29 Honeywell International Inc. System and method of optimized network traffic in video surveillance system
EP3021579B1 (en) * 2014-11-14 2016-10-12 Axis AB Method and encoder system for encoding video
EP3070695B1 (en) * 2015-03-16 2017-06-14 Axis AB Method and system for generating an event video sequence, and camera comprising such system
EP3376766B1 (en) * 2017-03-14 2019-01-30 Axis AB Method and encoder system for determining gop length for encoding video

Also Published As

Publication number Publication date
EP3881306A1 (en) 2021-09-22
WO2020101547A1 (en) 2020-05-22
EP3881306A4 (en) 2022-08-03

Similar Documents

Publication Publication Date Title
US11197057B2 (en) Storage management of data streamed from a video source device
TWI767972B (en) Methods for decoding/encoding video data based on gaze sensing, display devices, and cameras
US9210436B2 (en) Distributed video coding/decoding method, distributed video coding/decoding apparatus, and transcoding apparatus
US9813732B2 (en) System and method for encoding video content using virtual intra-frames
US11108993B2 (en) Predictive network management for real-time video with varying video and network conditions
US9544534B2 (en) Apparatus for and method of identifying video streams transmitted over a shared network link, and for identifying and time-offsetting intra-frames generated substantially simultaneously in such streams
US9538207B2 (en) Method and apparatus for managing video storage
US20110273563A1 (en) Video analytics with burst-like transmission of video data
CN111988610B (en) Method and bit rate controller for controlling the output bit rate of a video encoder
CN111726584A (en) Video monitoring system
US20220030225A1 (en) Video data burst control for remote towers
US8249141B1 (en) Method and system for managing bandwidth based on intraframes
CN113068001B (en) Data processing method, device, equipment and medium based on cascade camera
US20130235928A1 (en) Advanced coding techniques
US11438545B2 (en) Video image-based media stream bandwidth reduction
US11463651B2 (en) Video frame-based media stream bandwidth reduction
EP4152747A1 (en) Methods and devices for controlling a transmission of a video stream
KR102251230B1 (en) Adaptive storage between multiple cameras in a video recording system
KR102101507B1 (en) Method and Apparatus for Lossless Network Video Transmission
US20230222840A1 (en) Systems, devices, and methods for pedestrian traffic assessment

Legal Events

AS (Assignment): Owner name: SAAB AB, SWEDEN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ENGSTROEM, HENRIK;REEL/FRAME:056146/0086. Effective date: 20210428

STPP (patent application and granting procedure in general): NON FINAL ACTION MAILED

STPP (patent application and granting procedure in general): FINAL REJECTION MAILED

STPP (patent application and granting procedure in general): RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP (patent application and granting procedure in general): ADVISORY ACTION MAILED

STPP (patent application and granting procedure in general): DOCKETED NEW CASE - READY FOR EXAMINATION

STPP (patent application and granting procedure in general): NON FINAL ACTION MAILED

STCB (application discontinuation): ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION