WO2022234575A1 - System and method for dynamic video compression - Google Patents

System and method for dynamic video compression

Info

Publication number
WO2022234575A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
colors
encoding
data
color scale
Prior art date
Application number
PCT/IL2022/050457
Other languages
French (fr)
Inventor
Michael LACHOWER
Dan SHANI
Uri Shani
Tal MELENBOIM
Original Assignee
Mythrealio Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mythrealio Ltd filed Critical Mythrealio Ltd
Publication of WO2022234575A1 publication Critical patent/WO2022234575A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/463: Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186: Adaptive coding in which the coding unit is a colour or a chrominance component

Definitions

  • the present invention relates generally to video compression, and specifically to dynamic encoding of digital images for video streaming.
  • Streaming media technology refers to the process of delivering media to an end-user.
  • An end-user may wish to play digital video or digital audio content before transmission of an entire file of content, as opposed to a downloading action that requires a complete transmission of a file before playing.
  • Streaming media techniques were only made possible with advances in data compression, due to high bandwidth requirements of uncompressed media.
  • The usage of streaming technologies, and specifically online video streaming technologies, is prevalent, and fast, high-quality, and reliable streaming is strongly needed.
  • Commonly used digital video streaming methods require a bandwidth that a common household or home wireless network cannot support. For example, raw digital video requires a bandwidth of 168 Mbit/s for standard definition (SD) video and over 1000 Mbit/s for full high definition (FHD) video.
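The raw bit rates above follow directly from frame dimensions, bits per pixel, and frame rate. A minimal sketch of that arithmetic (the SD/FHD parameter values below are illustrative assumptions, not figures taken from the invention):

```python
def raw_bitrate_mbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Raw (uncompressed) video bit rate in Mbit/s."""
    return width * height * bits_per_pixel * fps / 1e6

# Illustrative parameters; exact figures depend on chroma subsampling
# and frame-rate conventions.
sd = raw_bitrate_mbps(720, 480, 16, 30)      # ~166 Mbit/s, near the 168 cited
fhd = raw_bitrate_mbps(1920, 1080, 24, 30)   # ~1493 Mbit/s, over the 1000 cited
```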
  • a variety of compression techniques which enable practical streaming media are based on the discrete cosine transform (DCT) which formed the basis for the first practical video coding format, H.261 and other video coding standards such as MPEG and MPEG-4 (MP4).
  • MP4 is commonly used for defining compression of audio and video digital data.
  • MP4 may be used for compression of audio and video data for web applications (e.g., streaming media), voice (e.g., telephone and videophone applications), and broadcast television applications.
  • There is a need for an improved compression technique that may lower bandwidth requirements and improve reliability and speed in comparison to common compression techniques.
  • Most compression algorithms employ a single, fixed algorithm for video compression. Because video content varies, using a single compression method for different types of video data may not give optimal results for all of them. A compression method that applies the optimal compression algorithm for every type of video content is therefore needed.
  • A system and a method for color-based encoding of image frames in a video stream may include scanning one or more image frames from video data, where the video data includes a plurality of image frames; creating a color scale to represent an indexed value of pixel colors based on the frequency of pixel colors in an image frame; and encoding each of the plurality of image frames based on the color scale.
  • a size of the indexed value may be dynamically determined based on a number of colors in the scale.
  • the color scale size may be dynamically determined based on a number of colors in the image frame.
  • a size of an encoded image frame corresponds to the number of colors in the image.
  • scanning the one or more image frames may include determining division parameters of an image.
  • Scanning the one or more image frames may include encoding the one or more image frames according to a plurality of encoding methods and selecting an encoding method from the plurality of encoding methods according to results of the encoding.
  • selecting an encoding method from the plurality of encoding methods may include selecting an encoding method based on a size of one or more encoded image frames.
  • In embodiments of the invention, creating a color scale may include identifying the frequency of sets of red, green, blue (RGB) pixel values and updating the color scale when a new set of RGB pixel values is detected.
  • In embodiments of the invention, encoding each of the plurality of image frames based on the color scale may include determining a number of data bytes to represent the colors in the encoded image.
  • Determining the number of data bytes to represent the colors in the encoded image may include using a direct representation of colors if the number of colors is below a predefined threshold and using a dynamic representation if the number of colors is above the threshold.
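The direct-versus-dynamic choice above can be sketched as follows; the one-byte threshold of 256 and the specific index widths are illustrative assumptions, not values fixed by the invention:

```python
def bytes_per_index(num_colors: int, threshold: int = 256) -> int:
    """Pick how many bytes encode each pixel's color index.

    Below the threshold a single byte suffices (direct representation);
    above it, the index width grows with the palette size (dynamic
    representation). Threshold and widths here are illustrative.
    """
    if num_colors <= threshold:
        return 1                      # direct: one byte per pixel index
    width = 2
    while num_colors > 256 ** width:  # grow until every index fits
        width += 1
    return width
```

For example, a 200-color frame needs one byte per index, while a 70,000-color frame needs three.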
  • transmitting the compressed video data may include compressing and decompressing the video data based on the division parameters.
  • Compressing the video data may include dividing each of the plurality of image frames into a plurality of sub-images, detecting a sub-image that includes changes in comparison to the corresponding previous sub-image, and transmitting the sub-image that includes changes.
  • transmitting the compressed video data may include alternately transmitting unique communication commands.
  • the encoded image frame is decoded.
  • decoding includes reversing one or more steps used in encoding the image frame.
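As a sketch of reversing the color-indexing step, assuming the color scale is stored as a color-to-index mapping (an assumption of this example, not a structure the patent specifies):

```python
def decode_with_scale(encoded, scale):
    """Reverse the color-scale encoding step: map each pixel's index
    back to its RGB value by inverting the color scale."""
    inverse = {idx: color for color, idx in scale.items()}
    return [inverse[i] for i in encoded]
```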
  • FIG. 1 is a schematic illustration of a system in accordance with embodiments of the invention.
  • Fig. 2 is a flowchart of a method for image color-based encoding and video compression, according to embodiments of the invention
  • Fig. 3 illustrates a process for selecting image encoding method, according to embodiments of the invention
  • Fig. 4 is a flowchart of a method for color scale creation for color-based encoding, according to embodiments of the invention.
  • Fig. 5 is a flowchart of a method for color scale encoding, according to embodiments of the invention
  • Fig. 6 is a flowchart of a method for color-based encoding, according to embodiments of the invention.
  • Fig. 7 is a flowchart of compression parameters determination for video compression, according to embodiments of the invention.
  • Fig. 8 is a flowchart of a method for video compression, according to embodiments of the invention
  • Fig. 9 is a schematical illustration of video stream data structure, according to embodiments of the invention
  • Figs. 10A, 10B and 10C depict data message forms according to embodiments of the invention
  • Fig. 11 illustrates an exemplary computing device according to an embodiment of the invention.
  • the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”.
  • the terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.
  • the term “set” when used herein may include one or more items unless otherwise stated.
  • the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed in a different order from that described, simultaneously, at the same point in time, or concurrently.
  • Embodiments of the invention may include image data encoding and video compression algorithms capable of transferring a video stream using a variety of compression techniques determined at runtime according to the structure of the data. It will be understood that the decompression process may be achieved by following the same compression chain steps in reverse order. Dynamic compression according to embodiments of the invention may lower the bandwidth requirements for transmitting any given video stream in comparison to methods known in the art. This may allow seamless video transmission to a plurality of end target platforms as well as any other platform or system. According to embodiments of the invention, a video stream may contain, in addition to the compressed video data, a dedicated and/or optional command data transfer channel that may act as a controller and may allow controlling devices (such as, for example, smart televisions and live gaming consoles) with minimal bandwidth requirements.
  • Fig. 1 is a schematic illustration of a system in accordance with embodiments of the invention.
  • System 100 may be configured to encode image data frames, compress a video stream and transmit the compressed video stream across one or more communication networks 150, where it may be decompressed (which may include decoding) at an end device, e.g., a target device 130.
  • the video data stream may be transmitted among a host device 120, remote services 110 and a target device 130.
  • Data played on host device 120 may be efficiently transmitted and displayed on a screen 140 connected to target device 130 according to embodiments of the invention.
  • A game running on host device 120 (e.g., a mobile device) or a movie streamed from remote services 110 may be displayed to a user on screen 140, e.g., a screen of a smart television.
  • Host device 120 may be any computerized end-device, for example, a laptop, a smartphone, a workstation, a personal computer, a game console, an iPad, or any other computerized end-device.
  • Host device 120 may include a processor 121, a controller 122 and a memory 123. Other modules, elements or units may be included in host device 120.
  • Target device 130 may be any computerized device that may receive data transmission and may allow displaying data on screen 140.
  • Target device 130 may include or may be connected to screen or monitor 140 in a wireless or wired connection 135. In some embodiments of the invention screen 140 may be connected to target device 130 while in other embodiments of the invention screen 140 may be embedded or included in target device 130.
  • Remote services 110 may include one or more services separated from host device 120 and/or target device 130 which may connect to host device 120 and/or target device 130 by one or more communication networks 150.
  • Remote services 110 may include, for example, a remote processor 111, a gateway server 112 and a database 113. Any other device, service or platform may be included in remote services 110.
  • Any of the components included in system 100 may be the example computer system shown in Fig. 11 and any of the operations described with relation to system 100 may be performed, for example, by the example computer system shown in Fig. 11.
  • Communication network 150 may include one or more communication networks and may connect between remote services 110, host device 120 and target device 130.
  • Communication network 150 may include, for example, a local area network (LAN) that interconnects computers within a limited area, or a wide area network (WAN) that covers a larger geographic distance.
  • Communication network 150 may include one or more wired or wireless communication networks or communication technologies such as, for example, Internet, Wi-Fi, Ethernet, or any other communication method known in the art.
  • streaming video data may be rendered by host device 120 e.g., a mobile smartphone.
  • processor 121 and controller 122 of host device 120 may execute a computer game and may transmit video data via communication network 150 to target device 130, e.g., a smart television.
  • Host device 120 may communicate by controller 122 with remote services 110 via communication network 150.
  • the game host may run in remote processor 111.
  • the online game may run, e.g. executed by controller 122, in host device 120 and a request for communication may be transmitted from host device 120 to gateway server 112.
  • Game data and user account data may be transmitted, transferred, or streamed between host device 120 and remote services 110, e.g., from host device 120 to remote services 110 and from remote services 110 to host device 120.
  • User account data may include information of host device 120 and/or a user of host device 120.
  • the user account data may be transmitted to and received from database 113 via gateway server 112.
  • Game data may include data or information on other players participating in the online game.
  • Game data may be transmitted to and received from remote processor 111 via gateway server 112.
  • Host device 120 may run or execute the executable code of the online game and the control commands of the online game and the video stream related to the online game may be streamed from host device 120 to target device 130.
  • streaming video data may be rendered by remote services 110 e.g., remote processor 111.
  • remote processor 111 may execute the online game and may transmit video data via communication network 150 directly to target device 130 and not via host device 120.
  • Host device 120 may communicate, e.g. using controller 122, with remote services 110 via communication network 150.
  • Game data and user account data may be transmitted from host device 120 to remote services 110, e.g., to gateway server 112 and from remote services 110 to host device 120.
  • the user account data may be transmitted to and received from database 113 via gateway server 112.
  • Game data may be transmitted to and received from remote processor 111 via gateway server 112 to/from host device 120.
  • Remote processor 111 may run the executable code of the online game and the control commands of the online game, and the video stream related to the online game may be streamed from remote processor 111 directly to target device 130.
  • Video data may be compressed and streamed along with dedicated command data transfer channel that may allow controlling target device 130, for example, with minimal bandwidth requirements.
  • Embodiments of the invention may combine two or more independent data streams which may include, for example, game data, video data, user account data, commands and the like.
  • Each of the combined data streams may be used independently with no loss in throughput.
  • a first data stream may include a compressed video stream and a second data stream may include two directional control data for one or more types of applications, e.g., games, smart TVs, command communication, and the like.
  • Fig. 2 is a flowchart of a method for image color-based encoding and video compression, according to embodiments of the invention.
  • Embodiments of a method for color-based encoding and video compression may be performed, for example, by the system shown in Fig. 11.
  • Operations of Fig. 2 may be used with operations in other flowcharts shown herein in some embodiments of the invention.
  • Embodiments of the invention may implement two or more innovative compression methods such as: a) image compression, also referred to herein as “image color-based encoding” or “image encoding” and b) video compression.
  • video data may be obtained or received.
  • Video data may include a plurality of digital images, frames, or image frames, also referred to herein as “full images”. Each of the full images may be divided into a plurality of sub-frames or sub-blocks, also referred to herein as “image blocks” or “frame blocks”.
  • An image block or frame block is a sub-block of a full image or frame, e.g., a full image may be divided into one or more image blocks that together complete a full image or a frame. Each of the image blocks may be divided into image packets.
  • obtaining video data may include obtaining a plurality of images or frames. Operations may be performed on full images, image blocks and/or image packets. For example, encoding, compression and transmission may include performing digital operation on any size of data, e.g., by one or more systems or elements of Fig. 11.
  • a video stream may include a plurality of full images 900.
  • Each of the full images may include a plurality of image blocks 910.
  • Each image blocks 910 may be a part of a full image 900.
  • Each of image blocks 910 may include a plurality of image packets 920. It should be understood by a person skilled in the art that the division of a full image into sub-images or image blocks 910 may be performed according to application, communication link, content and/or any other consideration or reason. Operations, processes and methods described in embodiments of the invention may be performed on full images, image blocks and/or image packets.
  • encoding operations or streaming data described in embodiments of the invention may be performed on full images, image blocks and/or image packets.
  • An exemplary image block may have a size of 4.28 gigabytes and may include 65,535 image packets. Other image block and image packet sizes may be used.
  • Two or more encoding methods may be examined, assessed, tested and/or analyzed in order to select an encoding method which may reduce data usage, processing time, transfer bandwidth and/or storage usage.
  • a preliminary scan may be performed to decide or determine which encoding method may be selected from a plurality of encoding methods.
  • One or more image frames or image blocks may be scanned in order to select an image encoding method and to determine division parameters of the one or more images for the video compression.
  • The preliminary scan, also referred to herein as the “scan step” or “preliminary scan step”, may allow selecting an encoding method which may result in the highest compression ratio for the encoded full image.
  • Each frame may be encoded by two or more encoding methods and the size of the encoded images may be saved for comparison purposes.
  • scanning the one or more image frames or image blocks may include encoding one or more image frames or image blocks according to a plurality of encoding methods and selecting an encoding method from the plurality of encoding methods according to results of the encoding.
  • a preferred encoding method may be selected from a plurality of encoding methods.
  • Scanning one or more image frames or image blocks may be performed and may include saving a size of each of the encoded image frames or image blocks. Selecting an encoding method may be performed based on the size of encoded image frames or image blocks.
  • a plurality of encoding methods may be tested, examined and/or analyzed and the method which gets optimal results, e.g., minimal storage usage and/or minimal transfer bandwidth may be selected during the preliminary scan.
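The select-by-results idea above can be sketched with standard-library compressors standing in for the candidate encoders (the candidate set and the size-only selection criterion are assumptions of this sketch, not the patent's specific encoders):

```python
import zlib, bz2, lzma

def preliminary_scan(frame: bytes) -> str:
    """Encode one sample frame with several candidate methods and
    return the name of the method producing the smallest output.
    Any encoder with a bytes -> bytes interface fits this pattern."""
    candidates = {
        "deflate": zlib.compress,
        "bzip2": bz2.compress,
        "lzma": lzma.compress,
    }
    # Save the encoded size per method for comparison, then pick the minimum.
    sizes = {name: len(encode(frame)) for name, encode in candidates.items()}
    return min(sizes, key=sizes.get)
```

A real scan would also record encoding time and other performance parameters in the analysis file, as described below.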
  • The preliminary scan step may include selection of an optimal encoding method, which may allow maximum compression for any type of video and/or gaming content, by choosing at runtime an optimal encoding method and by determining or defining division parameters to be used during video compression. Further processes, steps, operations and/or actions performed during the preliminary scan in operation 220 of Fig. 2 are described with reference to Fig. 3. If a color-based encoding method is selected, a color scale may be created as described in operation 230. If any other encoding method is selected, the color scale creation process may be skipped, as shown by arrow 225, and the encoding process may start immediately after operation 220.
  • a color scale may be created as an initial step of color-based encoding.
  • a color scale may be created to represent frequency of pixel colors in an image frame or image block.
  • the color scale may represent an indexed value of pixel colors based on frequency of pixel colors in an image frame.
  • Color-based encoding may include applying color indexing to the input image data based on a color scale, which may be implemented by any format or arrangement of data that allows remapping, translating, or assigning colors from one color space to colors of another.
  • a color lookup table (CLUT) may be used.
  • A CLUT is a correspondence table in which all colors from a certain color space are assigned an index by which they may be referenced. Referencing colors by an index may require less data to represent the actual colors.
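A CLUT-style color scale can be sketched as follows; ordering indices by color frequency (most frequent color gets the smallest index) is an assumption of this sketch, consistent with the frequency-based color scale described above:

```python
from collections import Counter

def build_color_scale(pixels):
    """Build a CLUT-style color scale: each distinct (R, G, B) tuple is
    assigned an index, most frequent colors first, and every pixel is
    replaced by its index. Returns (scale, encoded_pixels)."""
    freq = Counter(pixels)                       # frequency of each RGB set
    scale = {color: idx
             for idx, (color, _) in enumerate(freq.most_common())}
    encoded = [scale[p] for p in pixels]         # remap pixels to indices
    return scale, encoded
```

The scale would be updated whenever a new RGB set is detected, as described above.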
  • each of the plurality of image frames or image blocks may be encoded based on the color scale.
  • Encoding each of the plurality of image frames or image blocks based on the color scale may include determining a number of data bytes to represent the colors in the encoded image.
  • a size of the encoded image frame or image block may correspond to the number of colors in the image.
  • Determining the number of data bytes may include using a direct representation of colors in the encoded image if the number of colors is below a predefined threshold and using a dynamic representation if the number of colors is above the threshold.
  • video data which includes the plurality of encoded image frames or image blocks, may be compressed based on the division parameters and transmitted to a dedicated target, e.g., target device 130 of Fig. 1.
  • Compressing the video data may include dividing each of the plurality of image frames or image blocks into a plurality of sub-frames, detecting a sub-frame that includes changes in comparison to the corresponding previous sub-frame, and transmitting the sub-frame that includes changes, as indicated in operation 260.
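The change-detection step can be sketched as follows; representing each frame as a flat list of equal-sized sub-frame byte strings is an assumption of this sketch:

```python
def changed_blocks(prev_blocks, curr_blocks):
    """Compare a frame's sub-frames against those of the previous frame
    and keep only the ones that changed, tagged with their position so
    a receiver can patch them into its copy of the frame."""
    return [(i, blk)
            for i, (old, blk) in enumerate(zip(prev_blocks, curr_blocks))
            if blk != old]
```

Only the changed sub-frames (here, index 1) would then be transmitted.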
  • the compressed video data may be transmitted.
  • Transmitting the compressed video data may include transmitting unique communication commands within the compressed video data or alternately with the compressed video data.
  • the unique communication commands may include a plurality of unique communication messages.
  • a data stream used for compressed video data may contain a plurality of types of messages.
  • The data stream may include data messages and command messages. Embedding both data and commands within a single data stream may be useful for interactive applications such as video games, online games, and the like.
  • the plurality of data streams may be treated as independent streams even though all types of messages may be sent on the same communication data stream.
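Interleaving data and command messages on one stream can be sketched with simple length-prefixed framing; the [type][length][payload] layout and the tag values below are assumptions for illustration only, since the patent states only that the message types share a stream:

```python
import struct

# Message-type tags are illustrative assumptions.
MSG_DATA, MSG_COMMAND = 0x01, 0x02

def pack_message(msg_type: int, payload: bytes) -> bytes:
    """Frame a message as [type:1][length:4][payload] so data and
    command messages can be interleaved on a single stream."""
    return struct.pack(">BI", msg_type, len(payload)) + payload

def unpack_messages(stream: bytes):
    """Split a byte stream back into (type, payload) messages."""
    out, pos = [], 0
    while pos < len(stream):
        msg_type, length = struct.unpack_from(">BI", stream, pos)
        pos += 5                     # header: 1 type byte + 4 length bytes
        out.append((msg_type, stream[pos:pos + length]))
        pos += length
    return out
```

Because each message carries its own type tag, the receiver can route video data and control commands to independent handlers.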
  • Embodiments of the invention which relate to operations 250 and 260 including further processes, steps, operations and/or actions performed during compression and transmission of video data are described with reference to Fig. 8.
  • FIG. 3 illustrates a process for selecting an image encoding method, according to embodiments of the invention.
  • the flowchart of Fig. 3 shows in detail the processes, steps, operations and/or actions performed in operation 220 of Fig. 2.
  • one or more image frames may be encoded according to a plurality of encoding methods including color-based encoding, and a selected encoding method may be determined according to, or based on, the results of the encoding.
  • image data may be obtained.
  • the operation may be performed when a complete full image is generated by a host device or a remote device, e.g., by a graphic card included in host device 120 of Fig. 1 or by a graphic card included in remote processor 111 of Fig. 1.
  • a full image may be obtained by host device 120 and may be transferred from a graphic card included in host device 120 to memory 123 of host device 120.
  • color-based encoding may be applied on image data, e.g., on one or more full images. Color-based encoding method may be applied, employed, or executed as described in accordance with embodiments of the invention.
  • The one or more full images that may be scanned during the preliminary scan process may be a predetermined number of full images, a semi-random number of images, or a number of full images that may be changed dynamically during the scan process.
  • the color-based encoding method may be applied to a first full image, and additional images may be encoded if required.
  • color-based encoding data or results may be saved, for example in a dedicated analysis file, register, or any applicable data storage element.
  • a size of the encoded image frame or image block may be saved and, in addition, the encoded image itself may be saved.
  • Other information or data related to the color-based encoding of the one or more sample full images may be saved in an analysis file.
  • the analysis file may include encoded image size, performance parameters such as, encoding time, file size, maximal and/or minimal compression records and the like.
  • one or more encoded full images may be saved.
  • one or more other encoding methods may be performed on image data, e.g., on one or more full images.
  • Exemplary encoding methods may include lossy or lossless compression method such as Deflate, Discrete Cosine Transform (DCT), Joint Photographic Experts Group (JPEG), wavelet compression, fractal compression, run-length encoding, area image compression, adaptive dictionary algorithms, predictive coding, or any other encoding method and/or algorithm.
  • encoding data or results of any encoding method may be saved, for example in a dedicated analysis file.
  • An analysis file may be generated for each encoding method used in the preliminary scan step. In addition, the one or more encoded image frames or image blocks, and the size of each encoded image frame or image block, may be saved.
  • The data or information collected for each of the encoding methods may be saved in a single file or database, or in any manner that allows comparing the performance of the different encoding methods.
  • operations 310-350 may be repeated as indicated by arrow 355.
  • An encoding method may be selected from the plurality of encoding methods based on the results of the encoding process in the preliminary scan, for example based on the size of the encoded image frames or image blocks produced by each encoding method.
  • the scan process may be employed according to a predefined definition, e.g., of a system operator, a user or as a default process.
  • Encoding by one or more encoding methods may be performed on all pixels in one or more sample images, screenshots and/or full frames.
  • Embodiments of the invention may allow selecting the encoding method based on the compressed data size and/or compression time of one or more encoded image frames or image blocks. For example, an encoding method having the highest compression ratio of the sample image may be selected.
  • The final values saved (e.g., by dedicated counters) for each encoding method may be compared to decide on the optimal image encoding method to apply while encoding frames in the video stream.
  • compression parameters may be determined based on the encoding process during the preliminary scan. While encoding the sample images, data related to video compression may be stored and analyzed. Parameters related to optimal division of an image during video compression may be determined, selected, or defined.
  • Embodiments of the invention exploit the fact that, commonly, a large number of pixels does not change between two consecutive images of the same video stream.
  • embodiments of the invention may divide each image frame into a plurality of smaller images, image blocks, sub-images or parts of the image.
  • a plurality of testing calculations may be performed on the image data, e.g., on screenshots, to determine the division parameters for the video compression. For example, calculations may be performed to determine the width and height of the sub-images or image parts. The dimensions of the sub-images defined by the selected division parameters may directly impact the bandwidth required for transferring the video stream and may be used to improve performance of video data transferring or streaming.
  • FIG. 7 is a flowchart of compression parameters determination for video compression, according to embodiments of the invention.
  • the flowchart of Fig. 7 shows in detail the processes, steps, operations and/or actions performed in operation 370 of Fig. 3.
  • division parameters may be set.
  • Setting the division parameters may include defining preliminary, initial, opening or first parameters to be checked or tested in order to determine the preferred division parameters, which may allow optimal division of a full image, an image frame, or an image block for the highest compression ratio in minimal compression time.
  • a full image may be divided into a plurality of sub-images or partial images.
  • An initial set of two parameters may be defined or set, e.g., parameter “X” may define a number of horizontal division lines and parameter “Y” may define a number of vertical division lines.
  • An exemplary initial value of “1” may be set for the “X” and “Y” parameters.
  • Other parameters and initial values may be set or defined.
  • a full image, an image frame or an image block may be compressed using the division parameters. For example, if “X” and “Y” parameters are set to “1”, a full image or an image block may be divided into 4 sub-images which may be encoded and compressed according to encoding and compression methods described in embodiments of the invention.
  • the compression results may be saved, for example, in a dedicated analysis file.
  • the compression results may include the size of each of the compressed sub-images and the compression time.
  • Other data, information or statistics may be saved in a dedicated analysis file.
  • a check is performed regarding additional division parameters to be tested. If additional division parameters are required, operations 710-740 may be repeated until all division parameters are tested, as indicated by arrow 745.
  • The “X” and “Y” parameters may be defined as loop parameters which are increased each iteration by a predefined value; operations 710-740 may be repeated until all combinations of values of the “X” and “Y” parameters have been checked.
  • the preferred division parameters may be selected as indicated in operation 750. A selection may be performed based on the results of all previous compression results with all the division parameters that may have been checked.
  • a local minimum for the size of a single frame according to width and height of the sub-images may be determined. Based on the detected local minimum, the division parameters, e.g., width and height of the sub-images may be determined.
  • the preferred division parameters which may give optimal compression results may be saved.
  • the division parameter values that obtain the best ratio of compression time to compressed data size may be selected. Any other optimality criterion may be defined and used for the selection process.
  • Embodiments of the invention may store all relevant data to perform optimal video compression of the transmitted data while the algorithm is running.
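The parameter search of operations 710-750 amounts to a grid search over the “X” and “Y” division counts, compressing the divided image for each combination and keeping the best result. The sketch below is an illustrative assumption: the cost function (zlib-compressed size, ties broken by time), the loop bounds, and the helper names are not taken from the patent.

```python
import itertools
import time
import zlib

def compress_divided(image_rows, x_lines, y_lines):
    """Divide a 2D image (list of pixel-value rows) by x_lines horizontal and
    y_lines vertical division lines, compress each sub-image, and return
    (total_compressed_size, elapsed_seconds) - as in operations 720-730."""
    h, w = len(image_rows), len(image_rows[0])
    row_cuts = [round(i * h / (x_lines + 1)) for i in range(x_lines + 2)]
    col_cuts = [round(j * w / (y_lines + 1)) for j in range(y_lines + 2)]
    start = time.perf_counter()
    total = 0
    for r0, r1 in zip(row_cuts, row_cuts[1:]):
        for c0, c1 in zip(col_cuts, col_cuts[1:]):
            block = b"".join(bytes(row[c0:c1]) for row in image_rows[r0:r1])
            total += len(zlib.compress(block))
    return total, time.perf_counter() - start

def select_division_parameters(image_rows, max_lines=4):
    """Try all (X, Y) combinations and keep the one minimizing compressed
    size, with compression time as a tie-breaker (operations 710-750)."""
    best = None
    for x, y in itertools.product(range(1, max_lines + 1), repeat=2):
        size, elapsed = compress_divided(image_rows, x, y)
        score = (size, elapsed)  # assumed optimality criterion
        if best is None or score < best[0]:
            best = (score, (x, y))
    return best[1]

# Example: a synthetic 32x32 "image" of single-byte pixel values.
sample = [[(r * c) % 251 for c in range(32)] for r in range(32)]
best_xy = select_division_parameters(sample)
```

In a real system the candidate grid and the scoring would come from the stored analysis data of the preliminary scan rather than a fixed `max_lines`.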
  • Fig. 4 is a flowchart of a method for color scale creation for color-based encoding, according to embodiments of the invention.
  • the flowchart of Fig. 4 shows in detail the processes, steps, operations and/or actions performed in operation 230 of Fig. 2.
  • the color scale creation process may be performed at the first time the color-based encoding is performed, e.g., during preliminary scan process or anytime during the encoding process in run-time when a first full image is scanned.
  • a color scale may be initiated, established, or formed. For example, an empty list which may be suitable to save or store color values of pixels may be formed and values of common color values, e.g., black and white, may be added to the color scale.
  • the color values of pixels may be a red, green, blue (RGB) representation of the pixels.
  • Embodiments of the invention are described with relation to the use of an RGB representation. It should be understood by a person skilled in the art that any other color value representation may be used.
  • color values of a next pixel may be detected, e.g., the RGB values of the next pixel in the detected first image or a detected sample image.
  • it may be checked whether the color values of the detected pixel, e.g., the RGB values of the detected pixel, are included in the color scale. If the color values of the detected pixel are included in the color scale, a value or counter which counts the frequency of occurrences of that color value may be incremented, as indicated in operation 440. If the color values of the detected pixel are not included in the color scale, the color values of the detected pixel may be added to the color scale, as indicated in operation 450.
  • In operation 460 it may be checked if all pixels of the image were scanned. If not all the pixels of the image were scanned, as indicated by arrow 465, operation 420 may be repeated and a next pixel in the image may be detected as described in operation 420.
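The scan loop described above reduces to a frequency count over pixel colors. A minimal sketch follows; pre-seeding the scale with black and white mirrors the initialization step, while the choice of a `Counter` and the flat pixel list are assumed details:

```python
from collections import Counter

def create_color_scale(pixels):
    """Count occurrences of each RGB tuple in an image (operations 420-465).
    The scale starts pre-seeded with common colors, here black and white."""
    counts = Counter({(0, 0, 0): 0, (255, 255, 255): 0})
    for rgb in pixels:  # detect each pixel; add it or increment its counter
        counts[rgb] += 1
    return counts

# Example: a tiny four-pixel image.
image = [(0, 0, 0), (0, 0, 0), (10, 20, 30), (255, 255, 255)]
scale = create_color_scale(image)
```

Sorting `scale.most_common()` then yields the frequency-ordered list of colors that the following paragraphs describe.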
  • a color scale may be a table which may include all colors, e.g., RGB values of each color in an image frame, image block or video stream, and an index or a value assigned to each color by which it can be referenced or encoded.
  • the color scale may further include, and/or may be arranged according to, a respective frequency of each color in the image frame, image block or video stream.
  • a color scale may be created to represent an indexed value of pixel colors based on a frequency of pixel colors in an image frame, image block or video stream such that a size of the indexed value may be dynamically determined based on a number of colors in the scale.
  • the values assigned to each color in the scale may be determined according to the frequency of each color in the scale.
  • the scale may include a list of colors organized, ordered, or arranged such that the most frequent color, e.g., a color which has the largest number of appearances in an image or stream, may be first, while the least frequent color, e.g., a color which has the smallest number of appearances in an image, may be last.
  • a color scale may include a plurality of information units, e.g., a plurality of bytes, which may be used to represent the colors of pixels in a scanned image frame and an indexed value assigned to each of the pixel colors.
  • each pixel of a scanned image frame may have an RGB value, also referred to herein as RGB tuple.
  • An exemplary color scale may include 3 data bytes to represent each RGB tuple of each of the pixels in the image frame and one or two data bytes to represent the indexed value assigned to each of the pixel colors.
  • a black color for which its RGB decimal value representation is (0,0,0) may be represented by three bytes while its referencing, representation, or indexed value in the scale may be dynamically determined to be between one and two bytes based on a number of colors in the scale which is related to the number of colors in the image frame or a plurality of image frames scanned to create the scale.
  • each RGB tuple may be saved using a plurality of consecutive bytes with no unused bytes of data in between.
  • a color scale may further include a plurality of information units which may contain a unique combination to represent that this is a color scale, e.g. an identification of the color scale as a color scale.
  • Embodiments of the invention describe a first encoding process which relates to encoding during creation of a color scale.
  • This encoding process may include encoding color information using fewer bits than the original representation, for example, as described in Fig. 5.
  • a second encoding process described in embodiments of the invention relates to encoding of images in a video stream. During encoding of the images in the video, the color scale is used, for example, as described in operation 240 of Fig. 2 and as described in Fig. 6.
  • FIG. 5 is a flowchart of a method for color scale encoding, according to embodiments of the invention.
  • the flowchart of Fig. 5 shows in detail the processes, steps, operations and/or actions performed in operation 240 of Fig. 2, and/or operation 470 of Fig. 4.
  • the values in the color scale may be organized according to their frequency. For example, the list of colors may be organized, ordered, or arranged such that the most frequent color, e.g., a color which has the largest number of appearances in an image, may be first, while the least frequent color, e.g., a color which has the smallest number of appearances in an image, may be last.
  • a direct representation encoding may include encoding of the plurality of colors in the color scale by a predefined number of information units which may contain a unique combination to represent each of the plurality of colors in the color scale. For example, if there are less than 255 different colors in the color scale, each color may be encoded by a unique one-byte representation.
  • each of the plurality of colors in the color scale may receive a representation related to the frequency of its appearances in the image and/or in the color scale. For example, the most frequent color in the image may receive a minimal representation, e.g., a one-byte representation of eight “zeros”. The least frequent color in the image may receive a maximal representation, e.g., a one-byte representation of eight “ones”.
  • the color scale may be encoded by a dynamic representation, as indicated in block 540 which includes operations 550-580.
  • it may be checked whether the number of colors in the color scale is above a predetermined second threshold or level, e.g., above a predefined second threshold value. For example, it may be determined whether there are more than 32,894 different colors in the color scale.
  • a number of the most similar, resembling colors, equal to the second threshold, may be determined based on similarity algorithms or methods known in the art, as indicated in operation 560. For example, if it is determined that there are more than 32,894 different colors in the color scale, the 32,894 most similar colors may be determined in order to allow representing all colors by at most a two-byte representation, e.g., as described in operations 570-580.
  • a first group of colors in color scale may be encoded using a first unique representation as indicated in operation 570.
  • the first unique representation may include a predefined first number of information units and an indication to mark the first unique representation.
  • a second group of colors in color scale may be encoded using a second unique representation as indicated in operation 580.
  • the second unique representation may include a predefined second number of information units and an indication to mark the second unique representation.
  • the first 127 colors in the color scale may be encoded by a one-byte representation which may include a unique seven-bit sequence and an indication bit with a zero value at the beginning of the one-byte representation sequence.
  • the exemplary number of 127 colors is the maximal number which may be encoded by 7 bits; the additional 8th indication bit may indicate the 1-byte representation.
  • the next 32,767 colors in the color scale (which are the result of 32,894 colors less 127 colors) may be encoded by a two-byte representation which may include a unique 15-bit sequence and an indication bit with a “one” value at the beginning of the two-byte representation sequence.
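The dynamic representation of operations 570-580 is, in effect, a one-bit-flag variable-length code: the first 127 frequency-ordered indices fit in a single byte whose leading bit is 0, and the next 32,767 fit in two bytes whose leading bit is 1. A sketch under those assumptions (the exact bit layout is an interpretation, not the patent's normative format):

```python
def encode_index(index):
    """Encode a frequency-ordered color index (0 = most frequent) as one byte
    (flag bit 0 + 7-bit value) or two bytes (flag bit 1 + 15-bit value)."""
    if index < 127:
        return bytes([index])  # high bit 0 marks the one-byte form
    value = index - 127
    if value >= 32767:
        raise ValueError("index out of range for two-byte representation")
    # High bit 1 marks the two-byte form; remaining 15 bits carry the value.
    return bytes([0x80 | (value >> 8), value & 0xFF])

def decode_index(data):
    """Inverse of encode_index; returns (index, bytes_consumed)."""
    if data[0] & 0x80 == 0:
        return data[0], 1
    return ((data[0] & 0x7F) << 8 | data[1]) + 127, 2
```

Because the scale is frequency-ordered, the most common colors in the image get the short one-byte codes, which is what drives the compression gain.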
  • the encoded color scale may be saved in a dedicated file or database, e.g., in memory 123 of host device 120, and/or memory of remote processor 111 of Fig. 1.
  • Fig. 6 is a flowchart of a method for color based- encoding, according to embodiments of the invention.
  • the flowchart of Fig. 6 shows in detail the processes, steps, operations and/or actions performed in operation 240 of Fig. 2.
  • the color-based encoding process may be performed at the run time during transmission or streaming of video data from a first device to a second device. For example, when video data stream is transmitted between host device 120 and/or remote services 110 and target device 130 of Fig. 1.
  • a check may be performed to ensure color scale exists, e.g., in memory 123 of host device 120, and/or memory of remote processor 111 of Fig. 1. If it is determined that no color scale was created, a color scale may be created as indicated in operation 620 and described in detail with reference to Fig. 4 and Fig. 5. If a color scale exists, encoding may be proceeded and video data which may include a plurality of full image data may be obtained as indicated in operation 630, e.g., as described in operation 210 of Fig. 2.
  • color values of a next pixel may be detected, e.g., RGB values of a next detected pixel in a detected full image.
  • In operation 650 it may be checked whether the color values of the detected pixel, e.g., the RGB values of the detected pixel, are included in the color scale. If the color values of the detected pixel are not included in the color scale, the color values of the detected pixel may be added to the color scale, as indicated in operation 660. This may allow updating the color scale at run-time, during encoding of the streamed video data.
  • the encoded image may be saved in a dedicated memory, e.g., memory 1120 of Fig. 11.
  • each detected image may be scanned to find all colors existing in the detected image, e.g., the unique sets of RGB triplets in detected input image.
  • the encoding may be performed according to or based on the number of colors detected and their frequency based on colors in the color scale and according to a required image quality and data size considerations. For example, a number of unique colors may be selected using one of, for example, two options based on the required result.
  • a first option may include, for example, up to 256 unique colors to improve transmission speed of the encoded data and a second option may include 32,894 unique colors to improve image quality.
  • In operation 680 it is checked and determined whether all pixels of the full image were encoded. If not all pixels of the full image were encoded, the process may be repeated by returning to operation 640, and a next pixel of the full image may be examined and encoded according to operations 640-670. If all pixels of the full image are encoded, one or more additional filter algorithms and/or compression algorithms may be run, executed and/or processed, as indicated in operation 690. Additional filter algorithms and/or compression algorithms may be applied to the encoded images to further reduce the size of the encoded data and/or to prepare the encoded images for transmission, e.g., for streaming purposes.
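The per-pixel loop of operations 640-680 can be sketched as a single pass that maps each pixel through the scale, extending the scale on the fly for unseen colors. Assigning the next free index to a new color is an assumed detail; the patent only states that new colors are added at run-time:

```python
def encode_image(pixels, scale):
    """Encode pixels as color-scale indices (operations 640-680), adding any
    color not yet in the scale (operation 660) while encoding."""
    encoded = []
    for rgb in pixels:
        if rgb not in scale:
            scale[rgb] = len(scale)  # assumed policy: next free index
        encoded.append(scale[rgb])
    return encoded

# Example: a scale seeded with black; one new color appears mid-stream.
scale = {(0, 0, 0): 0}
out = encode_image([(0, 0, 0), (9, 9, 9), (0, 0, 0)], scale)
```

The resulting index stream would then be serialized with the one- or two-byte representation described above before the filter and deflate stages.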
  • An additional filtering algorithm that may be applied according to embodiments of the invention may include “scan line serialization”, which may analyze one or more horizontal lines of pixels, also referred to herein as “scanlines”.
  • In scan line serialization, each pixel may be represented by one entry in the color scale in a predetermined order, e.g., ordered left to right in a scanline, and the plurality of scanlines may also be ordered in a predetermined manner, e.g., from top to bottom.
  • further filtering algorithms may include transforming each scanline into a filtered scanline using one of the defined filter types to prepare the scanline for additional image compression.
  • Some embodiments may include additional deflate compression algorithms applied on all the filtered scanlines in the image.
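The combination of per-scanline filters followed by deflate resembles PNG-style pre-filtering. The patent does not name its filter types, so the sketch below uses the common “Sub” filter (each byte minus its left neighbor, modulo 256) purely as an illustrative assumption:

```python
import zlib

def sub_filter(scanline):
    """PNG-style 'Sub' filter: each byte minus its left neighbor, mod 256.
    Smooth gradients become runs of small deltas, which deflate well."""
    return bytes((b - (scanline[i - 1] if i else 0)) & 0xFF
                 for i, b in enumerate(scanline))

def compress_scanlines(scanlines):
    """Transform each scanline into a filtered scanline, then apply deflate
    to the concatenation of all filtered scanlines."""
    return zlib.compress(b"".join(sub_filter(s) for s in scanlines))

# Example: 16 identical smooth-gradient rows compress far below raw size.
lines = [bytes(range(64)) for _ in range(16)]
compressed = compress_scanlines(lines)
```

A decoder would reverse the steps: inflate, then undo the filter scanline by scanline in the same top-to-bottom order.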
  • data stream construction may be performed to generate the final required data stream consisting of data chunks.
  • video compression may serve as an additional layer of compression on top of the selected image encoding.
  • Video compression algorithms may be based on the assumption that in most video streams there are pixels that may not be changed between consecutive images of the video.
  • Each of the plurality of full images or image blocks received within the video stream may be one of two types of images: a) full image data, which may contain a full resolution image from the input video; and b) an image divided into sub-images or a grid of smaller images.
  • the divided image data may contain a list of smaller parts inside the image that have changed in comparison to the last displayed frame. These parts may be drawn on top of and/or instead of the same sections in the previous frame. Every sub-image may contain the coordinates of the top left corner of that sub-image part in the full image, so the full image may be reconstructed based on that information.
  • FIG. 8 is a flowchart of a method for video compression, according to embodiments of the invention.
  • the flowchart of Fig. 8 shows in detail the processes, steps, operations and/or actions performed during transmission of compressed video data in operation 260 of Fig. 2.
  • a next full image may be obtained, e.g., video data may be obtained or received as described in operation 210 of Fig. 2.
  • a full frame may be divided based on or according to the division parameters selected or determined in the preliminary scan, as described in detail with reference to operation 370 of Fig. 3 and Fig. 7.
  • a full image may be divided into a plurality of sub-images. The number of sub-images may be defined according to the division parameters.
  • each sub-image may be compared to a respective sub-image of a previous full image.
  • In operation 840 it is checked whether a change from a previous sub-image is detected. If no change from a previous sub-image is detected, data from the previous sub-image may be used, as indicated in operation 860. If a change from a previous sub-image is detected, data of the current sub-image may be used, as indicated in operation 850.
  • In operation 870 it is checked whether all sub-images of a full image have been checked or scanned. If not all sub-images of a full image are checked, the process may be repeated by returning to operation 810, and a next full image may be obtained and examined according to operations 810-870.
  • the video stream may be prepared for transmission as indicated in operation 880.
  • Preparing the video stream for transmission may include, for example, preparing a list of smaller parts inside the image that have changed in comparison to the last displayed frame. These parts may be received by the end device and drawn on top of and/or instead of the same sections in the previous frame. Every sub-image may contain the coordinates of the top left corner of that sub-image part in the full image, so the full image may be reconstructed based on that information, e.g., a full image may contain a plurality of sub-images, each positioned according to its coordinates inside the full image.
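The per-frame loop of operations 810-880 reduces to comparing each sub-image with its counterpart in the previous frame and keeping only the changed parts, each tagged with its top-left coordinates. A sketch with assumed data structures (images as lists of rows; a production system would use arrays):

```python
def split_blocks(image, bw, bh):
    """Divide an image (list of rows) into ((x, y), sub-image) pairs, where
    (x, y) is the top-left pixel coordinate of each bw x bh block."""
    h, w = len(image), len(image[0])
    for y in range(0, h, bh):
        for x in range(0, w, bw):
            yield (x, y), [row[x:x + bw] for row in image[y:y + bh]]

def changed_subimages(prev, curr, bw, bh):
    """Keep only the sub-images of `curr` that differ from the same region
    of `prev` (operations 830-860), each with its coordinates for
    reconstruction on the receiving end."""
    prev_blocks = dict(split_blocks(prev, bw, bh))
    return [(xy, blk) for xy, blk in split_blocks(curr, bw, bh)
            if prev_blocks[xy] != blk]

# Example: two 8x8 frames differing in a single pixel.
frame1 = [[0] * 8 for _ in range(8)]
frame2 = [row[:] for row in frame1]
frame2[5][5] = 1
delta = changed_subimages(frame1, frame2, 4, 4)
```

With 4x4 blocks, only the one block containing the changed pixel is kept, so only a quarter of the frame would be transmitted.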
  • compressed video data and command data may be transmitted.
  • the transmitted stream may contain data messages, which may include compressed video data, and command data, which may be carried over a dedicated command data transfer channel that may act as a controller and may allow control of devices such as, for example, smart televisions and live gaming consoles with minimal bandwidth requirements.
  • data and command streams may be treated as independent streams even though both types of messages are sent on the same communication data stream.
  • a dedicated command channel, or command message, may include a plurality of commands.
  • the following four types of commands may be used: a) Universal device commands which may include common device control functionality commands available on most platforms such as, for example, volume control, channel selection, chapter selection, and the like; b) Device specific commands which may include a set of dedicated command slots to define a set of device commands unique to a specific end platform; c) Universal game commands which may include common game control functionality supported by most games, e.g., arrow keys, enter button, and the like; and d) Game specific commands which may include a set of dedicated command slots to define a set of game commands unique to a specific game. Other commands and other types of commands may be defined and used with embodiments of the invention.
  • Figs. 10A, 10B and 10C depict data message forms according to embodiments of the invention.
  • Figs. 10A and 10B depict image data messages and Fig. 10C depicts a protocol data message.
  • Fig. 10A shows an exemplary representation of a message which may include data related to image packet.
  • the exemplary representation of Fig. 10A may be used according to embodiments of the invention when information or data related to image packets, e.g., image packet 920 of Fig. 9 may be transmitted.
  • Image packet message 1000 may include a plurality of information units, e.g., a plurality of bytes, which may represent the information carried by image packet message 1000.
  • the first group of data bytes may represent a header 990 of image packet message 1000 and a second group of data bytes may represent data 991 carried by image packet message 1000.
  • Header 990 may include, for example, a plurality of fields or areas, each may carry, include, or represent specific data.
  • header 990 may include 19 bytes which may be divided as follows: 1 byte for a message type 940, 4 bytes for a video frame size 941, 4 bytes for a frame write location indication 942, 4 bytes for a data size indication 943, 2 bytes for a packet number 944, 2 bytes for a total packet indication 945, 2 bytes for a message number indication 946.
  • Data 991 may include up to 65 Megabyte (MB) of image packet data. Any other number of bytes may be used.
  • Message type 940 may include an indication for the type of the message, e.g., “1” may represent a protocol message, “10” may represent a control message, “120” may represent image packet message data and “220” may represent an image frame or image block message.
  • Video frame size 941 may include a size of a current image frame or image block carried by image packet message 1000.
  • Frame write location indication 942 may include an indication of a byte number inside an image frame or image block where the current image packet starts.
  • Data size indication 943 may include an indication of a size of the data portion of a message.
  • Packet number 944 may include a location of the current image packet in a message.
  • Total packet indication 945 may include a number of packets in the message.
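The 19-byte header layout of image packet message 1000 can be packed with a fixed-format binary encoder; Python's struct module is used here for illustration, and the field order follows the description above while the big-endian byte order is an assumption:

```python
import struct

# 1 B message type, 4 B video frame size, 4 B frame write location,
# 4 B data size, 2 B packet number, 2 B total packets, 2 B message number
# -> 19 bytes total, matching header 990 of image packet message 1000.
HEADER_FMT = ">BIIIHHH"

def pack_image_packet_header(msg_type, frame_size, write_loc,
                             data_size, packet_no, total_packets, msg_no):
    """Serialize the header fields 940-946 into the 19-byte header."""
    return struct.pack(HEADER_FMT, msg_type, frame_size, write_loc,
                       data_size, packet_no, total_packets, msg_no)

# Example: an image packet message (type "120") carrying a 64 KB chunk.
header = pack_image_packet_header(120, 1_000_000, 0, 65536, 1, 16, 7)
```

The image frame header of Fig. 10B has the same size and differs only in replacing the frame size and write location fields with X and Y start coordinates, so the same approach applies with a matching format string.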
  • Fig. 10B shows an exemplary representation of a message which may include data related to an image frame.
  • the exemplary representation of Fig. 10B may be used according to embodiments of the invention when information or data related to an image frame, e.g., image frame 910 of Fig. 9 may be transmitted.
  • Image frame message 1100 may include a plurality of information units, e.g., a plurality of bytes, which may represent the following information carried by image frame message 1100.
  • the first group of data bytes may represent a header 992 of image frame message 1100 and a second group of data bytes may represent data 993 carried by image frame message 1100.
  • Header 992 may include, for example, a plurality of fields or areas, each may carry, include, or represent specific data.
  • header 992 may include 19 bytes which may be divided as follows: 1 byte for a message type 940, 4 bytes for x coordinate start location 951, 4 bytes for y coordinate start location 952, 4 bytes for a data size indication 943, 2 bytes for a packet number 944, 2 bytes for a total packet indication 945, 2 bytes for a message number indication 946.
  • Data 993 may include up to 65 megabytes (MB) of image frame data. Any other number of bytes may be used.
  • Message type 940, Data size indication 943, Packet number 944, Total packet indication 945 and message number indication 946 may be similar to their description in Fig. 10A.
  • X coordinate start location 951 may include an X coordinate of the top left corner of the image frame.
  • Y coordinate start location 952 may include a Y coordinate of the top left corner of the image frame.
  • Protocol or commands message 1200 may include a plurality of information units, e.g., a plurality of bytes, which may represent information related to control commands. Protocol message 1200 may include unique communication commands which may be transmitted within the compressed video data or alternately with compressed video data on a same communication channel.
  • a first group of data bytes may represent a header 994 of protocol message 1200 and a second group of data bytes may represent data 995 carried by protocol message 1200.
  • Header 994 may include, for example, a plurality of fields or areas, each may carry, include, or represent specific data.
  • header 994 may include 5 bytes which may be divided as follows: 1 byte for a message type 940 and 4 bytes for message length field 961.
  • Data 995 may include up to 65 Megabyte (MB) of protocol command message data 962. Any other number of bytes may be used.
  • message type 940 may be similar to its description in Fig. 10A
  • message length field 961 may include an indication of the length of the data portion of the protocol message, e.g., message data 962.
  • Message data 962 may include data related to unique communication commands or control commands to be used in accordance with embodiments of the invention.
  • Figs. 10A, 10B and 10C show exemplary representations of data message forms. Any other data message form may be used in accordance with embodiments of the invention.
  • Fig. 11 illustrates an exemplary computing device according to an embodiment of the invention.
  • a computing device 1100 with a processor 1105 may be used to encode a digital image or a plurality of digital images and to perform color-based encoding operations, according to embodiments of the invention.
  • Each of the operation and/or processes described in embodiments of the invention may be performed by one or more systems or elements of Fig. 11.
  • Computing device 1100 may include a processor 1105 that may be, for example, a central processing unit (CPU), a chip or any suitable computing or computational device, an operating system 1115, a memory 1120, a storage 1130, input devices 1135, a graphics processing unit (GPU) 1180 and output devices 1140.
  • Processor 1105 may be or include one or more processors, etc., co-located or distributed.
  • Computing device 1100 may be for example a smart device, a smartphone, workstation or a personal computer, a laptop, or may be at least partially implemented by one or more remote servers (e.g., in the “cloud”).
  • Operating system 1115 may be or may include any code segment designed and/or configured to perform tasks involving coordination, scheduling, arbitration, supervising, controlling or otherwise managing operation of computing device 1100, for example. Operating system 1115 may be a commercial operating system.
  • Memory 1120 may be or may include, for example, a Random Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non- volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units. Memory 1120 may be or may include a plurality of possibly different memory units.
  • Executable code 1125 may be any executable code, e.g., an application, a program, a process, task or script. Executable code 1125 may be executed by processor 1105 possibly under control of operating system 1115. For example, executable code 1125 may be or include code for encoding one or more digital images, according to embodiments of the invention.
  • Storage 1130 may be or may include, for example, a hard disk drive, a floppy disk drive, a Compact Disk (CD) drive, a CD-Recordable (CD-R) drive, a universal serial bus (USB) device or other suitable removable and/or fixed storage unit. In some embodiments, some of the components shown in Fig. 11 may be omitted.
  • memory 1120 may be a non-volatile memory having the storage capacity of storage 1130. Accordingly, although shown as a separate component, storage 1130 may be embedded or included in memory 1120.
  • Storage 1130 and/or memory 1120 may be configured to store an electronic or digital image gallery including image files 732, including a digital image 733 and metadata 734, and any other parameters required for performing embodiments of the invention.
  • Input devices 1135 may be or may include a camera, a mouse, a keyboard, a touch screen or pad or any suitable input device. It will be recognized that any suitable number of input devices may be operatively connected to computing device 1100 as shown by block 1135.
  • Output devices 1140 may include one or more displays, speakers and/or any other suitable output devices. It will be recognized that any suitable number of output devices may be operatively connected to computing device 1100 as shown by block 1140.
  • Any applicable input/output (I/O) devices may be connected to computing device 1100 as shown by blocks 1135 and 1140. For example, a wired or wireless network interface card (NIC), a modem, printer or facsimile machine, a universal serial bus (USB) device or external hard drive may be included in input devices 1135 and/or output devices 1140.
  • Network interface 1150 may enable device 1100 to communicate with one or more other computers or networks.
  • network interface 1150 may include a Wi-Fi or Bluetooth device or connection, a connection to an intranet or the internet, an antenna etc.
  • GPU 1180 may enable computer graphics manipulations, image processing, and/or acceleration of real-time graphics applications.
  • GPU 1180 may be implemented as an integrated or discrete unit and may be used in embodiments of the invention, e.g., by host device 120 and/or remote services 110 of Fig. 1.
  • Embodiments described in this disclosure may include the use of a special purpose or general-purpose computer including various computer hardware or software modules, as discussed in greater detail below.
  • Embodiments within the scope of this disclosure also include computer-readable media, or non- transitory computer storage medium, for carrying or having computer-executable instructions or data structures stored thereon.
  • the instructions when executed may cause the processor to carry out embodiments of the invention.
  • Such computer-readable media, or computer storage medium can be any available media that can be accessed by a general purpose or special purpose computer.
  • Such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
  • Computer-executable instructions comprise, for example, instructions and data which cause a general-purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
  • module can refer to software objects or routines that execute on the computing system.
  • the different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads). While the system and methods described herein are preferably implemented in software, implementations in hardware or a combination of software and hardware are also possible and contemplated.
  • a “computer” may be any computing system as previously defined herein, or any module or combination of modules running on a computing system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A system and method for encoding and decoding a digital image may include scanning one or more image frames from video data that includes a plurality of image frames, creating a color scale to represent an indexed value of pixel colors based on frequency of pixel colors in an image frame, wherein a size of the indexed value is dynamically determined based on a number of colors in the scale, and encoding each of the plurality of image frames based on the color scale.

Description

SYSTEM AND METHOD FOR DYNAMIC VIDEO COMPRESSION
FIELD OF THE INVENTION
The present invention relates generally to video compression, and specifically to dynamic encoding of digital images for video streaming.
BACKGROUND
Streaming media technology refers to the process of delivering media to an end-user. An end-user may wish to play digital video or digital audio content before transmission of an entire file of content, as opposed to a downloading action that requires a complete transmission of a file before playing. Streaming media techniques were only made possible with advances in data compression, due to high bandwidth requirements of uncompressed media. The use of streaming technologies, and specifically online video streaming technologies, is prevalent, and fast, high-quality, and reliable streaming is strongly required. Commonly used digital video streaming methods require a bandwidth that a common household or home wireless network cannot support. For example, raw digital video requires a bandwidth of 168 Mbit/s for standard definition (SD) video and over 1000 Mbit/s for full high definition (FHD) video.
A variety of compression techniques which enable practical streaming media are based on the discrete cosine transform (DCT), which formed the basis for the first practical video coding format, H.261, and other video coding standards such as MPEG and MPEG-4 (MP4). MP4 is commonly used for defining compression of audio and video digital data. MP4 may be used for compression of audio and video data for web applications (e.g., streaming media), voice (e.g., telephone and videophone applications), and broadcast television applications. There is a need for an improved compression technique that may lower bandwidth requirements and improve reliability and speed in comparison to common compression techniques. In addition, most compression algorithms employ a constant algorithm for video compression. Due to differences in video content, using a single compression method for different types of video data may not give optimal results for all types of video data. A compression method which uses optimal compression algorithms for every type of video content is strongly required.
SUMMARY
According to embodiments of the invention, a system and a method for color-based encoding of image frames in a video stream may include scanning one or more image frames from video data, where the video data may include a plurality of image frames, creating a color scale to represent an indexed value of pixel colors based on a frequency of pixel colors in an image frame, and encoding each of the plurality of image frames based on the color scale. In some embodiments of the invention, a size of the indexed value may be dynamically determined based on a number of colors in the scale. According to embodiments of the invention, the color scale size may be dynamically determined based on a number of colors in the image frame.
According to embodiments of the invention, a size of an encoded image frame corresponds to the number of colors in the image.
According to embodiments of the invention, scanning the one or more image frames may include determining division parameters of an image.
According to embodiments of the invention, scanning the one or more image frames may include encoding the one or more of image frames according to a plurality of encoding methods and selecting an encoding method from the plurality of encoding methods according to results of the encoding. According to embodiments of the invention, selecting an encoding method from the plurality of encoding methods may include selecting an encoding method based on a size of one or more encoded image frames.
According to embodiments of the invention, creating a color scale may include identifying a frequency of sets of red, green, blue (RGB) values of a pixel and updating the color scale when a new set of RGB values of a pixel is detected.
According to embodiments of the invention, encoding each of the plurality of image frames based on the color scale may include determining a number of data bytes to represent the colors in the encoded image.
In some embodiments of the invention, determining a number of data bytes to represent the colors in the encoded image may include using a direct representation of colors in the encoded image if the number of colors is below a predefined threshold, and using a dynamic representation of colors in the encoded image if the number of colors is above the predefined threshold.
According to embodiments of the invention, transmitting the compressed video data may include compressing and decompressing the video data based on the division parameters. In some embodiments of the invention, compressing the video data may include dividing each of the plurality of image frames into a plurality of sub-images, detecting a sub-image that includes changes in comparison to a previous sub-image, and transmitting the sub-image that includes changes. According to embodiments of the invention, transmitting the compressed video data may include alternately transmitting unique communication commands.
According to embodiments of the invention, the encoded image frame is decoded.
According to some embodiments, decoding includes reversing one or more steps used in encoding the image frame.
BRIEF DESCRIPTION OF THE DRAWINGS
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. Embodiments of the invention, however, both as to organization and method of operation, together with objects, features and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanied drawings. Embodiments of the invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like reference numerals indicate corresponding, analogous or similar elements, and in which:
Fig. 1 is a schematic illustration of a system in accordance with embodiments of the invention;
Fig. 2 is a flowchart of a method for image color-based encoding and video compression, according to embodiments of the invention;
Fig. 3 illustrates a process for selecting image encoding method, according to embodiments of the invention;
Fig. 4 is a flowchart of a method for color scale creation for color-based encoding, according to embodiments of the invention;
Fig. 5 is a flowchart of a method for color scale encoding, according to embodiments of the invention;
Fig. 6 is a flowchart of a method for color-based encoding, according to embodiments of the invention;
Fig. 7 is a flowchart of compression parameters determination for video compression, according to embodiments of the invention;
Fig. 8 is a flowchart of a method for video compression, according to embodiments of the invention;
Fig. 9 is a schematic illustration of a video stream data structure, according to embodiments of the invention;
Figs. 10A, 10B and 10C depict data message forms according to embodiments of the invention; and
Fig. 11 illustrates an exemplary computing device according to an embodiment of the invention.
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
DETAILED DESCRIPTION
In the following description, various aspects of the present invention will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention may be practiced without the specific details presented herein. Furthermore, well known features may be omitted or simplified in order not to obscure the present invention. Although some embodiments of the invention are not limited in this regard, discussions utilizing terms such as, for example, "processing," "computing," "calculating," "determining," "establishing", "analyzing", "checking", or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories, or into another transitory or non-transitory information or processor-readable storage medium that may store instructions which, when executed by the processor, cause the processor to execute operations and/or processes. Although embodiments of the invention are not limited in this regard, the terms "plurality" and "a plurality" as used herein may include, for example, "multiple" or "two or more". The terms "plurality" or "a plurality" may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like. The term "set" when used herein may include one or more items unless otherwise stated. Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence.
Additionally, some of the described method embodiments or elements thereof can occur or be performed in a different order from that described, simultaneously, at the same point in time, or concurrently.
Embodiments of the invention may include image data encoding and video compression algorithms capable of transferring a video stream using a variety of compression techniques determined at runtime according to the structure of the data. It will be understood that the decompression process may be achieved by following the same compression chain steps in reverse order. Dynamic compression according to embodiments of the invention may lower bandwidth transmission requirements of any type of given video stream in comparison to methods known in the art. This may allow seamless video transmission to a plurality of end target platforms as well as any other platform or system. According to embodiments of the invention, a video stream may contain, in addition to the compressed video data, a dedicated and/or optional command data transfer channel that may act as a controller and may allow controlling devices (such as, for example, smart televisions and live gaming consoles) with minimal bandwidth requirements.
Fig. 1 is a schematic illustration of a system in accordance with embodiments of the invention. System 100 may be configured to encode image data frames, compress a video stream and transmit a compressed video stream across one or more communication networks 150, where it may be decompressed (which may include decoding) at an end device, e.g., a target device 130. The video data stream may be transmitted among a host device 120, remote services 110 and a target device 130. Data played on host device 120 may be efficiently transmitted and displayed on a screen 140 connected to target device 130 according to embodiments of the invention. For example, a game running on host device 120, e.g., a mobile device, or a movie streamed from remote services 110 may be displayed to a user on screen 140, e.g., a screen of a smart television.
Host device 120 may be any computerized end-device such as a mobile device, for example, a laptop, a smartphone, a workstation, a personal computer, a game console, an iPad or any other computerized end-device. Host device 120 may include a processor 121, a controller 122 and a memory 123. Other modules, elements or units may be included in host device 120. Target device 130 may be any computerized device that may receive data transmission and may allow displaying data on screen 140. Target device 130 may include or may be connected to screen or monitor 140 via a wireless or wired connection 135. In some embodiments of the invention, screen 140 may be connected to target device 130, while in other embodiments of the invention, screen 140 may be embedded or included in target device 130.
Remote services 110 may include one or more services separated from host device 120 and/or target device 130, which may connect to host device 120 and/or target device 130 by one or more communication networks 150. Remote services 110 may include, for example, a remote processor 111, a gateway server 112 and a database 113. Any other device, service or platform may be included in remote services 110. Any of the components included in system 100 may be the example computer system shown in Fig. 11 and any of the operations described with relation to system 100 may be performed, for example, by the example computer system shown in Fig. 11.
Communication network 150 may include one or more communication networks and may connect remote services 110, host device 120 and target device 130. Communication network 150 may include, for example, a local area communication network (LAN) that interconnects computers within a limited area, or a wide area communication network (WAN) that covers a larger geographic distance. For example, communication network 150 may include one or more wired or wireless communication networks or communication technologies such as, for example, the Internet, Wi-Fi, Ethernet, or any other communication method known in the art.
In some embodiments of the invention, streaming video data may be rendered by host device 120, e.g., a mobile smartphone. For example, processor 121 and controller 122 of host device 120 may execute a computer game and may transmit video data via communication network 150 to target device 130, e.g., a smart television. Host device 120 may communicate by controller 122 with remote services 110 via communication network 150. For example, when a multiplayer online game is rendered in host device 120, the game host may run in remote processor 111. The online game may run, e.g., be executed by controller 122, in host device 120 and a request for communication may be transmitted from host device 120 to gateway server 112. Game data and user account data may be transmitted, transferred, or streamed between host device 120 and remote services 110, e.g., from host device 120 to remote services 110 and from remote services 110 to host device 120. User account data may include information of host device 120 and/or a user of host device 120. The user account data may be transmitted to and received from database 113 via gateway server 112. Game data may include data or information on other players participating in the online game. Game data may be transmitted to and received from remote processor 111 via gateway server 112. Host device 120 may run or execute the executable code of the online game, and the control commands of the online game and the video stream related to the online game may be streamed from host device 120 to target device 130.
In some embodiments of the invention, streaming video data may be rendered by remote services 110 e.g., remote processor 111. For example, for an online game which requires high processing power, host device 120 may not be sufficient to run the online game. Remote processor 111 may execute the online game and may transmit video data via communication network 150 directly to target device 130 and not via host device 120. Host device 120 may communicate, e.g. using controller 122, with remote services 110 via communication network 150. Game data and user account data may be transmitted from host device 120 to remote services 110, e.g., to gateway server 112 and from remote services 110 to host device 120. The user account data may be transmitted to and received from database 113 via gateway server 112. Game data may be transmitted to and received from remote processor 111 via gateway server 112 to/from host device 120. Remote processor 111 may run the executable code of the online game and the control commands of the online game, and the video stream related to the online game may be streamed from remote processor 111 directly to target device 130. Video data may be compressed and streamed along with dedicated command data transfer channel that may allow controlling target device 130, for example, with minimal bandwidth requirements.
Embodiments of the invention may combine two or more independent data streams which may include, for example, game data, video data, user account data, commands and the like. Each of the combined data streams may be used independently with no loss in throughput. For example, a first data stream may include a compressed video stream and a second data stream may include two directional control data for one or more types of applications, e.g., games, smart TVs, command communication, and the like.
Fig. 2 is a flowchart of a method for image color-based encoding and video compression, according to embodiments of the invention. Embodiments of a method for color-based encoding and video compression may be performed, for example, by the system shown in Fig. 11. As with other method flowcharts, operations of Fig. 2 may be used with operations in other flowcharts shown herein in some embodiments of the invention. Embodiments of the invention may implement two or more innovative compression methods such as: a) image compression, also referred to herein as “image color-based encoding” or “image encoding” and b) video compression.
In operation 210, video data may be obtained or received. Video data may include a plurality of digital images, frames, or image frames, also referred to herein as “full images”. Each of the full images may be divided into a plurality of sub-frames or sub-blocks, also referred to herein as “image blocks” or “frame blocks”. An image block or frame block is a sub-block of a full image or frame, e.g., a full image may be divided into one or more image blocks that together complete a full image or a frame. Each of the image blocks may be divided into image packets. According to embodiments of the invention, obtaining video data may include obtaining a plurality of images or frames. Operations may be performed on full images, image blocks and/or image packets. For example, encoding, compression and transmission may include performing digital operations on any size of data, e.g., by one or more systems or elements of Fig. 11.
Reference is made to Fig. 9, which is a schematic illustration of a video stream data structure, according to embodiments of the invention. A video stream may include a plurality of full images 900. Each of the full images may include a plurality of image blocks 910. Each image block 910 may be a part of a full image 900. Each of image blocks 910 may include a plurality of image packets 920. It should be understood by a person skilled in the art that the division of a full image into sub-images or image blocks 910 may be performed according to the application and/or communication link and/or content consideration and/or any other consideration or reason. Operations, processes and methods described in embodiments of the invention may be performed on full images, image blocks and/or image packets. For example, encoding operations or streaming data described in embodiments of the invention may be performed on full images, image blocks and/or image packets. An exemplary image block may have a size of 4.28 gigabytes and may include 65,535 image packets. Other image block and image packet sizes may be used.
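The full image / image block / image packet hierarchy of Fig. 9 may be sketched, for illustration only, as simple container types; the class names (`FullImage`, `ImageBlock`, `ImagePacket`) and the byte-string payload are assumptions not taken from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImagePacket:
    payload: bytes  # encoded pixel data carried by one packet (920)

@dataclass
class ImageBlock:
    # an image block (910) is a sub-block of a full image, made of packets
    packets: List[ImagePacket] = field(default_factory=list)

@dataclass
class FullImage:
    # a full image (900) is completed by its image blocks together
    blocks: List[ImageBlock] = field(default_factory=list)

def reassemble(image: FullImage) -> bytes:
    # A full image's encoded data is recovered by concatenating its
    # blocks' packets in order.
    return b"".join(p.payload for b in image.blocks for p in b.packets)
```

Such a layering allows encoding, compression and transmission operations to target whichever granularity (full image, block, or packet) suits the communication link.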
Reference is made back to Fig. 2. According to embodiments of the invention, two or more encoding methods may be examined, assessed, tested and/or analyzed in order to select an encoding method which may reduce data usage, processing time, transfer bandwidth and/or storage usage.
In operation 220, a preliminary scan may be performed to decide or determine which encoding method may be selected from a plurality of encoding methods. One or more image frames or image blocks may be scanned in order to select an image encoding method and to determine division parameters of the one or more images for the video compression. The preliminary scan, also referred to herein as “scan step” or “preliminary scan step” may allow selecting an encoding method which may result in the highest compression ratio of the encoded full image. Each frame may be encoded by two or more encoding methods and the size of the encoded images may be saved for comparison purposes.
According to embodiments of the invention, scanning the one or more image frames or image blocks may include encoding one or more image frames or image blocks according to a plurality of encoding methods and selecting an encoding method from the plurality of encoding methods according to results of the encoding. During a scan step, a preferred encoding method may be selected from a plurality of encoding methods. Scanning one or more image frames or image blocks may be performed and may include saving a size of each of the encoded image frames or image blocks. Selecting an encoding method may be performed based on the size of encoded image frames or image blocks.
In order to match an optimal encoding method to the content of the encoded image frames or image blocks, a plurality of encoding methods may be tested, examined and/or analyzed, and the method that yields optimal results, e.g., minimal storage usage and/or minimal transfer bandwidth, may be selected during the preliminary scan. The preliminary scan step may include selection of an optimal encoding method which may allow maximum compression for any type of video and/or gaming content by choosing at runtime an optimal encoding method and by determining or defining division parameters to be used during video compression. Further processes, steps, operations and/or actions performed during the preliminary scan in operation 220 of Fig. 2 are described with reference to Fig. 3. If a color-based encoding method is selected, a color scale may be created as described in operation 230. If any other encoding method is selected, the color scale creation process may be skipped, as shown by arrow 225, and the encoding process may start immediately after operation 220.
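The preliminary scan's selection rule — encode one or more frames with every candidate method and keep the method producing the smallest output — might be sketched as follows; the `encoders` mapping and its callable interface are illustrative assumptions, since the disclosure leaves the encoder API unspecified:

```python
def select_encoding_method(frame, encoders):
    # Encode the frame with every candidate method, then keep the name
    # of the method whose encoded output is smallest, mirroring the
    # size comparison performed during the preliminary scan.
    results = {name: encode(frame) for name, encode in encoders.items()}
    best = min(results, key=lambda name: len(results[name]))
    return best, results[best]
```

In practice the sizes (and other statistics such as encoding time) would be accumulated over several sample frames before the comparison, as described above.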
In operation 230, a color scale may be created as an initial step of color-based encoding. A color scale may be created to represent the frequency of pixel colors in an image frame or image block. The color scale may represent an indexed value of pixel colors based on frequency of pixel colors in an image frame. The color scale size may be dynamically determined based on a number of colors in the image frame or image block. Creating a color scale may include identifying the frequency of sets of RGB values of a pixel and updating the color scale when a new set of RGB values of a pixel is detected. Color-based encoding may include applying color indexing to the input image data based on a color scale, which may be implemented by any other format or arrangement of data which may allow remapping, translating, or assigning colors from one color space to colors from another. For example, a color lookup table (CLUT) may be used. A CLUT is a correspondence table in which all colors from a certain color space are assigned to an index, by which they may be referenced. Referencing colors via or by an index may require less information or data to represent the actual colors.
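One possible sketch of the color scale creation described above; the function name and the detail of assigning the smallest indices to the most frequent colors are illustrative assumptions:

```python
from collections import Counter

def build_color_scale(pixels):
    # Count how often each (R, G, B) triple occurs in the frame, then
    # assign index 0 to the most frequent color, index 1 to the next,
    # and so on; a new color simply extends the scale.
    freq = Counter(pixels)
    return {color: idx for idx, (color, _) in enumerate(freq.most_common())}
```

The resulting dictionary plays the role of the CLUT: the scale's size equals the number of distinct colors actually present, so frames with few colors yield small scales.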
Further processes, steps, operations and/or actions performed during color scale creation in operation 230 of Fig. 2 are described with reference to Fig. 4.
In operation 240, each of the plurality of image frames or image blocks may be encoded based on the color scale. Encoding each of the plurality of image frames or image blocks based on the color scale may include determining a number of data bytes to represent the colors in the encoded image. A size of the encoded image frame or image block may correspond to the number of colors in the image. Determining a number of data bytes may include using a direct representation of colors in the encoded image if the number of colors is below a predefined threshold and using a dynamic representation of colors in the encoded image if the number of colors is above the predefined threshold. Embodiments of the invention which relate to operation 240, including further processes, steps, operations and/or actions performed during encoding of image frames or image blocks, are described with reference to Fig. 5 and Fig. 6.
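A minimal sketch of the byte-count determination and per-pixel encoding described above, assuming a one-byte direct index below an illustrative 256-color threshold and an index width that grows with the palette above it (the threshold value and function names are assumptions, not values from the disclosure):

```python
import math

def index_width_bytes(n_colors, direct_threshold=256):
    # Below the threshold, a fixed one-byte "direct" index suffices;
    # above it, the width is derived dynamically from the palette size.
    if n_colors <= direct_threshold:
        return 1
    return math.ceil(math.ceil(math.log2(n_colors)) / 8)

def encode_frame(pixels, scale):
    # Replace each pixel's RGB triple by its index in the color scale,
    # packed into the number of bytes the scale size requires.
    width = index_width_bytes(len(scale))
    return b"".join(scale[p].to_bytes(width, "big") for p in pixels)
```

Because the index width is derived from the scale, the encoded frame size tracks the number of distinct colors in the image, as stated above.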
In operation 250, video data, which includes the plurality of encoded image frames or image blocks, may be compressed based on the division parameters and transmitted to a dedicated target, e.g., target device 130 of Fig. 1. According to embodiments of the invention, compressing the video data may include dividing each of the plurality of image frames or image blocks into a plurality of sub-frames, detecting a sub-frame that includes changes in comparison to a previous sub-frame, and transmitting the sub-frame that includes changes, as indicated in operation 260.
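The change-detection step of operation 250 might be sketched as follows, under the assumption that each frame's sub-frames are supplied as comparable byte strings in a fixed position order:

```python
def changed_subframes(prev_subs, curr_subs):
    # Compare each sub-frame with the sub-frame at the same position in
    # the previous frame; only the positions that differ need to be
    # transmitted, reducing bandwidth for largely static frames.
    return [(i, curr)
            for i, (prev, curr) in enumerate(zip(prev_subs, curr_subs))
            if prev != curr]
```

The receiver would patch the previously displayed frame at the reported positions, reconstructing the full image without retransmitting unchanged regions.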
In operation 260, the compressed video data may be transmitted. According to embodiments of the invention, transmitting the compressed video data may include transmitting unique communication commands within the compressed video data or alternately with the compressed video data. The unique communication commands may include a plurality of unique communication messages.
According to embodiments of the invention, to reduce data usage, a data stream used for compressed video data may contain a plurality of types of messages. For example, the data stream may include data messages and command messages. Embedding both data and commands within a single data stream may be useful for interactive applications such as video games, online games, and the like. According to some embodiments of the invention, the plurality of data streams may be treated as independent streams even though all types of messages may be sent on the same communication data stream.
Embodiments of the invention which relate to operations 250 and 260 including further processes, steps, operations and/or actions performed during compression and transmission of video data are described with reference to Fig. 8.
Reference is made to Fig. 3, which illustrates a process for selecting an image encoding method, according to embodiments of the invention. The flowchart of Fig. 3 shows in detail the processes, steps, operations and/or actions performed in operation 220 of Fig. 2.
Due to differences in video content, there is no one compression method that can give optimal results for all types of video, image frame or image block content. During the preliminary scan step period, one or more image frames, e.g., a predetermined number of image frames or image blocks, may be encoded according to a plurality of encoding methods including color-based encoding, and a selected encoding method may be determined according to, or based on, the results of the encoding.
In operation 310, image data may be obtained. In some embodiments, the operation may be performed when a complete full image is generated by a host device or a remote device, e.g., by a graphic card included in host device 120 of Fig. 1 or by a graphic card included in remote processor 111 of Fig. 1. A full image may be obtained by host device 120 and may be transferred from a graphic card included in host device 120 to memory 123 of host device 120. In operation 320, color-based encoding may be applied on image data, e.g., on one or more full images. The color-based encoding method may be applied, employed, or executed as described in accordance with embodiments of the invention. The one or more full images that may be scanned during the preliminary scan process may be a predetermined number of full images, a semi-random number of images, or a number of full images that may be dynamically changed during the scan process. For example, the color-based encoding method may be applied to a first full image, and additional images may be encoded if required.
In operation 330, color-based encoding data or results may be saved, for example in a dedicated analysis file, register, or any applicable data storage element. A size of the encoded image frame or image block may be saved and, in addition, the encoded image itself may be saved. Other information or data related to the color-based encoding of the one or more sample full images may be saved in an analysis file. The analysis file may include encoded image size, performance parameters such as, encoding time, file size, maximal and/or minimal compression records and the like. In some embodiments of the invention, one or more encoded full images may be saved.
In operation 340, one or more other encoding methods, different from the color-based encoding may be performed on image data, e.g., on one or more full images. Exemplary encoding methods may include lossy or lossless compression method such as Deflate, Discrete Cosine Transform (DCT), Joint Photographic Experts Group (JPEG), wavelet compression, fractal compression, run-length encoding, area image compression, adaptive dictionary algorithms, predictive coding, or any other encoding method and/or algorithm.
In operation 350, encoding data or results of any encoding method, other than the color-based method, may be saved, for example in a dedicated analysis file. An analysis file may be generated for each encoding method used in the preliminary scan step. In addition, the one or more encoded image frames or image blocks, and a size of each encoded image frame or image block, may be saved. In some embodiments, the data or information collected for each of the encoding methods may be saved in a single file or database, or in any manner which may allow a comparison between the performance of different encoding methods.
If there is a need for additional information in order to analyze the different encoding methods, operations 310-350 may be repeated as indicated by arrow 355.
In operation 360, an encoding method may be selected. An encoding method may be selected from the plurality of encoding methods based on results of the encoding process in the preliminary scan. Selecting an encoding method may be performed based on the size of encoded image frames or encoded image blocks in each encoding method. In some embodiments of the invention, the scan process may be employed according to a predefined definition, e.g., of a system operator, a user, or as a default process. During the preliminary scan, encoding by one or more encoding methods may be performed on all pixels in one or more sample images, screenshots and/or full frames. Embodiments of the invention may allow selecting the encoding method based on the compressed data size and/or compression time of one or more encoded image frames or image blocks. For example, an encoding method having the highest compression ratio of the sample image may be selected. When the preliminary scan is completed, the final values saved (e.g., by dedicated counters) may be compared to decide on the optimal image encoding method to apply while encoding frames in the video stream.
In operation 370, compression parameters may be determined based on the encoding process during the preliminary scan. While encoding the sample images, data related to video compression may be stored and analyzed. Parameters related to optimal division of an image during video compression may be determined, selected, or defined.
Embodiments of the invention make use of the fact that, commonly, a large number of pixels do not change between two consecutive images of the same video stream. In order to reduce the data transmission bandwidth, embodiments of the invention may divide each image frame into a plurality of smaller images, image blocks, sub-images or parts of the image.
According to embodiments of the invention, during the preliminary scan, a plurality of testing calculations may be performed on the image data, e.g., on screenshots, to determine the division parameters for the video compression. For example, calculations may be performed to determine the width and height of the sub-images or image parts. The dimensions of the sub-images defined by the selected division parameters may directly impact the bandwidth required for transferring the video stream and may be used to improve performance of video data transferring or streaming.
Reference is made to Fig. 7, which is a flowchart of compression parameters determination for video compression, according to embodiments of the invention. The flowchart of Fig. 7 shows in detail the processes, steps, operations and/or actions performed in operation 370 of Fig. 3.
In operation 710, division parameters may be set. Setting the division parameters may include defining preliminary, initial, opening or first parameters which may be checked or tested in order to find or determine the preferred division parameters which may allow optimal division of a full image, an image frame, or an image block for the highest compression ratio in minimal compression time. For example, a full image may be divided into a plurality of sub-images or partial images. An initial set of two parameters may be defined or set, e.g., parameter “X” may define a number of horizontal division lines and parameter “Y” may define a number of vertical division lines. An exemplary initial value of “1” may be set for both the “X” and “Y” parameters. Other parameters and initial values may be set or defined.
In operation 720, a full image, an image frame or an image block may be compressed using the division parameters. For example, if the “X” and “Y” parameters are set to “1”, a full image or an image block may be divided into 4 sub-images which may be encoded and compressed according to encoding and compression methods described in embodiments of the invention.
In operation 730, the compression results may be saved, for example, in a dedicated analysis file. For example, the compression results may include the size of each of the compressed sub-images and the compression time. Other data, information or statistics may be saved in a dedicated analysis file.
In operation 740, a check is performed regarding additional division parameters to be tested. If additional division parameters are required, operations 710-740 may be repeated until all division parameters are tested, as indicated by arrow 745. For example, the “X” and “Y” parameters may be defined as loop parameters which are increased each time by a predefined value, and operations 710-740 may be repeated until all combinations of values of the “X” and “Y” parameters have been checked. If there are no additional division parameters to be checked, the preferred division parameters may be selected as indicated in operation 750. The selection may be performed based on the results of all previous compression runs with all the division parameters that have been checked. For example, by performing a binary search on all compression results, a local minimum for the size of a single frame according to the width and height of the sub-images may be determined. Based on the detected local minimum, the division parameters, e.g., width and height of the sub-images, may be determined.
In operation 760, the preferred division parameters which may give optimal compression results may be saved. The division parameter values that may obtain the best ratio of compression time and compressed data size may be selected. Any other optimality criteria may be defined and used for the selection process. Embodiments of the invention may store all relevant data to perform optimal video compression of the transmitted data while the algorithm is running.
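The loop of operations 710-760 can be illustrated roughly as a search over division-line counts, compressing each resulting sub-image and keeping the grid that gives the smallest total size. All names and the use of `zlib` as the per-sub-image compressor are assumptions for illustration; this sketch also simply truncates pixels that do not divide evenly:

```python
# Illustrative sketch of the division-parameter search (operations 710-760).
import zlib

def divide(frame: bytes, width: int, height: int, x_lines: int, y_lines: int):
    """Split a flat frame (width*height, 1 byte per pixel) into a grid of
    sub-images; X horizontal and Y vertical division lines give
    (x_lines + 1) * (y_lines + 1) sub-images."""
    cols, rows = x_lines + 1, y_lines + 1
    sw, sh = width // cols, height // rows  # truncates any remainder pixels
    subs = []
    for r in range(rows):
        for c in range(cols):
            block = bytearray()
            for row in range(r * sh, (r + 1) * sh):
                block += frame[row * width + c * sw : row * width + (c + 1) * sw]
            subs.append(bytes(block))
    return subs

def best_division(frame: bytes, width: int, height: int, max_lines: int = 3):
    """Try every (X, Y) combination and return the one whose compressed
    sub-images have the smallest total size (operation 750)."""
    results = {}
    for x in range(1, max_lines + 1):
        for y in range(1, max_lines + 1):
            total = sum(len(zlib.compress(s))
                        for s in divide(frame, width, height, x, y))
            results[(x, y)] = total
    return min(results, key=results.get)
```

In a fuller implementation the loop would also record compression time per grid, so the selection of operation 760 could weigh time against size as described above.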
Reference is made to Fig. 4, which is a flowchart of a method for color scale creation for color-based encoding, according to embodiments of the invention. The flowchart of Fig. 4 shows in detail the processes, steps, operations and/or actions performed in operation 230 of Fig. 2. The color scale creation process may be performed the first time the color-based encoding is performed, e.g., during the preliminary scan process or anytime during the encoding process in run-time when a first full image is scanned. In operation 410, a color scale may be initiated, established, or formed. For example, an empty list which may be suitable to save or store color values of pixels may be formed and values of common colors, e.g., black and white, may be added to the color scale. In some embodiments the color values of pixels may be a red, green, blue (RGB) representation of the pixels. Embodiments of the invention are described with relation to use of the RGB representation. It should be understood by a person skilled in the art that any other color value representation may be used.
In operation 420, color values of a next pixel may be detected, e.g., RGB values of the next detected pixel in the first image or in a detected sample image.
In operation 430, it may be checked if the color values of the detected pixel may be included in the color scale, e.g., the RGB values of the detected pixel. If the color values of the detected pixel are included in the color scale, a value or counter which counts the frequency of occurrences of that color value may be incremented, as indicated in operation 440. If the color values of the detected pixel are not included in the color scale, the color values of the detected pixel may be added to the color scale, as indicated in operation 450.
In operation 460, it may be checked if all pixels of the image were scanned. If not all the pixels of the image were scanned, as indicated by arrow 465, operation 420 may be repeated and a next pixel in the image may be detected as described in operation 420.
According to embodiments of the invention, a color scale may be a table which may include all colors, e.g., RGB values of each color in an image frame, image block or video stream, and an index or a value assigned to each color by which it can be referenced or encoded. The color scale may further include, and/or may be arranged according to, a respective frequency of each color in the image frame, image block or video stream. A color scale may be created to represent an indexed value of pixel colors based on a frequency of pixel colors in an image frame, image block or video stream such that a size of the indexed value may be dynamically determined based on a number of colors in the scale. According to embodiments of the invention, the values assigned to each color in the scale may be determined according to the frequency of each color in the scale. For example, the scale may include a list of colors organized, ordered, or arranged such that the most frequent color, e.g., a color which has the largest number of appearances in an image or stream, may be first, while the least frequent color, e.g., a color which has the smallest number of appearances in an image, may be last. According to embodiments of the invention, a color scale may include a plurality of information units, e.g., a plurality of bytes, which may be used to represent the colors of pixels in a scanned image frame and an indexed value assigned to each of the pixel colors. For example, each pixel of a scanned image frame may have an RGB value, also referred to herein as an RGB tuple. An exemplary color scale may include 3 data bytes to represent each RGB tuple of each of the pixels in the image frame and one or two data bytes to represent the indexed value assigned to each of the pixel colors.
For example, a black color, for which the RGB decimal value representation is (0,0,0), may be represented by three bytes while its referencing, representation, or indexed value in the scale may be dynamically determined to be between one and two bytes based on a number of colors in the scale, which is related to the number of colors in the image frame or the plurality of image frames scanned to create the scale. In some embodiments of the invention, each RGB tuple may be saved using a plurality of consecutive bytes with no unused bytes of data in between. A color scale may further include a plurality of information units which may contain a unique combination to represent that this is a color scale, e.g., an identification of the color scale as a color scale.
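A minimal sketch of the scan-and-count loop of operations 410-460, combined with the frequency ordering described above. Function names are illustrative and pixels are assumed to be RGB tuples:

```python
# Illustrative sketch: build a color scale by counting color frequencies
# over an image's pixels, then order it most-frequent-first so the most
# common color receives the smallest index.
from collections import Counter

def build_color_scale(pixels):
    """pixels: iterable of (R, G, B) tuples. Returns the colors ordered by
    descending frequency; the most frequent color gets index 0."""
    counts = Counter(pixels)  # one counter per distinct color (operation 440)
    return [color for color, _ in counts.most_common()]

scale = build_color_scale([(0, 0, 0), (255, 255, 255), (0, 0, 0)])
# (0, 0, 0) appears twice, so it comes first in the scale
```

The position of a color in this list then serves as its indexed value, whose encoded size is determined dynamically as described with reference to Fig. 5.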
If all the pixels of the image were scanned, their color values added to the color scale and the frequency of appearances updated, encoding of the color scale may be performed as indicated in operation 470 and as further described with reference to Fig. 5.
Embodiments of the invention describe a first encoding process which relates to encoding during creation of a color scale. This encoding process may include encoding color information using fewer bits than the original representation, for example, as described in Fig. 5. A second encoding process described in embodiments of the invention relates to encoding of images in a video stream. During encoding of the images in the video stream the color scale is used, for example, as described in operation 240 of Fig. 2 and as described in Fig. 6.
Reference is made to Fig. 5, which is a flowchart of a method for color scale encoding, according to embodiments of the invention. The flowchart of Fig. 5 shows in detail the processes, steps, operations and/or actions performed in operation 240 of Fig. 2, and/or operation 470 of Fig. 4.
In operation 510, the values in the color scale may be organized according to their frequency. For example, the list of colors may be organized, ordered, or arranged such that the most frequent color, e.g., a color which has the largest number of appearances in an image, may be first, while the least frequent color, e.g., a color which has the smallest number of appearances in an image, may be last. In operation 520, it is checked if the number of colors in the color scale is greater than a predetermined first threshold or level, e.g., above a predefined first threshold value. For example, it may be determined whether there are more than 255 different colors in the color scale. If the number of colors in the color scale is not greater than the predetermined first threshold, the color scale may be encoded by a direct representation, as indicated in operation 530.
A direct representation encoding may include encoding of the plurality of colors in the color scale by a predefined number of information units which may contain a unique combination to represent each of the plurality of colors in the color scale. For example, if there are no more than 255 different colors in the color scale, each color may be encoded by a unique one-byte representation. According to embodiments of the invention, each of the plurality of colors in the color scale may receive a representation related to the frequency of its appearances in the image and/or in the color scale. For example, the most frequent color in the image may receive a minimal representation, e.g., a one-byte representation of eight “zeros”. The least frequent color in the image may receive a maximal representation, e.g., a one-byte representation of eight “ones”.
If the number of colors in the color scale is greater than a predetermined first threshold, or above a predefined first threshold value, the color scale may be encoded by a dynamic representation, as indicated in block 540 which includes operations 550-580.
In operation 550, it is checked if the number of colors in the color scale is greater than a predetermined second threshold or level, e.g., above a predefined second threshold value. For example, it is determined whether there are more than 32,894 different colors in the color scale.
If the number of colors in the color scale is greater than the predetermined second threshold, a number of most similar, resembling colors equal to the second threshold may be determined based on similarity algorithms or methods known in the art, as indicated in operation 560. For example, if it is determined that there are more than 32,894 different colors in the color scale, the 32,894 most similar colors may be determined in order to allow representing all colors by a two-byte representation, e.g., as described in operations 570-580.
If the number of colors in the color scale is less than a predetermined second threshold, e.g., less than 32,894 colors, a first group of colors in color scale may be encoded using a first unique representation as indicated in operation 570. The first unique representation may include a predefined first number of information units and an indication to mark the first unique representation. A second group of colors in color scale may be encoded using a second unique representation as indicated in operation 580. The second unique representation may include a predefined second number of information units and an indication to mark the second unique representation.
For example, if there are fewer than 32,894 colors in the color scale, the first 127 colors in the color scale may be encoded by a one-byte representation which may include a unique seven-bit sequence and an indication bit which may include a “zero” value at the beginning of the one-byte representation sequence. The exemplary number of 127 colors is the maximal number which may be encoded by 7 bits, while the additional 8th indication bit may indicate the one-byte representation. The next 32,767 colors in the color scale (which are the result of 32,894 colors less 127 colors) may be encoded by a two-byte representation which may include a unique 15-bit sequence and an indication bit which may include a “one” value at the beginning of the two-byte representation sequence.
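One plausible reading of this one-byte / two-byte scheme can be sketched as follows, where the leading bit of the first byte distinguishes the two forms. The exact bit layout is an assumption made for illustration, not taken from the specification:

```python
# Hedged sketch of the dynamic representation of operations 570-580:
# indices 0-126 fit in one byte whose top (indication) bit is 0; the next
# 32,767 indices use two bytes whose leading indication bit is 1.
def encode_index(index: int) -> bytes:
    if index < 127:
        return bytes([index])          # top bit 0 -> one-byte form
    # indices 127..32,893 -> two-byte form with top bit set
    value = 0x8000 | (index - 127)
    return value.to_bytes(2, "big")

def decode_index(data: bytes) -> int:
    if data[0] & 0x80 == 0:            # indication bit "zero"
        return data[0]
    return (int.from_bytes(data, "big") & 0x7FFF) + 127
```

Since the most frequent colors occupy the lowest indices of the frequency-ordered scale, they receive the short one-byte form, which is what makes this representation effective.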
In operation 590, the encoded color scale may be saved in a dedicated file or database, e.g., in memory 123 of host device 120, and/or memory of remote processor 111 of Fig. 1.
Reference is made to Fig. 6, which is a flowchart of a method for color-based encoding, according to embodiments of the invention. The flowchart of Fig. 6 shows in detail the processes, steps, operations and/or actions performed in operation 240 of Fig. 2. The color-based encoding process may be performed at run time during transmission or streaming of video data from a first device to a second device. For example, when a video data stream is transmitted between host device 120 and/or remote services 110 and target device 130 of Fig. 1.
In operation 610, a check may be performed to ensure a color scale exists, e.g., in memory 123 of host device 120, and/or memory of remote processor 111 of Fig. 1. If it is determined that no color scale was created, a color scale may be created as indicated in operation 620 and described in detail with reference to Fig. 4 and Fig. 5. If a color scale exists, encoding may proceed and video data which may include a plurality of full image data may be obtained as indicated in operation 630, e.g., as described in operation 210 of Fig. 2.
In operation 640, color values of a next pixel may be detected, e.g., RGB values of a next detected pixel in a detected full image.
In operation 650, it may be checked if the color values of the detected pixel, e.g., the RGB values of the detected pixel, are included in the color scale. If the color values of the detected pixel are not included in the color scale, the color values of the detected pixel may be added to the color scale, as indicated in operation 660. This may allow updating the color scale in run-time during encoding of the streamed video data.
If the color values of the detected pixel are included in the color scale, as indicated by operation 670, encoding of the full image may be performed based on the color scale. Determining a number of data bytes to represent the colors in the encoded image may include using a direct representation of colors in the encoded image if a number of colors is below a predefined threshold, and using a dynamic representation of colors in the encoded image if the number of colors is above the predefined threshold. According to embodiments of the invention, the encoding process may be similar to the encoding process of the color scale as described with reference to Fig. 5. The encoded image may be saved in a dedicated memory, e.g., memory 1120 of Fig. 11, in order to allow further processing, filtering and/or additional required operations. During the encoding process all pixels of each detected image may be scanned to find all colors existing in the detected image, e.g., the unique sets of RGB triplets in the detected input image. The encoding may be performed according to or based on the number of colors detected and their frequency based on colors in the color scale and according to required image quality and data size considerations. For example, a number of unique colors may be selected using one of, for example, two options based on the required result. A first option may include, for example, up to 256 unique colors to improve transmission speed of the encoded data and a second option may include 32,894 unique colors to improve image quality.
In operation 680, it is checked and determined if all pixels of the full image were encoded. If not all pixels of the full image were encoded, the process may be repeated by returning to operation 640 and a next pixel of the full image may be examined and encoded according to operations 640-670. If all pixels of the full image are encoded, one or more additional filter algorithms and/or compression algorithms may be run, executed and/or processed as indicated in operation 690. Additional filter algorithms and/or compression algorithms may be applied to the encoded images to further reduce the size of the encoded data and/or to prepare the encoded images for transmission, e.g., for streaming purposes.
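The run-time loop of operations 640-680 can be sketched as follows, with illustrative names; pixels are assumed to be RGB tuples, and the scale is extended in place as new colors appear, per operation 660:

```python
# Illustrative sketch of run-time color-based encoding (operations 640-680):
# each pixel's color is looked up in the scale; colors not yet in the scale
# are appended so the scale stays usable for later frames.
def encode_image(pixels, scale):
    """pixels: list of (R, G, B) tuples; scale: list of known colors.
    Returns a list of scale indices, extending the scale as needed."""
    index_of = {color: i for i, color in enumerate(scale)}
    encoded = []
    for color in pixels:
        if color not in index_of:          # operation 660: update in run-time
            index_of[color] = len(scale)
            scale.append(color)
        encoded.append(index_of[color])
    return encoded
```

The resulting index stream would then pass through the byte-level representation of Fig. 5 and the additional filters of operation 690.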
An additional filtering algorithm that may be applied according to embodiments of the invention may include, “scan line serialization” which may analyze one or more horizontal lines of pixels, also referred to herein as “scanline”. During scan line serialization, each pixel may be represented by one entry in the color scale in a predetermined order, e.g., ordered left to right in a scanline and the plurality of scanlines may also be ordered in a predetermined manner, e.g., from top to bottom. In some embodiments of the invention, further filtering algorithms may include transforming each scanline into a filtered scanline using one of the defined filter types to prepare the scanline for additional image compression. Some embodiments may include additional deflate compression algorithms applied on all the filtered scanlines in the image. In addition, data stream construction may be performed to generate the final required data stream consisting of data chunks.
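Assuming one scale-index byte per pixel, scan line serialization followed by deflate can be sketched as below; the names and the omission of per-scanline filter bytes are simplifications:

```python
# Illustrative sketch of scan line serialization plus deflate compression:
# per-pixel scale indices are grouped into scanlines (left to right, top to
# bottom) and the concatenated lines are deflate-compressed.
import zlib

def serialize_scanlines(indices, width):
    """indices: flat list of per-pixel scale indices (0-255);
    width: pixels per scanline. Returns deflate-compressed bytes."""
    lines = [bytes(indices[i : i + width])
             for i in range(0, len(indices), width)]
    return zlib.compress(b"".join(lines))
```

A fuller pipeline would first transform each scanline with one of the filter types mentioned above before the deflate stage, as in PNG-style compression.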
According to embodiments of the invention, video compression may serve as an additional layer of compression on top of the selected image encoding. Video compression algorithms may be based on the assumption that in most video streams there are pixels that may not change between consecutive images of the video. Each of the plurality of full images or image blocks received within the video stream may be one of two types of images: a) full image data which may contain a full resolution image from the input video; and b) an image divided into sub-images or a grid of smaller images. The divided image data may contain a list of smaller parts inside the image that have changed in comparison to the last displayed frame. These parts may be drawn on top of and/or instead of the same sections in the previous frame. Every sub-image may contain coordinates of the top left corner of that sub-image part in the full image, so the full image may be reconstructed based on that information.
Reference is made to Fig. 8, which is a flowchart of a method for video compression, according to embodiments of the invention. The flowchart of Fig. 8 shows in detail the processes, steps, operations and/or actions performed during transmission of compressed video data in operation 260 of Fig. 2.
In operation 810, a next full image may be obtained, e.g., video data may be obtained or received as described in operation 210 of Fig. 2.
In operation 820, a full frame may be divided based on or according to the division parameters selected or determined in the preliminary scan, as described in detail with reference to operation 370 of Fig. 3 and Fig. 7. A full image may be divided into a plurality of sub-images. The number of sub-images may be defined according to the division parameters.
In operation 830, each sub-image may be compared to a respective sub-image of a previous full image.
In operation 840, it is checked if a change from a previous sub-image is detected. If no change from a previous sub-image is detected, data from the previous sub-image may be used as indicated in operation 860. If a change from a previous sub-image is detected, data of the current sub-image may be used as indicated in operation 850.
In operation 870, it is checked if all sub-images of a full image have been checked or scanned. If not all sub-images of a full image are checked, the process may be repeated by returning to operation 810 and a next full image may be obtained and examined according to operations 810-870.
If all full images of a video stream are checked, the video stream may be prepared for transmission as indicated in operation 880. Preparing the video stream for transmission may include, for example, preparing a list of smaller parts inside the image that have changed in comparison to the last displayed frame. These parts may be received by the end device and drawn on top of and/or instead of the same sections in the previous frame. Every sub-image may contain coordinates of the top left corner of that sub-image part in the full image, so the full image may be reconstructed based on that information, e.g., a full image may contain a plurality of sub-images, each positioned according to its coordinates inside the full image.
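The grid comparison of operations 830-860 can be sketched as follows, with illustrative names; each changed sub-image is kept together with its top-left coordinates so the receiver can redraw only those parts of the previous frame:

```python
# Illustrative sketch of the sub-image comparison (operations 830-860):
# keep only sub-images that differ from the previous frame, each tagged
# with the top-left coordinates needed to reconstruct the full image.
def changed_parts(prev_grid, curr_grid, coords):
    """prev_grid/curr_grid: lists of sub-image bytes in the same grid order;
    coords: matching list of (x, y) top-left positions.
    Returns [(x, y, sub_image_bytes)] for sub-images that changed."""
    return [
        (x, y, curr)
        for (x, y), prev, curr in zip(coords, prev_grid, curr_grid)
        if curr != prev
    ]
```

In a static scene this list is short or empty, which is exactly the bandwidth saving the division into sub-images is meant to capture.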
In operation 890, compressed video data and command data may be transmitted. According to embodiments of the invention, in order to reduce data transfer, e.g., for interactive applications, the transmitted stream may contain data messages which may include compressed video data and command data. The command data may include a dedicated command data transfer channel which may act as a controller and may allow controlling devices such as, for example, smart televisions and live gaming consoles with minimal bandwidth requirements. For example, data and command streams may be treated as independent streams even though both types of messages are sent on the same communication data stream.
According to embodiments of the invention, a dedicated command channel, or command message, may include a plurality of commands. For example, the following four types of commands may be used: a) Universal device commands which may include common device control functionality commands available on most platforms such as, for example, volume control, channel selection, chapter selection, and the like; b) Device specific commands which may include a set of dedicated command slots to define a set of device commands unique to a specific end platform; c) Universal game commands which may include common game control functionality supported by most games, e.g., arrow keys, enter button, and the like; and d) Game specific commands which may include a set of dedicated command slots to define a set of game commands unique to a specific game. Other commands and other types of commands may be defined and used with embodiments of the invention.
Reference is made to Figs. 10A, 10B and 10C, which depict data message forms according to embodiments of the invention. Figs. 10A and 10B depict image data messages and Fig. 10C depicts a protocol data message.
Fig. 10A shows an exemplary representation of a message which may include data related to image packet. The exemplary representation of Fig. 10A may be used according to embodiments of the invention when information or data related to image packets, e.g., image packet 920 of Fig. 9 may be transmitted. Image packet message 1000 may include a plurality of information units, e.g., a plurality of bytes, which may represent the information carried by image packet message 1000. The first group of data bytes may represent a header 990 of image packet message 1000 and a second group of data bytes may represent data 991 carried by image packet message 1000. Header 990 may include, for example, a plurality of fields or areas, each may carry, include, or represent specific data.
For example, header 990 may include 19 bytes which may be divided as follows: 1 byte for a message type 940, 4 bytes for a video frame size 941, 4 bytes for a frame write location indication 942, 4 bytes for a data size indication 943, 2 bytes for a packet number 944, 2 bytes for a total packet indication 945, 2 bytes for a message number indication 946. Data 991 may include up to 65 megabytes (MB) of image packet data. Any other number of bytes may be used.
Message type 940 may include an indication for the type of the message, e.g., “1” may represent a protocol message, “10” may represent a control message, “120” may represent image packet message data and “220” may represent an image frame or image block message. Video frame size 941 may include a size of a current image frame or image block carried by image packet message 1000. Frame write location indication 942 may include an indication of a byte number inside an image frame or image block where the current image packet starts. Data size indication 943 may include an indication of a size of the data portion of a message. Packet number 944 may include a location of the current image packet in a message. Total packet indication 945 may include a number of packets in the message.
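As a sketch, the 19-byte header of Fig. 10A maps naturally onto a fixed binary layout. The field order follows the description above, but the byte order (big-endian here) and the use of Python's struct module are assumptions for illustration:

```python
# Illustrative sketch of packing the 19-byte image packet header of Fig. 10A:
# 1 + 4 + 4 + 4 + 2 + 2 + 2 = 19 bytes.
import struct

HEADER_FMT = ">BIIIHHH"  # big-endian, no padding; assumed byte order

def pack_image_packet_header(msg_type, frame_size, write_loc,
                             data_size, packet_no, total_packets, msg_no):
    """Pack the header fields 940-946 into 19 bytes."""
    return struct.pack(HEADER_FMT, msg_type, frame_size, write_loc,
                       data_size, packet_no, total_packets, msg_no)
```

The receiver can recover the fields with `struct.unpack(HEADER_FMT, header)`, then read the number of data bytes indicated by the data size field 943.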
Fig. 10B shows an exemplary representation of a message which may include data related to an image frame. The exemplary representation of Fig. 10B may be used according to embodiments of the invention when information or data related to an image frame, e.g., image frame 910 of Fig. 9, may be transmitted. Image frame message 1100 may include a plurality of information units, e.g., a plurality of bytes, which may represent the information carried by image frame message 1100. The first group of data bytes may represent a header 992 of image frame message 1100 and a second group of data bytes may represent data 993 carried by image frame message 1100. Header 992 may include, for example, a plurality of fields or areas, each of which may carry, include, or represent specific data.
For example, header 992 may include 19 bytes which may be divided as follows: 1 byte for a message type 940, 4 bytes for x coordinate start location 951, 4 bytes for y coordinate start location 952, 4 bytes for a data size indication 943, 2 bytes for a packet number 944, 2 bytes for a total packet indication 945, 2 bytes for a message number indication 946. Data 993 may include up to 65 megabytes (MB) of image frame data. Any other number of bytes may be used.
Message type 940, Data size indication 943, Packet number 944, Total packet indication 945 and message number indication 946 may be similar to their description in Fig. 10A. X coordinate start location 951 may include an X coordinate of the top left corner of the image frame, and Y coordinate start location 952 may include a Y coordinate of the top left corner of the image frame.
Fig. 10C shows an exemplary representation of a protocol data message which may include data related to control commands. Protocol or commands message 1200 may include a plurality of information units, e.g., a plurality of bytes, which may represent information related to control commands. Protocol message 1200 may include unique communication commands which may be transmitted within the compressed video data or alternately with compressed video data on a same communication channel.
A first group of data bytes may represent a header 994 of protocol message 1200 and a second group of data bytes may represent data 995 carried by protocol message 1200. Header 994 may include, for example, a plurality of fields or areas, each may carry, include, or represent specific data.
For example, header 994 may include 5 bytes which may be divided as follows: 1 byte for a message type 940 and 4 bytes for a message length field 961. Data 995 may include up to 65 megabytes (MB) of protocol command message data 962. Any other number of bytes may be used. Message type 940 may be similar to its description in Fig. 10A, and message length field 961 may include an indication of the length of the data portion of the protocol message, e.g., message data 962. Message data 962 may include data related to unique communication commands or control commands to be used in accordance with embodiments of the invention.
It should be understood by a person skilled in the art that the presentation depicted in Figs. 10A, 10B and 10C shows exemplary representations of data message forms. Any other data message form may be used in accordance with embodiments of the invention.
Fig. 11 illustrates an exemplary computing device according to an embodiment of the invention. For example, a computing device 1100 with a processor 1105 may be used to encode a digital image or a plurality of digital images and to perform color-based encoding operations, according to embodiments of the invention. Each of the operations and/or processes described in embodiments of the invention may be performed by one or more systems or elements of Fig. 11.
Computing device 1100 may include a processor 1105 that may be, for example, a central processing unit (CPU), a chip or any suitable computing or computational device, an operating system 1115, a memory 1120, a storage 1130, input devices 1135, a graphics processing unit (GPU) 1180 and output devices 1140. Processor 1105 may be or include one or more processors, etc., co-located or distributed. Computing device 1100 may be for example a smart device, a smartphone, a workstation or a personal computer, a laptop, or may be at least partially implemented by one or more remote servers (e.g., in the “cloud”).
Operating system 1115 may be or may include any code segment designed and/or configured to perform tasks involving coordination, scheduling, arbitration, supervising, controlling or otherwise managing operation of computing device 1100, for example. Operating system 1115 may be a commercial operating system. Memory 1120 may be or may include, for example, a Random Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units. Memory 1120 may be or may include a plurality of possibly different memory units.
Executable code 1125 may be any executable code, e.g., an application, a program, a process, task or script. Executable code 1125 may be executed by processor 1105 possibly under control of operating system 1115. For example, executable code 1125 may be or include code for encoding one or more digital images, according to embodiments of the invention.
Storage 1130 may be or may include, for example, a hard disk drive, a floppy disk drive, a Compact Disk (CD) drive, a CD-Recordable (CD-R) drive, a universal serial bus (USB) device or other suitable removable and/or fixed storage unit. In some embodiments, some of the components shown in Fig. 11 may be omitted. For example, memory 1120 may be a non-volatile memory having the storage capacity of storage 1130. Accordingly, although shown as a separate component, storage 1130 may be embedded or included in memory 1120. Storage 1130 and/or memory 1120 may be configured to store an electronic or digital image gallery including image files 732, including a digital image 733 and metadata 734, and any other parameters required for performing embodiments of the invention. Input devices 1135 may be or may include a camera, a mouse, a keyboard, a touch screen or pad or any suitable input device. It will be recognized that any suitable number of input devices may be operatively connected to computing device 1100 as shown by block 1135. Output devices 1140 may include one or more displays, speakers and/or any other suitable output devices. It will be recognized that any suitable number of output devices may be operatively connected to computing device 1100 as shown by block 1140. Any applicable input/output (I/O) devices may be connected to computing device 1100 as shown by blocks 1135 and 1140. For example, a wired or wireless network interface card (NIC), a modem, a printer or facsimile machine, a universal serial bus (USB) device or external hard drive may be included in input devices 1135 and/or output devices 1140.
Network interface 1150 may enable device 1100 to communicate with one or more other computers or networks. For example, network interface 1150 may include a Wi-Fi or Bluetooth device or connection, a connection to an intranet or the internet, an antenna etc. GPU 1180 may enable computer graphics manipulation, image processing and/or acceleration of real-time graphics applications. GPU 1180 may be implemented as an integrated or discrete unit and may be used in embodiments of the invention, e.g., by host device 120 and/or remote services 110 of Fig. 1. Embodiments described in this disclosure may include the use of a special purpose or general-purpose computer including various computer hardware or software modules, as discussed in greater detail below. Embodiments within the scope of this disclosure also include computer-readable media, or non-transitory computer storage medium, for carrying or having computer-executable instructions or data structures stored thereon. The instructions when executed may cause the processor to carry out embodiments of the invention. Such computer-readable media, or computer storage medium, can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired and wireless) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium.
Combinations of the above should also be included within the scope of computer-readable media.
Computer-executable instructions comprise, for example, instructions and data which cause a general-purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
As used herein, the term "module" or "component" can refer to software objects or routines that execute on the computing system. The different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads). While the system and methods described herein are preferably implemented in software, implementations in hardware or a combination of software and hardware are also possible and contemplated. In this description, a “computer” may be any computing system as previously defined herein, or any module or combination of modules running on a computing system.
For the processes and/or methods disclosed, the functions performed in the processes and methods may be implemented in differing order as may be indicated by context. Furthermore, the outlined steps and operations are only provided as examples, and some of the steps and operations may be optional, combined into fewer steps and operations, or expanded into additional steps and operations. The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its scope. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The present disclosure is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled. It is also to be understood that the terminology used in this disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting. This disclosure may sometimes illustrate different components contained within, or connected with, different other components. Such depicted architectures are merely exemplary, and many other architectures can be implemented which achieve the same or similar functionality.
Aspects of the present disclosure may be embodied in other forms without departing from its spirit or essential characteristics. The described aspects are to be considered in all respects illustrative and not restrictive. The claimed subject matter is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims

1. A method for color-based encoding of image frames in a video stream, the method comprising: scanning one or more image frames from video data, wherein the video data comprises a plurality of image frames; creating a color scale to represent an indexed value of pixel colors based on a frequency of pixel colors in an image frame, wherein a size of the indexed value is dynamically determined based on a number of colors in the scale; and encoding each of the plurality of image frames based on the color scale.
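Claim 1 describes building a color scale ordered by pixel-color frequency and encoding each frame against it, with the index width determined by the number of colors in the scale. The following Python sketch is purely illustrative of that idea; the function names and the bit-width rule are assumptions made for the example, not details taken from the claims.

```python
from collections import Counter

def build_color_scale(pixels):
    """Index colors by frequency (most frequent first), so the most
    common colors receive the smallest index values."""
    freq = Counter(pixels)  # pixels: list of (r, g, b) tuples
    ordered = [color for color, _ in freq.most_common()]
    return {color: idx for idx, color in enumerate(ordered)}

def encode_frame(pixels, scale):
    """Replace each pixel's RGB triple with its index in the color scale.
    The index width (in bits) is derived from the number of colors in
    the scale, so small palettes yield small encoded frames."""
    index_bits = max(1, (len(scale) - 1).bit_length())
    return [scale[p] for p in pixels], index_bits

# Usage: a tiny 2x2 "frame" with two colors; red is most frequent,
# so it gets index 0, and 1 bit per pixel suffices for the whole frame.
frame = [(255, 0, 0), (255, 0, 0), (0, 0, 255), (255, 0, 0)]
scale = build_color_scale(frame)
indices, index_bits = encode_frame(frame, scale)
```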
2. The method of claim 1, wherein the color scale size is dynamically determined based on a number of colors in the image frame.
3. The method of claim 1, wherein a size of an encoded image frame corresponds to the number of colors in the image.
4. The method of claim 1, wherein scanning the one or more image frames comprises determining one or more division parameters of an image.
5. The method of claim 1, wherein scanning the one or more image frames comprises: encoding the one or more image frames according to a plurality of encoding methods; and selecting an encoding method from the plurality of encoding methods according to the results of the encoding.
6. The method of claim 5, wherein selecting an encoding method from the plurality of encoding methods comprises selecting an encoding method based on a size of one or more encoded image frames.
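Claims 5 and 6 describe trial-encoding a frame with several candidate methods and keeping the one whose output is smallest. A minimal Python sketch of that selection step follows; the encoder names and the use of zlib are assumptions for illustration only.

```python
import zlib

def pick_encoding(frame_bytes, encoders):
    """Encode the same frame with every candidate method and keep the
    one producing the smallest output. `encoders` maps a method name
    to a function bytes -> bytes."""
    results = {name: fn(frame_bytes) for name, fn in encoders.items()}
    best = min(results, key=lambda name: len(results[name]))
    return best, results[best]

# Usage with two toy "methods": identity and deflate compression.
# Highly repetitive data compresses well, so "deflate" is selected.
encoders = {"raw": lambda b: b, "deflate": zlib.compress}
method, encoded = pick_encoding(b"\x00" * 1000, encoders)
```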
7. The method of claim 1, wherein creating a color scale comprises identifying a frequency of sets of red, green, blue (RGB) values of a plurality of pixels.
8. The method of claim 1, wherein creating a color scale comprises updating the color scale when a new set of red, green, blue (RGB) values of a pixel is detected.
9. The method of claim 1, wherein encoding each of the plurality of image frames based on the color scale comprises determining a number of data bytes to represent the colors in the encoded image.
10. The method of claim 9, wherein determining a number of data bytes to represent the colors in the encoded image comprises using a direct representation of colors in the encoded image if a number of colors is below a predefined threshold and using a dynamic representation of colors in the encoded image if a number of colors is above a predefined threshold.
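Claims 9 and 10 choose between a direct and a dynamic representation of color indices depending on whether the color count crosses a threshold. The sketch below illustrates one way such a choice could work; the threshold value of 256 and the byte widths are assumptions, not values from the claims.

```python
def index_representation(num_colors, threshold=256):
    """Choose how many data bytes represent each color index: a fixed
    one-byte 'direct' index while the palette is at or below the
    threshold, and a wider 'dynamic' index once it exceeds it."""
    if num_colors <= threshold:
        return "direct", 1  # one byte addresses up to 256 colors
    # dynamic: use just enough whole bytes for the palette size
    nbytes = ((num_colors - 1).bit_length() + 7) // 8
    return "dynamic", nbytes
```

For example, a 100-color frame needs only one byte per pixel index, while a 5000-color frame would be given two.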
11. The method of claim 1, further comprising: compressing the video data based on the division parameters; and transmitting the compressed video.
12. The method of claim 11, wherein compressing the video data comprises: dividing each of the plurality of image frames into a plurality of sub-images; detecting a sub-image that includes changes in comparison to a previous sub-image; and transmitting the sub-image that includes changes.
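Claim 12 compresses by splitting each frame into sub-images and transmitting only those that changed relative to the previous frame. A minimal Python sketch of that diffing step follows; the tile-based split and the data layout are illustrative assumptions.

```python
def changed_tiles(prev, curr, tile):
    """Split two frames (2-D lists of pixel values) into tile x tile
    sub-images and return the coordinates and data of every sub-image
    that differs from the previous frame; only these need transmitting."""
    h, w = len(curr), len(curr[0])
    changed = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = [row[x:x + tile] for row in curr[y:y + tile]]
            prev_block = [row[x:x + tile] for row in prev[y:y + tile]]
            if block != prev_block:
                changed.append(((y, x), block))
    return changed

# Usage: two 4x4 frames where a single pixel changes, so only the one
# affected 2x2 sub-image would be transmitted.
a = [[0] * 4 for _ in range(4)]
b = [row[:] for row in a]
b[3][3] = 9
diff = changed_tiles(a, b, 2)
```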
13. The method of claim 11, wherein transmitting the compressed video data comprises transmitting alternately unique communication commands.
14. The method of any preceding claim, comprising decoding an encoded image frame.
15. The method of claim 14, wherein decoding comprises reversing one or more steps used in encoding the image frame.
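Claims 14 and 15 recover an image frame by reversing the encoding steps. For the color-scale encoding of claim 1, the reversal amounts to inverting the color-to-index mapping, as in this illustrative Python sketch (the two-color scale in the usage example is assumed):

```python
def decode_frame(indices, scale):
    """Reverse the color-scale encoding: invert the color -> index
    mapping and look each index back up to recover the RGB pixels."""
    inverse = {idx: color for color, idx in scale.items()}
    return [inverse[i] for i in indices]

# Usage: round-trip with an assumed scale of two colors
scale = {(255, 0, 0): 0, (0, 0, 255): 1}
pixels = decode_frame([0, 0, 1, 0], scale)
```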
16. A system for color-based encoding of image frames in a video stream, the system comprising: a memory; and a processor configured to: scan one or more image frames from video data, wherein the video data comprises a plurality of image frames; create a color scale to represent an indexed value of pixel colors based on a frequency of pixel colors in an image frame, wherein a size of the indexed value is dynamically determined based on a number of colors in the scale; and encode each of the plurality of image frames based on the color scale.
17. The system of claim 16, wherein the color scale size is dynamically determined based on a number of colors in the image frame.
18. The system of claim 16, wherein a size of an encoded image frame corresponds to the number of colors in the image.
19. The system of claim 16, wherein the processor is configured to scan one or more image frames by: determining division parameters of an image; encoding the one or more image frames according to a plurality of encoding methods; and selecting an encoding method from the plurality of encoding methods according to results of the encoding.
20. The system of claim 19, wherein the processor is configured to select an encoding method from the plurality of encoding methods by selecting an encoding method based on a size of one or more encoded image frames.
21. The system of claim 16, wherein the processor is configured to create a color scale by identifying a frequency of sets of red, green, blue (RGB) values of pixels and updating the color scale when a new set of red, green, blue (RGB) values of a pixel is detected.
22. The system of claim 16, wherein the processor is configured to encode each of the plurality of image frames based on the color scale by determining a number of data bytes to represent the colors in the encoded image.
23. The system of claim 22, wherein determining a number of data bytes to represent the colors in the encoded image comprises using a direct representation of colors in the encoded image if a number of colors is below a predefined threshold and using a dynamic representation of colors in the encoded image if a number of colors is above a predefined threshold.
24. The system of claim 16, wherein the processor is configured to transmit the compressed video data by compressing the video data based on the division parameters.
25. The system of claim 24, wherein the processor is configured to compress the video data by: dividing each of the plurality of image frames into a plurality of sub-images; detecting a sub-image that includes changes in comparison to a previous sub-image; and transmitting the sub-image that includes changes.
26. The system of claim 16, wherein the processor is configured to transmit compressed video data by transmitting unique communication commands within the compressed video data.
PCT/IL2022/050457 2021-05-05 2022-05-02 System and method for dynamic video compression WO2022234575A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163184216P 2021-05-05 2021-05-05
US63/184,216 2021-05-05

Publications (1)

Publication Number Publication Date
WO2022234575A1 true WO2022234575A1 (en) 2022-11-10

Family

ID=83932637

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2022/050457 WO2022234575A1 (en) 2021-05-05 2022-05-02 System and method for dynamic video compression

Country Status (1)

Country Link
WO (1) WO2022234575A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5956431A (en) * 1997-10-02 1999-09-21 S3 Incorporated System and method for fixed-rate block-based image compression with inferred pixel values
WO2010018494A1 (en) * 2008-08-11 2010-02-18 Nxp B.V. Image compression
CN106331716A (en) * 2016-08-31 2017-01-11 钟炎培 Video compression method and device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22798776; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22798776; Country of ref document: EP; Kind code of ref document: A1)