AU2002309519A1 - Digital image compression - Google Patents

Digital image compression

Info

Publication number
AU2002309519A1
Authority
AU
Australia
Prior art keywords
areas
interest
accordance
image
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
AU2002309519A
Other versions
AU2002309519B2 (en)
Inventor
Richard A. Keeny
Thor A. Olson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronics for Imaging Inc
Original Assignee
Electronics for Imaging Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/821,104 external-priority patent/US7027655B2/en
Application filed by Electronics for Imaging Inc filed Critical Electronics for Imaging Inc
Publication of AU2002309519A1 publication Critical patent/AU2002309519A1/en
Application granted granted Critical
Publication of AU2002309519B2 publication Critical patent/AU2002309519B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Description

DIGITAL IMAGE COMPRESSION WITH SPATIALLY VARYING QUALITY LEVELS DETERMINED BY IDENTIFYING AREAS OF INTEREST
BACKGROUND OF THE INVENTION
The present invention provides methods and systems for compression of digital images (still or motion sequences) wherein predetermined criteria may be used to identify a plurality of areas of interest in the image, and each area of interest is encoded with a corresponding quality level (Q-factor). In particular, the predetermined criteria may be derived from measurements of where a viewing audience is focusing their gaze (area of interest). Portions of the image outside of the areas of interest are encoded at a lower quality factor and bit rate. The result is higher compression ratios without adversely affecting a viewer's perception of the overall quality of the image.
The invention is an improvement to the common practice of encoding, compressing, and transmitting digital image data files. Due to the large size of the data files required to produce a high quality representation of a digitally sampled image, it is common practice to apply various forms of compression to the data file in an attempt to reduce the size of the data file without significant adverse effects on the perceived quality of the image.
Various well-known techniques and standards have evolved to address this need. Representative of these techniques is the JPEG standard for image encoding. The MPEG standard is similar to JPEG but adds inter-frame encoding to take advantage of the similarity of consecutive frames in a motion sequence. Other standards and proprietary systems have been developed based on wavelet transforms.
These prior art techniques all transform the image samples into the frequency domain and then quantize and/or truncate the number of bits used to sample the higher frequency components. This step is typically followed by entropy encoding of the frequency coefficients. MPEG and JPEG use a discrete cosine transform on 8x8 pixel blocks to transform the image samples into the frequency domain while wavelet techniques use more sophisticated methods on larger areas of pixels.
The quantization or truncation step is where the loss of information is introduced. All of the other steps are reversible without loss of information. The degree of quantization and truncation is controlled by the encoding system to produce the desired data compression ratio. Although the method of controlling the quantization and truncation varies from system to system, the concept is generalized by those working in the field to that of a quality or "Q" factor. The Q factor is representative of the resulting fidelity or quality of the image that remains after this step.
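The quantization step can be illustrated with a minimal JPEG-style sketch. The quantization table and the 50/Q scaling convention below are simplifications chosen for illustration only; they are not taken from any standard nor from the method described herein.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative quantization table: coarser step sizes for higher spatial
# frequencies (not the standard JPEG luminance table).
Q_TABLE = 16.0 + 4.0 * np.add.outer(np.arange(8), np.arange(8))

def quantize_block(block, q_factor):
    """Forward DCT and quantization of one 8x8 block; q_factor in (0, 100]."""
    steps = np.maximum(np.round(Q_TABLE * (50.0 / q_factor)), 1.0)
    coeffs = dctn(block.astype(float) - 128.0, norm='ortho')  # reversible transform
    return np.round(coeffs / steps), steps                    # the lossy step

def dequantize_block(qcoeffs, steps):
    """Invert the reversible steps; the rounding loss cannot be recovered."""
    return idctn(qcoeffs * steps, norm='ortho') + 128.0
```

A higher q_factor yields smaller step sizes, so more of the high-frequency detail survives the rounding; a lower q_factor discards more of it, and the entropy coder then needs fewer bits.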
In the JPEG standard, control of the Q factor is set almost directly by the user at the time of encoding. In most encoders, it is global to the entire image. An image encoded using a standard JPEG encoder will result in degradation which is uniform over the entire image. Regardless of the importance of a particular part of an image to a viewer, the JPEG encoder simply truncates the higher frequency coefficients to produce a smaller file size at the expense of image fidelity. Prior art JPEG image compression makes no provisions to include high level cognitive information in the compression process.
In the MPEG standard, the Q factor is controlled indirectly by the bit-rate control mechanism of the encoder. The user (or system requirements such as the bandwidth of a DVD player or satellite channel) typically sets the maximum bit rate. Due to the complex interaction of the inter-frame encoding and the hard-to-predict relationship between the Q factor used during compression and the resulting data file size, the bit rate control is typically implemented as a feedback mechanism. As the bit rate budget for a sequence of frames starts to run low, a global Q factor is decreased; conversely, if the bit rate is under budget, the Q factor is increased.
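The feedback idea can be sketched in a few lines; the step size and limits below are arbitrary illustrative values, not those of any particular encoder.

```python
def update_global_q(q, bits_spent, bits_budgeted, step=2, q_min=1, q_max=100):
    # Over budget: lower the global Q factor (coarser quantization, fewer bits).
    if bits_spent > bits_budgeted:
        return max(q_min, q - step)
    # Under budget: raise the global Q factor (finer quantization, more bits).
    if bits_spent < bits_budgeted:
        return min(q_max, q + step)
    return q
```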
The MPEG standard also makes provisions for block-by-block Q factor control. Typically this level of control is accomplished by a measurement of the "activity" level contained in the block. Blocks with more "activity" are encoded with higher Q factors. The activity level is usually a simple weighted average of some important frequency coefficients, or based on the difference (motion) from the previous frame in that portion of the image.
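As a hedged example of such an activity measure, a weighted average of DCT coefficient magnitudes could be computed per block and then mapped to a block Q factor; the weights, base value, and gain below are placeholders for illustration.

```python
import numpy as np

def block_activity(dct_coeffs, weights):
    """Weighted average of coefficient magnitudes as a crude activity measure."""
    return float(np.average(np.abs(dct_coeffs), weights=weights))

def activity_to_q(activity, base_q=30.0, gain=0.5, q_max=100.0):
    """More activity -> higher block Q factor (illustrative mapping only)."""
    return min(q_max, base_q + gain * activity)
```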
Wavelet system standards are just starting to emerge. Some of these standards make provisions for varying Q factors over the area of the image. These prior art systems attempt to preserve the image data content according to those portions most important to the human visual system (or a simplified model of it). Such prior art systems typically have no ability to make higher level decisions based on image content such as recognizable objects and features.
Some research in higher level image content recognition has been undertaken. Systems have been demonstrated that are able to identify specific objects in a scene and even, for example, recognize faces. The prior art in these areas does not describe using this information to control compression.
Certain prior art systems provide for a viewer determined area of interest. For example, US 4,028,725 to Lewis provides a vision system where the resolution of the display is increased in the viewer's line of sight. US 5,909,240 to Hori describes block compression of a video image performed during recording of the image based on the camera operator's viewpoint, which is determined using an eye tracking device associated with the recording device. US 5,103,306 to Weiman et al. discloses a system of image encoding with variable resolution centered around a point responsive to a single viewer's eye gaze.
In all such prior art, the area of interest is limited to one area designated by one viewer. This works for that one viewer at the time of viewing, but other viewers, or even the same viewer re-watching the recorded scene, may not direct their viewpoint to the same single location. In general, the prior art does not describe or suggest a system of image compression based on the ability to predict or determine multiple areas of interest and encode the areas of interest at a higher Q-factor. It would be advantageous to provide a system whereby encoding is based on area of interest classification using predetermined criteria such that higher Q-factors are assigned to the areas of interest. It would be further advantageous to provide a system whereby the predetermined criteria may be based on measurements of a viewing audience's eye gaze.
To effectively include high quality image content that anticipates the variety of viewpoints different viewers may choose, it is important to be able to determine multiple areas of interest and to encode and compress the image so that all of the areas of interest are included at high quality while the compression ratio is improved. Corresponding methods and systems are provided.
SUMMARY OF THE INVENTION
The present invention provides methods and systems for compression of digital images (still or motion sequences) wherein predetermined criteria may be used to identify a plurality of areas of interest in the image, and each area of interest is encoded with a corresponding quality level (Q-factor). In particular, the predetermined criteria may be derived from measurements of where a viewing audience is focusing their gaze (area of interest). In addition, the predetermined criteria may be used to create areas of interest in an image in order to focus an observer's attention to that area. Portions of the image outside of the areas of interest are encoded at a lower quality factor and bit rate. The result is higher compression ratios without adversely affecting a viewer's perception of the overall quality of the image.
In an illustrative embodiment of the invention, a digital image is displayed. Means are provided for identifying a plurality of areas of interest in the digital image. Identified areas of interest are encoded at a first quality level and unidentified areas of the image are encoded at a second and lower quality level than the identified areas.
A quantization map (Q-Map) may be created based on the identified areas of interest. The encoding may then be performed based on the Q-Map.
The digital image may be a single still frame or one digital image in a sequence of images in a digital motion picture. Areas of interest may be identified for each image in a sequence. Alternatively, areas of interest may be identified only for selected images in the sequence of images. In this instance, areas of interest for any remaining images in the sequence may be extrapolated from the identified areas of interest.
The areas of interest may be determined by displaying an image to a target audience and observing their eye-gaze. The means for identifying areas of interest may comprise, for example, one or more eye tracking mechanisms for tracking the eye gaze point of one or more viewers who view the image. Alternatively, the means for identifying areas of interest may comprise a pointing device for one or more viewers to designate the areas of interest on the displayed image.
The areas of interest may be identified by a single viewer or a group of viewers. The viewers may comprise a representative audience made up of people likely to view the image. A histogram may be used to determine the most popular areas of interest.
In an alternate embodiment, the areas of interest may be identified in real time during live transmission of the image.
The digital image may be a spatially representative version of the image to be encoded.
In a further embodiment of the invention, values may be assigned to each area of interest based on the amount of viewer interest in that area, first values being assigned to areas with higher interest and second values being assigned to areas of lower interest. Each area of interest is encoded at a quality level corresponding to the assigned value, the areas with the first values being encoded at higher quality levels than the areas with the second values.
Encoding of the areas of interest may be performed to provide a gradual transition in quality between an identified area of interest and an unidentified area.
The encoding may be performed using a block discrete cosine transform (DCT). Using DCT, the quality level for blocks of pixels may be adjusted for the areas of interest through the use of a quantization scale factor encoded for each block of pixels. The quality levels of the unidentified areas may be adjusted downward by: (i) truncating one or more DCT frequency coefficients; (ii) setting to zero one or more DCT frequency coefficients; or (iii) otherwise discarding one or more DCT frequency coefficients, on a block by block basis.
Alternatively, the encoding may be performed using a wavelet transform.
In an alternate embodiment of the invention, the quality level for the unidentified areas may be adjusted downward by pre-filtering the image using a spatial frequency filter prior to encoding. In a further embodiment, the identified areas of interest are sampled at a higher spatial resolution than the unidentified areas. The identified areas of interest may then be encoded in one or more additional data streams. The additional data stream(s) may be encoded at a first quality level, and a data stream which contains the unidentified areas may be encoded at a second quality level. In addition, the additional data stream(s) may be encoded using a first method, and a data stream containing the unidentified areas may be encoded using a second method.
The invention may be implemented so that the areas of interest can be identified while the image is in transit (e.g., while the image data is being transmitted from one location to another). Alternatively, the areas of interest may be identified while the image is partially displayed.
Further, the quality level of the unidentified areas of the image may be reduced for security purposes.
The invention can be implemented to maintain a constant bit rate or a constant compression ratio.
In a further embodiment of the invention, the identified areas of interest are transmitted according to level of interest, so that areas with a higher level of interest are transmitted first, with successively lower interest level areas transmitted successively thereafter. The image can then be built up as it is received starting with the areas of highest interest.
The invention can also be used to record statistical data regarding the identified areas of interest. Identified areas of interest from multiple images may be statistically recorded. The multiple images can be from multiple sources.
The invention can be implemented such that the quality levels of certain image areas are enhanced in order to artificially create areas of interest so that, for example, a viewer's attention will be drawn to the artificially created area(s) of interest. These artificially enhanced areas may consist of image areas containing a product, a name of a product, or any other portion of the image which it would be desirable to enhance.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows a block diagram of a simplified exemplary embodiment of the invention;
Figure 2 shows a block diagram of a further exemplary embodiment of the invention;
Figure 3 shows details of the creation of a Q-Map in accordance with the invention; and
Figure 4 shows a block diagram of an alternate embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention provides methods and systems for compression of digital images (still or motion sequences) wherein predetermined criteria may be used to identify a plurality of areas of interest in the image, and each area of interest is encoded with a corresponding quality level (Q-factor). In particular, the predetermined criteria may be derived from measurements of where a viewing audience is focusing their gaze (area of interest). In addition, the predetermined criteria may be used to create areas of interest in an image in order to focus an observer's attention to that area. Portions of the image outside of the areas of interest are encoded at a lower quality factor and bit rate. The result is higher compression ratios without adversely affecting a viewer's perception of the overall quality of the image.
The invention provides for an improved compression ratio achieved at a given perceived quality level when encoding and compressing digital images. This is accomplished by budgeting higher Q factors for multiple portions of the image (identified areas of interest), and lower Q factors for other portions of the image
(unidentified areas). The invention is advantageous where the data for a digital motion picture is to be transmitted from a central location and stored on multiple (e.g., many hundreds) of servers across the country or around the world. In such a distribution scenario, it is advantageous to spend considerable time and effort to achieve the best possible compression ratio for a given image quality in order to reduce the transmission time and the cost of the storage space on the remote servers.
In a simplified illustrative embodiment as shown in Figure 1, a digital image 10 is displayed on a display device 70. Means 20 are provided for identifying one or more areas of interest in the digital image 10. Information relating to the identified areas of interest is provided to an encoder 40, along with the digital image data. The encoder 40 encodes the identified areas of interest of the image at a first quality level and encodes the unidentified areas of the image at a second and lower quality level than the identified areas. The encoded image data may then be stored or transmitted to theaters for storage and display.
In an illustrative embodiment of the invention as shown in Figure 2, a digital image 10 is displayed (previewed) on a display device 70. Means 20 are provided for identifying one or more areas of interest in the digital image. Identified areas of interest are shown at 30. At an encoding device 40, the identified areas of interest (as shown at 30) are encoded at a first quality level and unidentified areas of the image are encoded at a second and lower quality level than the identified areas.
In the example shown in Figure 2, encoder 40 creates a compressed master copy 80 of image 10, with identified areas of interest 30 encoded at a higher quality level than the unidentified areas of image 10. Master copy 80, which may be a series of images comprising a digital motion picture, may be, for example, transmitted to theaters via satellite as shown at 85. The compressed master copy of the image (or motion picture) may be stored for playback at multiple theaters 90. A standard decoder 95 (e.g., a standard JPEG or MPEG decoder) can then be used to decode the stored master copy to produce an image 10' for viewing by the intended audience.
A Q-Map 50 may be created based on the areas of interest identified during the identifying step. Q-Map 50 provides information to encoder 40 regarding which areas of image 10 have been identified as areas of interest 30. The encoding 40 may then be performed based on Q-Map 50, such that the identified areas of interest 30 are encoded at a higher quality level than unidentified areas of image 10.
Figure 3 illustrates an exemplary formation of Q-Map 50. Image 10 is viewed by an observer or multiple observers who designate one or more areas of interest as shown at 12. The locations of these areas of interest 12 are used to create Q-Map 50 (e.g., in software). For example, Q-Map 50 may be added to the internal Q-Map utilized by an MPEG encoder. Although adding Q-Map 50 to the internal Q-Map of an MPEG encoder may result in a slight increase in the bit rate, the bit rate feedback mechanism will compensate by reducing the overall Q factor used.
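One plausible way to turn recorded gaze locations into such a Q-Map is sketched below; the macroblock size, radius, and Q boost are assumed values for illustration rather than figures from this disclosure.

```python
import numpy as np

def build_q_map(width, height, gaze_points, block=16, radius=96,
                base_q=0.0, boost_q=10.0):
    """Per-macroblock Q offsets: blocks near any gaze point receive a boost."""
    cols, rows = width // block, height // block
    q_map = np.full((rows, cols), base_q)
    for r in range(rows):
        for c in range(cols):
            cx, cy = c * block + block / 2, r * block + block / 2
            if any((cx - gx) ** 2 + (cy - gy) ** 2 <= radius ** 2
                   for gx, gy in gaze_points):
                q_map[r, c] = boost_q
    return q_map  # can be added to an encoder's internal block Q factors
```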
Digital image 10 may be a single still frame or one digital image in a sequence of images in a digital motion picture.
Areas of interest 30 may be identified for each image 10 in a sequence. Alternatively, areas of interest 30 may be identified only for selected images in the sequence of images. In this instance, areas of interest 30 for any remaining images in the sequence are extrapolated from the identified areas of interest 30.
As shown in Figure 2, the means for identifying areas of interest 20 may comprise one or more eye tracking mechanisms for tracking the eye gaze point of one or more viewers 60 as the one or more viewers 60 view image 10. Such tracking mechanisms allow for passive participation on the part of the viewers 60. Viewers 60 would then only need to view image(s) 10 or the motion picture sequence as they normally would.
Many eye tracking systems have been described in the prior art, and suitable eye tracking systems are also commercially available, for example the Eyegaze Eyetracking System marketed by LC Technologies, Inc. of Fairfax, Virginia. These systems have been used in the past for applications such as allowing disabled people to communicate and use computers, as well as academic studies of the psychology of visual perception, studies of the psychology of visual tasks, and other related areas. Measuring the area of interest information for multiple viewers 60 can be accomplished either by having the multiple viewers 60 view the images 10 one at a time on a single eye-tracking equipped display system, by having multiple systems, one for each viewer, or by a single display system with multiple eye-tracking inputs, one for each viewer. Figure 2 shows multiple eye tracking mechanisms 20 for use by multiple viewers 60 simultaneously viewing the image 10, which results in several identified areas of interest 30.
Alternatively, means 20 for identifying areas of interest 30 may comprise a pointing device for one or more viewers 60 to designate the areas of interest 30 on image 10. For still images 10, pointing can be accomplished with devices such as a digitizing tablet with a hard copy of image 10 placed on it. For moving images or for more convenience, a mouse-controlled cursor on an electronic display of image 10 can be utilized. The pointing may be done with images 10 displayed one at a time or slower than real time. Additionally, the pointing may only need to be done on key frames, with the areas of interest for the remaining frames being interpolated.
Those skilled in the art will recognize that many alternative methods and devices are available for determining the areas of interest. For example, area of interest determination may be based on empirical measurements of eye-gaze, predictions of areas of interest based on historic eye-gaze data, predictions of areas of interest based on pattern matching, or other suitable criteria. Viewers may verbally describe the areas of interest to a system operator, who enters the area of interest information into the system using, e.g., a pointing device or other suitable input means. Eye gaze of a viewer or group of viewers may be noted by one or more additional people watching the viewer(s), who are then able to enter this information into the system. Viewers can be presented with several versions of the image, each version having different predetermined areas of interest, such that the viewers can choose a version of the image that they prefer. Software capable of object recognition may be used to determine common predefined areas of interest, such as faces, eyes, and mouths in close-up views of people in the image, hands or any implements contained in the hands, the area of the image towards which people in the image are looking, the area of the image towards which movement in the image is directed, the center of the image, any objects of importance in the image, and the like. Any other suitable means may also be used to determine or identify areas of interest.
Further, those skilled in the art will recognize that, although the invention is described in terms of identifying areas of interest, the invention can be implemented so that areas of non-interest are identified. These areas of non-interest can be encoded at a lower quality level than the other areas of the image. For example, it may be desirable to identify corners or extreme edges of the image as areas of non-interest so that they are encoded at a lower quality level than the remainder of the image. Similarly, background scenes may be identified as areas of non-interest and encoded at lower quality levels than the remainder of the image.
Since the digital image data (e.g., motion picture data) to be transmitted can be prepared several days in advance, it is possible to preview 70 image 10 in front of a representative audience of viewers 60 and gather their area of interest information in a statistical manner. In a preferred embodiment, areas of interest 30 may be identified by a single viewer or a group of viewers. The viewers may comprise a representative audience 60 made up of people likely to view image 10. The representative audience 60 should be a reasonable statistical sample of the intended target audience that will view the image (e.g., at a theater). In order to collect information on multiple areas of interest 30, the representative audience 60 should be comprised of a sufficient number of viewers. In the preferred embodiment, the minimum preview audience size would be ten viewers. The maximum preview audience size is limited by the logistics and costs associated with gathering the area of interest information, typically on the order of 20 to 50 viewers.
A histogram may be used to determine the most popular areas of interest 30. By having a statistical sample of typical viewers, and of their multiple areas of interest for each image frame, there is a very high probability that their preferences in terms of areas of interest will encompass the preferences of most of the general audience most of the time.
The shape of the histogram helps determine how many areas of interest need to be identified in each image 10. If there is one clear maximum in the histogram, then only one area of interest 30 needs to be used. If there are multiple peaks, then multiple areas of interest 30 need to be used. In scenes such as a wide shot with no specific areas of interest, the histogram will have no discernible peaks. In this case, image 10 can be encoded without any specific enhanced areas and the bits will be budgeted uniformly over the area of image 10.
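A minimal sketch of this histogram analysis, assuming gaze samples are available as pixel coordinates and using an arbitrary dominance threshold, might look like the following.

```python
import numpy as np

def gaze_peaks(gaze_x, gaze_y, width, height, bins=(16, 9), threshold=2.0):
    """Return histogram cells that clearly dominate the audience's gaze."""
    hist, _, _ = np.histogram2d(gaze_x, gaze_y, bins=bins,
                                range=[[0, width], [0, height]])
    peaks = np.argwhere(hist > threshold * hist.mean())
    # No peaks -> no distinct area of interest; encode the frame uniformly.
    # One peak -> a single area of interest; several peaks -> several areas.
    return peaks
```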
In an alternate embodiment, the areas of interest 30 may be identified in real time during a live transmission of image 10. There may be additional steps required to transmit the area of interest information back to the originating encoding site. Also, since the area of interest for a subsequent frame may be based on the viewers' attention on the currently displayed frame, there may be some lag in the tracking of areas of interest 30 as they move around. This lag can be significant if the round trip transmission of the compressed image data and/or area of interest information is via a satellite link, for example. If the size of the area encoded at the higher Q factor is made large enough, adverse effects of this lag can be somewhat mitigated for many situations.
When the lag time is short, it is possible to present the perception of a high quality image everywhere. Especially when there are a small number of viewers, the image areas receiving the higher quality encoding can dynamically track the area of the viewers' attention. The area outside of the viewers' central area of foveal vision (visual axis which affords acute or high-resolution vision) does not contribute to the perceived resolution of the image. This can be utilized in systems where the image is encoded at full resolution everywhere, but the bandwidth of the playback device does not permit it to be displayed at full resolution.
Dynamic tracking of the area of interest 30 can also be used for presentation purposes where the presenter uses a pointing device or other means to select an area that is of particular interest for instructing or informing an audience.
For purposes of displaying (previewing) image 10 on display device 70, the displayed image at 70 may be a spatially representative version of image 10 to be encoded. For the purposes of displaying image 10 for preview screening at 70, image 10 may optionally be sub-sampled or conventionally compressed using the well-known techniques of the prior art for convenience of screening the preview. A simple video transfer and presentation on a video monitor, for example, will suffice for the preview process.
In a further embodiment of the invention, values may be assigned to each area of interest 30 based on the amount of viewer interest in that area, first values being assigned to areas with higher interest and second values being assigned to areas of lower interest. Each area of interest is encoded at a quality level corresponding to the assigned value, the areas with the first values being encoded at higher quality levels than the areas with the second values.
Encoding 40 of the areas of interest 30 may be performed to provide a gradual transition in quality between an identified area of interest and an unidentified area. In other words, in order to avoid introducing distracting artifacts due to a "seam" in the image where the Q factor changes, the change should be gradual. This concept is already included in many MPEG encoders, for example, by filtering or "smoothing" the block-by-block Q factors.
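A simple way to obtain such a gradual transition, assuming the Q-Map is stored as a per-block array, is to low-pass filter it; the filter width is an assumption.

```python
from scipy.ndimage import gaussian_filter

def smooth_q_map(q_map, sigma_blocks=2.0):
    """Blur the block-by-block Q-Map so quality falls off gradually at seams."""
    return gaussian_filter(q_map, sigma=sigma_blocks)
```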
Encoding 40 may be performed using a block discrete cosine transform (DCT). Using DCT, the quality level for blocks of pixels may be adjusted for the areas of interest through the use of a quantization scale factor encoded for each block of pixels. The quality levels of the unidentified areas may be adjusted downward by: (i) truncating one or more DCT frequency coefficients; (ii) setting to zero one or more DCT frequency coefficients; or (iii) otherwise discarding one or more DCT frequency coefficients, on a block by block basis. In the case of file formats such as MPEG that already have variable Q factor control over the area of the image, the block-by-block Q factor control portion of encoder 40 can be modified to incorporate the area of interest data (e.g., from the Q-Map).
Even though the JPEG file standard does not have provisions for block-by-block Q factor control, a JPEG encoder could be modified to have the ability to do additional truncation or filtering of the high frequency coefficients on a block-by-block basis. Encoder 40 will then be able to achieve high compression ratios for those portions of the image due to its ability to efficiently encode these smaller (or zero) values in its entropy encoding stage. In addition, the encoding may be performed using a wavelet transform. Those skilled in the art will appreciate that other image compression systems may also be suitable for use with the invention.
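A sketch of such block-by-block truncation, using an illustrative diagonal cutoff rather than any standard zig-zag ordering, is shown below.

```python
import numpy as np

def truncate_high_freq(coeffs, keep=3):
    """Zero every 8x8 DCT coefficient whose row+column index exceeds `keep`.

    Applied only to blocks outside the areas of interest, the resulting runs
    of zeros compress to almost nothing in the entropy coding stage.
    """
    mask = np.add.outer(np.arange(8), np.arange(8)) <= keep
    return coeffs * mask
```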
Alternatively, it may be desirable to develop a non-standard format or an extension to a standard format to specifically allow spatially-varying Q factor encoding. Further, the image 10 can be encoded as several layers, each contained in a standard or non-standard file or bit-stream format. The base layer would contain the lowest level of detail. The additional enhancement layer(s) would contain difference information from the base layer to further refine it in the areas of interest. The areas not of interest in the enhancement layer would be completely blank, and would compress at a very high ratio. For example, the base layer could be sampled at 2k while the enhanced layer is at a higher resolution of 4k.
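A same-resolution sketch of the layering idea follows; the 2k/4k resolution difference mentioned above is ignored here for simplicity, and the array shapes are assumed to match.

```python
import numpy as np

def split_layers(image, base_image, interest_mask):
    """Base layer plus an enhancement layer that is nonzero only where
    interest_mask is 1; the blank regions compress at a very high ratio."""
    enhancement = (image.astype(float) - base_image.astype(float)) * interest_mask
    return base_image, enhancement

def recombine(base_layer, enhancement):
    return base_layer.astype(float) + enhancement
```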
In an alternate embodiment of the invention as shown in Figure 4, the quality level for the unidentified areas may be adjusted downward by pre-filtering the image using a spatial frequency filter 55 prior to encoding. In this embodiment, image 10 is previewed and areas of interest are identified as discussed above in connection with Figure 2. Q-Map 50 is created based on the identified areas of interest. Q-Map 50 is used to control the spatial frequency filter 55 (e.g., a variable low-pass spatial frequency filter). Attenuation or spatial frequency cut-off, or both, may be controlled by Q-Map 50. Higher Q factors would raise the gain of the higher frequency components or raise the spatial frequency cutoff to higher spatial frequencies, preserving more detail in the image. Lower Q factor portions of Q-Map 50 would cause filter 55 to attenuate the higher spatial frequencies more, and the details in those areas would appear blurry.
The output of spatial frequency filter 55 is input into a standard encoder 40' (e.g., a standard MPEG, JPEG, or other lossy compression encoder). Due to the way in which such image compression encoders work, the portions of the image that have been pre-filtered by filter 55 will result in fewer output bits in output compressed image data 80. Compressed data 80 can be transmitted and/or stored as discussed in connection with Figure 2.
Thus, when an unmodified encoder 40' is to be used, image data 10 can be pre-filtered at 55 to selectively remove detail from the unidentified areas. The filtered areas will contain less (or perhaps zero) information in the higher frequencies. Standard encoder 40' will be able to achieve high compression ratios for those portions of the image due to its ability to efficiently encode these smaller (or zero) values in its entropy encoding stage. Therefore, the actual encoding of the image data can remain in an industry standard format such as JPEG or MPEG. As such, the resulting file can be decoded or viewed using a standard (unmodified) decoder or viewer for that file format.
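One plausible realization of such a Q-Map controlled pre-filter, assuming the Q-Map has been expanded to a per-pixel weight in [0, 1], is to blend the original frame with a blurred copy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prefilter(image, q_weight, sigma=3.0):
    """q_weight of 1 keeps full detail; q_weight of 0 is fully low-pass filtered."""
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=sigma)
    return q_weight * img + (1.0 - q_weight) * blurred
```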
In a further embodiment, identified areas of interest 30 are sampled at a higher spatial resolution than the unidentified areas. Identified areas of interest 30 may then be encoded in one or more additional data streams. The additional data stream(s) may be encoded 40 at a first quality level, and a data stream which contains the unidentified areas may be encoded at a second quality level. In addition, the additional data stream(s) may be encoded using a first method, and a data stream containing the unidentified areas may be encoded using a second method.
The invention may be implemented so that areas of interest 30 can be identified while image 10 is being transmitted from one location to another. For example, instead of previewing the image and recording the areas of interest, the image may be viewed "live" and the areas of interest are encoded while the image is being transmitted. The viewers could be located at the transmitting location or the destination location provided there is a return path for the area of interest information. Alternatively, the areas of interest may be identified while the image 10 is partially displayed, e.g., at low resolution, such as progressive JPEG images viewed on the world wide web. For example, areas of interest can be measured while viewers view the low resolution image, and these areas can be encoded and transmitted with a higher quality level.
Further, the quality level of the unidentified areas of the image may be reduced for security purposes.
The invention can be implemented to maintain a constant bit rate or a constant compression ratio. In a further embodiment of the invention, identified areas of interest 30 are transmitted according to level of interest, so that areas with a higher level of interest are transmitted first with successively lower interest level areas transmitted successively thereafter. Image 10 can then be built up as it is received starting with the areas of highest interest. The invention can also be used to record statistical data regarding identified areas of interest 30. Identified areas of interest 30 from multiple images 10 may be statistically recorded. Images 10 can be from multiple sources.
The invention can be implemented such that the quality levels of certain image areas are enhanced to artificially create areas of interest. The enhanced areas may consist of image areas containing a product, a name of a product, or any other portion of the image which would be desirable to enhance.
The increase in compression ratio is directly related to the portion of the image that is encoded at the lower Q factor (non areas of interest), and how much lower that Q factor is.
Taken to an extreme, the method described herein would adversely affect image quality as viewers get distracted from the areas of interest by compression artifacts appearing and moving around in the unidentified areas of the image. Good performance is generally achieved when the Q factor for the non-enhanced portion of the image is high enough to not have any obvious artifacts (such as DCT blocks showing, loss of grain, or drastic color banding). The enhanced portion is encoded with the remaining bit budget.
As an example, typical images viewed in a wide-screen movie presentation may require areas of interest covering 20 to 40% of the image area. If these areas are encoded at a Q factor (bit rate) sufficient to meet the desired quality level and the remainder is encoded at half the bit rate, a 30 to 40% savings in data size is achieved compared to encoding the entire image at the higher Q factor.
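That savings figure follows directly from the bit budget: if a fraction f of the frame keeps the full rate and the remainder is coded at half the rate, the total is f + (1 - f)/2 of the original, as the small check below confirms.

```python
for f in (0.2, 0.4):
    total = f + (1 - f) / 2
    print(f"area of interest {f:.0%}: {1 - total:.0%} saved")
# 20% of the frame enhanced -> 40% saved; 40% enhanced -> 30% saved
```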
The size of the areas of interest should be large enough to encompass the viewer's fovea (central high-resolution portion of the eye). Combining the angular coverage of the human fovea with the anticipated maximum viewing distance yields the diameter of the circles of the enhancement area required.
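As a back-of-envelope sketch of that sizing rule, assuming a foveal field of roughly two degrees (a commonly quoted figure, not a value stated here):

```python
import math

def enhancement_diameter(viewing_distance, fovea_deg=2.0):
    """Approximate on-screen diameter subtended by the fovea at a given distance."""
    return 2.0 * viewing_distance * math.tan(math.radians(fovea_deg) / 2.0)

# Example: at a 20 m maximum viewing distance, roughly 0.7 m on the screen.
```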
Figures 2-4 show the areas of interest on the Q-Map 50 as circular. The areas of interest 30 may alternatively be non-circular. For example, the areas may be made elliptical with the long axis along the direction of travel of each area of interest as it is tracked from frame to frame, which helps compensate for lags in a live broadcast.
Additionally, the shape of the areas of interest 30 may be expanded to the extent of objects detected in the image or to the extent of similar texture so that the seams in the Q-Map fall on seams in the image.
When multiple areas of interest 30 are close to each other, the areas of enhancement may be combined into one area with perhaps a slightly larger size.
It will now be appreciated that the present invention provides an improved method and system for digital image compression, wherein a plurality of identified areas of interest are encoded at a high quality level and unidentified areas are encoded at a lower quality level, while maintaining perceived image quality. Although the invention has been described in connection with preferred embodiments thereof, those skilled in the art will appreciate that numerous adaptations and modifications may be made thereto without departing from the spirit and scope of the invention, as set forth in the following claims.

Claims (64)

What is claimed is:
1. A method of digital image compression, comprising: identifying a plurality of areas of interest in the digital image; encoding the identified areas of interest at a first quality level; and encoding unidentified areas of the image at a second and lower quality level than the identified areas.
2. A method in accordance with claim 1, further comprising: creating a quantization map based on the identified areas of interest, wherein: the encoding is performed based on the quantization map.
3. A method in accordance with claim 1, wherein the digital image is a single still frame.
4. A method in accordance with claim 1, wherein the digital image is one of a sequence of images in a digital motion picture.
5. A method in accordance with claim 4, wherein: areas of interest are identified only for selected images in the sequence of images; and areas of interest for a remainder of images in the sequence are extrapolated from the identified areas of interest.
6. A method in accordance with claim 1, wherein the areas of interest are identified by tracking the eye gaze point of one or more viewers as the one or more viewers view the image.
7. A method in accordance with claim 1, wherein the areas of interest are identified by one or more viewers using a pointing device to designate the areas of interest on a display of the image.
8. A method in accordance with claim 1, wherein the areas of interest are identified by a group of viewers.
9. A method in accordance with claim 8, wherein the group of viewers comprises a representative audience made up of people likely to view the image.
10. A method in accordance with claim 8, wherein a histogram is used to determine the most popular areas of interest.
11. A method in accordance with claim 1, wherein the areas of interest are identified in real time during a live transmission of the image.
12. A method in accordance with claim 1, wherein the digital image is a spatially representative version of the image to be encoded.
13. A method in accordance with claim 1, further comprising: assigning values to each area of interest based on the amount of interest in that area, first values being assigned to areas with higher interest and second values being assigned to areas of lower interest; and encoding each area of interest at a quality level corresponding to the assigned value, said areas with said first values being encoded at higher quality levels than said areas with said second values.
14. A method in accordance with claim 1, wherein said encoding is performed to provide a gradual transition in quality between an identified area of interest and an unidentified area.
15. A method in accordance with claim 1, wherein the encoding is performed using a block discrete cosine transform (DCT).
16. A method in accordance with claim 15, wherein the quality level for blocks of pixels is adjusted for the areas of interest through the use of a quantization scale factor encoded for each block of pixels.
17. A method in accordance with claim 15, wherein the quality levels of the unidentified areas are adjusted downward by one of: (i) truncating one or more DCT frequency coefficients; (ii) setting to zero one or more DCT frequency coefficients; or (iii) otherwise discarding one or more DCT frequency coefficients, on a block by block basis.
18. A method in accordance with claim 1, wherein the encoding is performed using a wavelet transform.
19. A method in accordance with claim 1, wherein the quality level for the unidentified areas is adjusted downward by pre-filtering the image with a spatially varying spatial frequency filter prior to encoding.
20. A method in accordance with claim 1, further comprising: sampling the identified areas of interest at a higher spatial resolution than the unidentified areas; and encoding the identified areas of interest in one or more additional data streams.
21. A method in accordance with claim 20, wherein: the additional data stream(s) are encoded at a first quality level; and a data stream which contains the unidentified areas is encoded at a second quality level.
22. A method in accordance with claim 20, wherein: the additional data stream(s) are encoded using a first method; and a data stream containing the unidentified areas is encoded using a second method.
23. A method in accordance with claim 1, wherein the areas of interest are identified while the image is in transit.
24. A method in accordance with claim 1, wherein the areas of interest are identified while the image is partially displayed.
25. A method in accordance with claim 1, wherein the quality level of the unidentified areas of the image is reduced for security purposes.
26. A method in accordance with claim 1, wherein one of a constant bit rate or a constant compression ratio is maintained.
27. A method in accordance with claim 1, wherein: the identified areas of interest are transmitted according to level of interest, so that areas with a higher level of interest are transmitted first with successively lower interest level areas transmitted successively thereafter; and the image is built up as it is received starting with the areas of highest interest.
28. A method in accordance with claim 1, wherein identified areas of interest from multiple images are statistically recorded.
29. A method in accordance with claim 28, wherein the multiple images are from multiple sources.
30. A method in accordance with claim 1, wherein the quality levels of certain image areas are enhanced to create areas of interest.
31. A method in accordance with claim 30, wherein the enhanced areas are image areas containing at least one of a product and a name of a product.
32. A system for digital image compression, comprising: a digital image display; means for identifying a plurality of areas of interest in a digital image provided by said display; and an encoder, wherein the encoder encodes the identified areas of interest at a first quality level and encodes unidentified areas of the image at a second and lower quality level than the identified areas.
33. A system in accordance with claim 32, further comprising a quantization map created based on said identified areas of interest, wherein: the encoding is performed based on the quantization map.
34. A system in accordance with claim 32, wherein the digital image is a single still frame.
35. A system in accordance with claim 32, wherein the digital image is one of a sequence of images in a digital motion picture.
36. A system in accordance with claim 35, wherein: areas of interest are identified only for selected images in the sequence of images; and areas of interest for a remainder of images in the sequence are extrapolated from the identified areas of interest.
37. A system in accordance with claim 32, wherein the means for identifying areas of interest comprises one or more eye tracking mechanisms for tracking the eye gaze point of one or more viewers as the one or more viewers view the image.
38. A system in accordance with claim 32, wherein the means for identifying areas of interest comprises a pointing device for one or more viewers to designate the areas of interest on the image display.
39. A system in accordance with claim 32, wherein the areas of interest are identified by a group of viewers.
40. A system in accordance with claim 39, wherein the group of viewers comprises a representative audience made up of people likely to view the image.
41. A system in accordance with claim 39, wherein a histogram is used to determine the most popular areas of interest.
42. A system in accordance with claim 32, wherein the areas of interest are identified in real time during a live transmission of the image.
43. A system in accordance with claim 32, wherein the digital image is a spatially representative version of the image to be encoded.
44. A system in accordance with claim 32, wherein: values are assigned to each area of interest based on the amount of interest in that area, first values being assigned to areas with higher interest and second values being assigned to areas of lower interest; and each area of interest is encoded at a quality level corresponding to the assigned value, said areas with said first values being encoded at higher quality levels than said areas with said second values.
45. A system in accordance with claim 32, wherein said encoding is performed to provide a gradual transition in quality between an identified area of interest and an unidentified area.
46. A system in accordance with claim 32, wherein the encoding is performed using a block discrete cosine transform (DCT).
47. A system in accordance with claim 46, wherein the quality level for blocks of pixels is adjusted for the areas of interest through the use of a quantization scale factor encoded for each block of pixels.
48. A system in accordance with claim 46, wherein the quality levels of the unidentified areas are adjusted downward by one of: (i) truncating one or more DCT frequency coefficients; (ii) setting to zero one or more DCT frequency coefficients; or (iii) otherwise discarding one or more DCT frequency coefficients, on a block by block basis.
49. A system in accordance with claim 32, wherein the encoding is performed using a wavelet transform.
50. A system in accordance with claim 32, further comprising: a spatially varying spatial frequency filter, wherein the quality level for the unidentified areas is adjusted downward by pre-filtering the image using the spatial frequency filter prior to encoding.
51. A system in accordance with claim 32, wherein: the identified areas of interest are sampled at a higher spatial resolution than the unidentified areas; and the identified areas of interest are encoded in one or more additional data streams.
52. A system in accordance with claim 51, wherein: the additional data stream(s) are encoded at a first quality level; and a data stream which contains the unidentified areas is encoded at a second quality level.
53. A system in accordance with claim 51, wherein: the additional data stream(s) are encoded using a first method; and a data stream containing the unidentified areas is encoded using a second method.
54. A system in accordance with claim 32, wherein the areas of interest are identified while the image is in transit.
55. A system in accordance with claim 32, wherein the areas of interest are identified while the image is partially displayed.
56. A system in accordance with claim 32, wherein the quality level of the unidentified areas of the image is reduced for security purposes.
57. A system in accordance with claim 32, wherein one of a constant bit rate or a constant compression ratio is maintained.
58. A system in accordance with claim 32, wherein: the identified areas of interest are transmitted according to level of interest, so that areas with a higher level of interest are transmitted first with successively lower interest level areas transmitted successively thereafter; and the image is built up as it is received starting with the areas of highest interest.
59. A system in accordance with claim 32, wherein identified areas of interest from multiple images are statistically recorded.
60. A system in accordance with claim 59, wherein the multiple images are from multiple sources.
61. A system in accordance with claim 32, wherein the quality levels of certain image areas are enhanced to create areas of interest.
62. A system in accordance with claim 61, wherein the enhanced areas are image areas containing at least one of a product and a name of a product.
63. A method of digital image compression, comprising: identifying a plurality of areas of interest in the digital image by tracking the eye gaze point of one or more viewers as the one or more viewers view the image; encoding the identified areas of interest at a first quality level; and encoding unidentified areas of the image at a second and lower quality level than the identified areas.
64. A system for digital image compression, comprising: a digital image display device for displaying a digital image; one or more eye tracking mechanisms for tracking the eye gaze of one or more viewers as the one or more viewers view the digital image in order to identify a plurality of areas of interest in the digital image; and an encoder, wherein the encoder encodes the identified areas of interest at a first quality level and encodes unidentified areas of the image at a second and lower quality level than the identified areas.
AU2002309519A 2001-03-29 2002-03-25 Digital image compression Ceased AU2002309519B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/821,104 2001-03-29
US09/821,104 US7027655B2 (en) 2001-03-29 2001-03-29 Digital image compression with spatially varying quality levels determined by identifying areas of interest
PCT/US2002/009472 WO2002080568A2 (en) 2001-03-29 2002-03-25 Digital image compression

Publications (2)

Publication Number Publication Date
AU2002309519A1 true AU2002309519A1 (en) 2003-04-03
AU2002309519B2 AU2002309519B2 (en) 2006-12-14

Family

ID=25232521

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2002309519A Ceased AU2002309519B2 (en) 2001-03-29 2002-03-25 Digital image compression

Country Status (4)

Country Link
US (4) US7027655B2 (en)
EP (1) EP1374597A2 (en)
AU (1) AU2002309519B2 (en)
WO (1) WO2002080568A2 (en)

Families Citing this family (106)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002300581A (en) * 2001-03-29 2002-10-11 Matsushita Electric Ind Co Ltd Image-coding apparatus and image-coding program
US7027655B2 (en) 2001-03-29 2006-04-11 Electronics For Imaging, Inc. Digital image compression with spatially varying quality levels determined by identifying areas of interest
FR2828054B1 (en) * 2001-07-27 2003-11-28 Thomson Licensing Sa METHOD AND DEVICE FOR CODING A SCENE
RU2220514C2 (en) * 2002-01-25 2003-12-27 Андрейко Александр Иванович Method for interactive television using central vision properties of eyes of individual users or groups thereof that protects information against unauthorized access, distribution, and use
US7327505B2 (en) * 2002-02-19 2008-02-05 Eastman Kodak Company Method for providing affective information in an imaging system
US7436890B2 (en) * 2002-06-05 2008-10-14 Kddi R&D Laboratories, Inc. Quantization control system for video coding
US7302096B2 (en) 2002-10-17 2007-11-27 Seiko Epson Corporation Method and apparatus for low depth of field image segmentation
US7046924B2 (en) 2002-11-25 2006-05-16 Eastman Kodak Company Method and computer program product for determining an area of importance in an image using eye monitoring information
US7206022B2 (en) 2002-11-25 2007-04-17 Eastman Kodak Company Camera system with eye monitoring
US7460720B2 (en) * 2003-03-21 2008-12-02 Canon Kabushiki Kaisha Method and device for defining quality modes for a digital image signal
US7762665B2 (en) * 2003-03-21 2010-07-27 Queen's University At Kingston Method and apparatus for communication between humans and devices
US20050018911A1 (en) * 2003-07-24 2005-01-27 Eastman Kodak Company Foveated video coding system and method
JP4279083B2 (en) * 2003-08-18 2009-06-17 富士フイルム株式会社 Image processing method and apparatus, and image processing program
US9274598B2 (en) * 2003-08-25 2016-03-01 International Business Machines Corporation System and method for selecting and activating a target object using a combination of eye gaze and key presses
US8014611B2 (en) * 2004-02-23 2011-09-06 Toa Corporation Image compression method, image compression device, image transmission system, data compression pre-processing apparatus, and computer program
JP4542447B2 (en) * 2005-02-18 2010-09-15 株式会社日立製作所 Image encoding / decoding device, encoding / decoding program, and encoding / decoding method
WO2005107264A1 (en) * 2004-04-30 2005-11-10 British Broadcasting Corporation Media content and enhancement data delivery
DK2202609T3 (en) 2004-06-18 2016-04-25 Tobii Ab Eye control of computer equipment
WO2006044476A2 (en) 2004-10-12 2006-04-27 Robert Vernon Vanman Method of and system for mobile surveillance and event recording
US7492821B2 (en) * 2005-02-08 2009-02-17 International Business Machines Corporation System and method for selective image capture, transmission and reconstruction
US8224102B2 (en) * 2005-04-08 2012-07-17 Agency For Science, Technology And Research Method for encoding a picture, computer program product and encoder
US8982944B2 (en) * 2005-10-12 2015-03-17 Enforcement Video, Llc Method and system for categorized event recording of images in multiple resolution levels
US8238695B1 (en) * 2005-12-15 2012-08-07 Grandeye, Ltd. Data reduction techniques for processing wide-angle video
GB2435140B (en) * 2006-02-13 2011-04-06 Snell & Wilcox Ltd Sport action coding
GB0611969D0 (en) * 2006-06-16 2006-07-26 Robert Gordon The University Video content prioritisation
US8169495B2 (en) * 2006-12-01 2012-05-01 Broadcom Corporation Method and apparatus for dynamic panoramic capturing
US7929793B2 (en) * 2007-03-19 2011-04-19 General Electric Company Registration and compression of dynamic images
KR20080102668A (en) * 2007-05-21 2008-11-26 삼성전자주식회사 Method for transmitting and receiving video call, apparatus of video call using the same
US8599368B1 (en) 2008-01-29 2013-12-03 Enforcement Video, Llc Laser-based speed determination device for use in a moving vehicle
US20090046157A1 (en) * 2007-08-13 2009-02-19 Andrew Cilia Combined wide-angle/zoom camera for license plate identification
US8045799B2 (en) * 2007-11-15 2011-10-25 Sony Ericsson Mobile Communications Ab System and method for generating a photograph with variable image quality
WO2009076595A2 (en) * 2007-12-12 2009-06-18 Cisco Technology, Inc. Video processing with tiered interdependencies of pictures
WO2009097449A1 (en) 2008-01-29 2009-08-06 Enforcement Video, Llc Omnidirectional camera for use in police car event recording
US20090213218A1 (en) 2008-02-15 2009-08-27 Andrew Cilia System and method for multi-resolution storage of images
US8780988B2 (en) * 2008-02-28 2014-07-15 Vixs Systems, Inc. Hierarchical video analysis-based real-time perceptual video coding
US7850306B2 (en) 2008-08-28 2010-12-14 Nokia Corporation Visual cognition aware display and visual data transmission architecture
US8270476B2 (en) * 2008-12-31 2012-09-18 Advanced Micro Devices, Inc. Face detection system for video encoders
US8510462B2 (en) * 2009-03-31 2013-08-13 Canon Kabushiki Kaisha Network streaming of a video media from a media server to a media client
US8745186B2 (en) * 2009-03-31 2014-06-03 Canon Kabushiki Kaisha Network streaming of a video media from a media server to a media client
US20100251293A1 (en) * 2009-03-31 2010-09-30 Canon Kabushiki Kaisha Network streaming of a video media from a media server to a media client
US8416715B2 (en) * 2009-06-15 2013-04-09 Microsoft Corporation Interest determination for auditory enhancement
DE102009046362A1 (en) * 2009-11-03 2011-05-05 Tesa Se Pressure-sensitive adhesive made of a crosslinkable polyolefin and an adhesive resin
US8736680B1 (en) 2010-05-18 2014-05-27 Enforcement Video, Llc Method and system for split-screen video display
US8717289B2 (en) * 2010-06-22 2014-05-06 Hsni Llc System and method for integrating an electronic pointing device into digital image data
US8493390B2 (en) * 2010-12-08 2013-07-23 Sony Computer Entertainment America, Inc. Adaptive displays using gaze tracking
JP5492139B2 (en) * 2011-04-27 2014-05-14 Fujifilm Corporation Image compression apparatus, image expansion apparatus, method, and program
US20130091207A1 (en) * 2011-10-08 2013-04-11 Broadcom Corporation Advanced content hosting
US8806529B2 (en) 2012-04-06 2014-08-12 Time Warner Cable Enterprises Llc Variability in available levels of quality of encoded content
WO2014008541A1 (en) * 2012-07-09 2014-01-16 Smart Services Crc Pty Limited Video processing method and system
US20140111431A1 (en) * 2012-10-18 2014-04-24 Bradley Horowitz Optimizing photos
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
US9571726B2 (en) 2012-12-20 2017-02-14 Google Inc. Generating attention information from photos
US9116926B2 (en) 2012-12-20 2015-08-25 Google Inc. Sharing photos
US9898081B2 (en) * 2013-03-04 2018-02-20 Tobii Ab Gaze and saccade based graphical manipulation
US10895908B2 (en) 2013-03-04 2021-01-19 Tobii Ab Targeting saccade landing prediction using visual history
US10082870B2 (en) * 2013-03-04 2018-09-25 Tobii Ab Gaze and saccade based graphical manipulation
US9665171B1 (en) * 2013-03-04 2017-05-30 Tobii Ab Gaze and saccade based graphical manipulation
US11714487B2 (en) 2013-03-04 2023-08-01 Tobii Ab Gaze and smooth pursuit based continuous foveal adjustment
US9912930B2 (en) * 2013-03-11 2018-03-06 Sony Corporation Processing video signals based on user focus on a particular portion of a video display
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics
ES2633016T3 (en) 2013-08-23 2017-09-18 Tobii Ab Systems and methods for providing audio to a user according to gaze input
US9143880B2 (en) 2013-08-23 2015-09-22 Tobii Ab Systems and methods for providing audio to a user based on gaze input
US20150063461A1 (en) * 2013-08-27 2015-03-05 Magnum Semiconductor, Inc. Methods and apparatuses for adjusting macroblock quantization parameters to improve visual quality for lossy video encoding
US10356405B2 (en) 2013-11-04 2019-07-16 Integrated Device Technology, Inc. Methods and apparatuses for multi-pass adaptive quantization
US9628870B2 (en) * 2014-03-18 2017-04-18 Vixs Systems, Inc. Video system with customized tiling and methods for use therewith
US9530450B2 (en) * 2014-03-18 2016-12-27 Vixs Systems, Inc. Video system with fovea tracking and methods for use therewith
CN106464959B (en) * 2014-06-10 2019-07-26 Socionext Inc. Semiconductor integrated circuit, display device including the semiconductor integrated circuit, and control method
US10090002B2 (en) 2014-12-11 2018-10-02 International Business Machines Corporation Performing cognitive operations based on an aggregate user model of personality traits of users
US10013890B2 (en) * 2014-12-11 2018-07-03 International Business Machines Corporation Determining relevant feedback based on alignment of feedback with performance objectives
US10282409B2 (en) 2014-12-11 2019-05-07 International Business Machines Corporation Performance modification based on aggregation of audience traits and natural language feedback
US9491490B1 (en) 2015-06-12 2016-11-08 Intel Corporation Facilitating environment-based lossy compression of data for efficient rendering of contents at computing devices
EP3113159A1 (en) 2015-06-30 2017-01-04 Thomson Licensing Method and device for processing a part of an immersive video content according to the position of reference parts
US10096130B2 (en) * 2015-09-22 2018-10-09 Facebook, Inc. Systems and methods for content streaming
US9858706B2 (en) * 2015-09-22 2018-01-02 Facebook, Inc. Systems and methods for content streaming
WO2017085708A1 (en) * 2015-11-17 2017-05-26 Beamr Imaging Ltd. Method of controlling a quality measure and system thereof
US20170171271A1 (en) * 2015-12-09 2017-06-15 International Business Machines Corporation Video streaming
US10110935B2 (en) * 2016-01-29 2018-10-23 Cable Television Laboratories, Inc Systems and methods for video delivery based upon saccadic eye motion
US11284109B2 (en) 2016-01-29 2022-03-22 Cable Television Laboratories, Inc. Visual coding for sensitivities to light, color and spatial resolution in human visual system
US10341605B1 (en) 2016-04-07 2019-07-02 WatchGuard, Inc. Systems and methods for multiple-resolution storage of media streams
US10341650B2 (en) * 2016-04-15 2019-07-02 Ati Technologies Ulc Efficient streaming of virtual reality content
US10657674B2 (en) 2016-06-17 2020-05-19 Immersive Robotics Pty Ltd. Image compression method and apparatus
AU2017285700B2 (en) * 2016-06-17 2019-07-25 Immersive Robotics Pty Ltd Image compression method and apparatus
GB2551526A (en) * 2016-06-21 2017-12-27 Nokia Technologies Oy Image encoding method and technical equipment for the same
GB2556017A (en) * 2016-06-21 2018-05-23 Nokia Technologies Oy Image compression method and technical equipment for the same
US10979721B2 (en) 2016-11-17 2021-04-13 Dolby Laboratories Licensing Corporation Predicting and verifying regions of interest selections
EP3548987A4 (en) * 2016-12-01 2020-05-20 Shanghai Yunyinggu Technology Co., Ltd. Zone-based display data processing and transmission
CN106791846B (en) * 2016-12-09 2019-12-13 Zhejiang Uniview Technologies Co., Ltd. Method and device for adjusting image coding quality factor
US10319573B2 (en) 2017-01-26 2019-06-11 Protein Metrics Inc. Methods and apparatuses for determining the intact mass of large molecules from mass spectrographic data
US10306011B2 (en) 2017-01-31 2019-05-28 International Business Machines Corporation Dynamic modification of image resolution
AU2018218182B2 (en) 2017-02-08 2022-12-15 Immersive Robotics Pty Ltd Antenna control for mobile device communication
FR3067199B1 (en) * 2017-06-06 2020-05-22 Sagemcom Broadband Sas Method for transmitting an immersive video
US11626274B2 (en) 2017-08-01 2023-04-11 Protein Metrics, Llc Interactive analysis of mass spectrometry data including peak selection and dynamic labeling
US10510521B2 (en) 2017-09-29 2019-12-17 Protein Metrics Inc. Interactive analysis of mass spectrometry data
AU2018373495B2 (en) 2017-11-21 2023-01-05 Immersive Robotics Pty Ltd Frequency component selection for image compression
WO2019100108A1 (en) 2017-11-21 2019-05-31 Immersive Robotics Pty Ltd Image compression for digital reality
GB2568690A (en) * 2017-11-23 2019-05-29 Nokia Technologies Oy Method for adaptive displaying of video content
US10803618B2 (en) * 2018-06-28 2020-10-13 Intel Corporation Multiple subject attention tracking
US11640901B2 (en) 2018-09-05 2023-05-02 Protein Metrics, Llc Methods and apparatuses for deconvolution of mass spectrometry data
CN109816739B (en) * 2018-12-14 2024-02-20 上海昇晔网络科技有限公司 Picture compression method, device, computer equipment and computer readable storage medium
CN110147892B (en) * 2019-02-20 2021-05-25 University of Electronic Science and Technology of China Human movement pattern inference model, training method, and inference method based on variational trajectory context awareness
US11346844B2 (en) 2019-04-26 2022-05-31 Protein Metrics Inc. Intact mass reconstruction from peptide level data and facilitated comparison with experimental intact observation
SG10201913146VA (en) * 2019-12-24 2020-11-27 Sensetime Int Pte Ltd Method and apparatus for filtering images, and electronic device
JP2023544647A 2020-08-31 2023-10-24 Protein Metrics LLC Data compression for multidimensional time series data
US20220084187A1 (en) * 2020-09-14 2022-03-17 City University Of Hong Kong Method, device and computer readable medium for intrinsic popularity evaluation and content compression based thereon
US11240570B1 (en) * 2020-10-08 2022-02-01 International Business Machines Corporation Object-based video loading
US20220141531A1 (en) * 2020-10-30 2022-05-05 Rovi Guides, Inc. Resource-saving systems and methods

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3507988A (en) 1966-09-15 1970-04-21 Cornell Aeronautical Laboratory Inc Narrow-band, single-observer, television apparatus
US4028725A (en) 1976-04-21 1977-06-07 Grumman Aerospace Corporation High-resolution vision system
US4348186A (en) 1979-12-17 1982-09-07 The United States Of America As Represented By The Secretary Of The Navy Pilot helmet mounted CIG display with eye coupled area of interest
US4439157A (en) 1982-05-03 1984-03-27 The United States Of America As Represented By The Secretary Of The Navy Helmet mounted display projector
US4568159A (en) 1982-11-26 1986-02-04 The United States Of America As Represented By The Secretary Of The Navy CCD Head and eye position indicator
ATE73311T1 (en) 1986-04-04 1992-03-15 Applied Science Group Inc Method and device for generating a representation of viewing-time distribution while people watch television advertising
US4755045A (en) * 1986-04-04 1988-07-05 Applied Science Group, Inc. Method and system for generating a synchronous display of a visual presentation and the looking response of many viewers
US4836670A (en) 1987-08-19 1989-06-06 Center For Innovative Technology Eye movement detector
US4852988A (en) 1988-09-12 1989-08-01 Applied Science Laboratories Visor and camera providing a parallax-free field-of-view image for a head-mounted eye movement measurement system
US5426513A (en) 1989-06-16 1995-06-20 Harris Corporation Prioritized image transmission system and method
US5103306A (en) 1990-03-28 1992-04-07 Transitions Research Corporation Digital image compression employing a resolution gradient
US5333212A (en) * 1991-03-04 1994-07-26 Storm Technology Image compression technique with regionally selective compression ratio
US5592226A (en) 1994-01-26 1997-01-07 Btg Usa Inc. Method and apparatus for video data compression using temporally adaptive motion interpolation
JPH08331561A (en) 1995-03-30 1996-12-13 Canon Inc Image processing unit
US5896176A (en) * 1995-10-27 1999-04-20 Texas Instruments Incorporated Content-based video compression
US5764803A (en) * 1996-04-03 1998-06-09 Lucent Technologies Inc. Motion-adaptive modelling of scene content for very low bit rate model-assisted coding of video sequences
US6252989B1 (en) * 1997-01-07 2001-06-26 Board Of The Regents, The University Of Texas System Foveated image coding system and method for image bandwidth reduction
US6144772A (en) * 1998-01-29 2000-11-07 Canon Kabushiki Kaisha Variable compression encoding of digitized images
US6389169B1 (en) * 1998-06-08 2002-05-14 Lawrence W. Stark Intelligent systems and methods for processing image data based upon anticipated regions of visual interest
US6496607B1 (en) * 1998-06-26 2002-12-17 Sarnoff Corporation Method and apparatus for region-based allocation of processing resources and control of input image formation
US6256423B1 (en) * 1998-09-18 2001-07-03 Sarnoff Corporation Intra-frame quantizer selection for video compression
US6476873B1 (en) * 1998-10-23 2002-11-05 Vtel Corporation Enhancement of a selectable region of video
US6356664B1 (en) * 1999-02-24 2002-03-12 International Business Machines Corporation Selective reduction of video data using variable sampling rates based on importance within the image
US7027655B2 (en) 2001-03-29 2006-04-11 Electronics For Imaging, Inc. Digital image compression with spatially varying quality levels determined by identifying areas of interest

Similar Documents

Publication Publication Date Title
AU2002309519B2 (en) Digital image compression
AU2002309519A1 (en) Digital image compression
US7075553B2 (en) Method and system for displaying an image
US6721952B1 (en) Method and system for encoding movies, panoramas and large images for on-line interactive viewing and gazing
US7110605B2 (en) Digital intermediate (DI) processing and distribution with scalable compression in the post-production of motion pictures
US7242850B2 (en) Frame-interpolated variable-rate motion imaging system
US9554132B2 (en) Video compression implementing resolution tradeoffs and optimization
CN113170234B (en) Adaptive encoding and streaming method, system and storage medium for multi-directional video
US20180077385A1 (en) Data, multimedia & video transmission updating system
US20110051808A1 (en) Method and system for transcoding regions of interests in video surveillance
US20110228846A1 (en) Region of Interest Tracking and Integration Into a Video Codec
CN104335243B (en) Method and device for processing a panorama
US7724964B2 (en) Digital intermediate (DI) processing and distribution with scalable compression in the post-production of motion pictures
Reeves et al. Adaptive foveation of MPEG video
JP5594842B2 (en) Video distribution device
Guediri et al. An affordable solution to real-time video compression
Freedman Video Compression
Quast et al. Spatial scalable region of interest transcoding of JPEG2000 for video surveillance
GB2348074A (en) Encoding movies, panoramas and large images for on-line interactive viewing and gazing
Van Wallendael et al. Motion JPEG2000 interactive Region-Of-Interest coding on mobile devices