US20180024703A1 - Techniques to modify content and view content on a mobile device - Google Patents
- Publication number
- US20180024703A1 (U.S. application Ser. No. 15/723,040)
- Authority
- US
- United States
- Prior art keywords
- metadata
- cells
- cell
- graphical content
- polygonal
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
Definitions
- a computer-based tool allows a designer to edit content so that the content is more conveniently and intuitively consumed on small screens (e.g., screens of mobile devices).
- an application for providing metadata to pre-existing media content where the application allows a designer (or other user) to indicate salient visual features for portions of visual content.
- the application uses vision algorithms to automatically generate other kinds of metadata based on the positions of the salient features and other characteristics of the graphical content. The designer would then approve or modify the generated metadata. Additionally or alternatively, any of the metadata can be manually entered by the designer using the computer-based tool.
- various embodiments are directed to systems, methods, and computer program products for viewing graphic and/or textual media on small screens.
- a viewing application receives graphical content and metadata (such as that produced using the tool described above) and renders the graphical content according to the metadata. For instance, for each portion that has a salient feature (e.g., each cell on a page of a comic book), there is a pan, a rotation, and a magnification associated therewith.
- When a selected portion is the focus, other portions that appear on the screen can be modified to increase focus on the selected portion (e.g., by adjusting the opacity of the portions that are not the focus).
- A first screen may show many portions from which a user can select a portion to view.
- While viewing a particular portion, the user can move to a previous or subsequent portion by, e.g., a finger swipe on the display screen.
- a camera view on the first portion then moves to the next selected portion as the next selected portion is displayed according to the metadata.
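As a concrete illustration of the per-cell pan, rotation, and magnification, the viewpoint metadata might be modeled as below. This is a minimal sketch, not the patent's implementation; the `Viewpoint` class, its field names, and the `page_to_screen` mapping are assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class Viewpoint:
    """Hypothetical per-cell metadata: where the camera looks on the page."""
    center_x: float       # view center, in page coordinates
    center_y: float
    magnification: float  # zoom factor applied to the page
    rotation_deg: float   # rotation so the cell sits upright on screen

def page_to_screen(vp: Viewpoint, px: float, py: float,
                   screen_w: int, screen_h: int) -> tuple:
    """Map a page-space point to screen space under a viewpoint:
    translate to the view center, rotate, scale, then re-center."""
    dx, dy = px - vp.center_x, py - vp.center_y
    theta = math.radians(vp.rotation_deg)
    rx = dx * math.cos(theta) - dy * math.sin(theta)
    ry = dx * math.sin(theta) + dy * math.cos(theta)
    return (rx * vp.magnification + screen_w / 2,
            ry * vp.magnification + screen_h / 2)
```

Animating between two cells then amounts to interpolating the viewpoint fields over time, which yields the pan/zoom/rotate transitions described below.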
- An example embodiment of a viewing application, being run on a handheld device and rendering content upon a display screen, is shown in FIGS. 1-9;
- FIGS. 10-16 show a second example of the viewing application using different graphical content
- FIG. 17 is an illustration of an exemplary computer-based tool adapted according to one embodiment of the invention.
- FIG. 18 is an illustration of an exemplary method, according to one embodiment of the invention.
- FIG. 19 illustrates an example computer system adapted according to one embodiment of the present invention.
- An example embodiment of a viewing application, being run on a handheld device and rendering content upon a display screen, is shown in FIGS. 1-9.
- FIG. 1 shows handheld device 101 with display screen 102 (in this case, a touch-screen that receives user input through user touching of the screen). Control features 111 , 112 , 113 and 190 are rendered upon screen 102 and are described further below.
- the view shown in FIG. 1 includes cells 121 - 126 , which in this example are individually viewable portions of the page. Also included in the view are text boxes 131 - 138 .
- An example of a handheld device that can be used in some embodiments is the iPhone™ by Apple Inc., though other handheld devices can be used as well. However, not all embodiments are limited to handheld devices, as some embodiments use a larger display screen for rendering graphical content. Furthermore, devices without touch screens can be adapted for use in some embodiments by, e.g., mapping keys to control features and frames. Additionally, some embodiments may be adapted for rendering graphical content upon the screen of a tablet computer, such as an iPad™ from Apple, Inc.
- Typically it is difficult to come up with a pattern for reading a page such as that shown in FIG. 1 on a mobile phone.
- In this view, the text is too small to view, and the cells are shaped irregularly and are arranged in a sequential fashion that follows their irregular shapes.
- Various embodiments of the present invention are different from conventional approaches and provide a better way to render graphical content, as shown in FIGS. 2-9 .
- By selecting next 111, the user can step through the comic book. FIG. 2 shows a view after the user selects next 111.
- the transition (not shown) includes an animated zoom and pan into the upper left hand corner of the page shown in FIG. 1 .
- Cell 121 is the focus of FIG. 2 , and it is placed and zoomed according to metadata associated with the graphical content (as explained in more detail below).
- When the user selects next 111 again, the whole page pans and rotates to present the content shown in FIG. 3, where cell 122 is the focus.
- In FIG. 3, cells 121 and 123, which are not the focus of the view, are faded out slightly.
- the fading effect is implemented by rendering a semi-transparent, or semi-opaque, mask on top of the cells that are not the focus.
- Applying a mask may be preferable in some embodiments, since it may not be necessary to modify the underlying existing content when a mask can be applied on top of the content.
- some embodiments may include modifying the content itself.
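The mask-based fading described above can be sketched as a simple alpha blend composited over the non-focus cells at draw time, which leaves the underlying content untouched. The function names and the 0.6 opacity below are illustrative assumptions, not values from the patent.

```python
def fade(rgb, mask_rgb=(255, 255, 255), alpha=0.6):
    """Composite a semi-opaque mask color over one pixel of a cell.
    The underlying content is never modified; the mask sits on top."""
    return tuple(round(alpha * m + (1 - alpha) * c)
                 for c, m in zip(rgb, mask_rgb))

def render_page(cell_colors, focus_id):
    """Per cell, the color to draw: the focus cell keeps its own color,
    while every other cell is drawn under the fading mask."""
    return {cid: (color if cid == focus_id else fade(color))
            for cid, color in cell_colors.items()}
```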
- As the user continues from cell to cell, similar positioning, zooming, panning, rotating (if applicable), and fading are performed to give the user an appealing feel, one that is organic and intuitive. When the user selects next 111 again, the view shown in FIG. 4 is rendered.
- In FIG. 4, text box 135 is enlarged and moved to make it easier to read.
- FIGS. 2-9 show the views that are rendered as the user selects next 111 to view all cells on the page.
- The cell that is the focus of FIG. 7 is actually landscape (wider than it is tall), and the application assumes that the user is going to rotate the device so that the user will be able to view cell 125 in a more natural, larger way.
- As the user selects next 111 again, the next view is portrait (FIG. 8), and the application assumes that the user is going to rotate the device back to its portrait orientation.
- The idea is that the user, as part of experiencing the content, will rotate the phone back and forth as the user goes from cell to cell.
- The application shown in FIGS. 1-9 addresses a problem in current comic book readers.
- a comic is created as an artistic, creative expression often without awareness that it might be consumed on a device with a small fixed-size screen, so sometimes the content items are wider than they are tall and sometimes they are taller than they are wide.
- Various embodiments of the present invention make an effective use of the screen so that the content is visually perceivable and the text is readable.
- Referring back to FIG. 1, there is control 113.
- At any point, a user can zoom out to the entire page (i.e., go back to the view shown in FIG. 1) by selecting control 113.
- the various cells 121 - 126 are selectable items where a user can select one of the cells, and the viewing application will take the user directly into that cell and still do an appealing transition with a camera view that moves until the appropriate placement, size, rotation, and fading are rendered. So some users might actually choose to read the comic in that way, by stepping in and out of the cells.
- While not easily shown in FIGS. 1-9, various embodiments support commonly accepted touch-based gestures. Similar to current viewers, the viewing application of FIGS. 1-9 transitions back and forth between cells with a finger swipe from one side to the other. Thus, finger swipes may be used instead of, or in addition to, control features 111 and 112. In the present case, a finger swipe from one side of screen 102 to the other is effective to cause a transition to an adjacent cell even though the page is arranged substantially vertically (rather than horizontally). In this way, various embodiments can be adapted for use with any of a variety of arbitrary shapes and arrangements. Furthermore, in many embodiments, the orientation of gestures is adjusted to correspond to the orientation of the screen. Thus, a side-to-side finger swipe is still a side-to-side finger swipe whether the device is held portrait-style or landscape-style.
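The orientation-aware gesture handling might look roughly like the following, where a raw touch delta is remapped into the screen's logical axes before being classified. The `logical_swipe` helper, its orientation labels, and the left-swipe-advances convention are hypothetical.

```python
def logical_swipe(dx, dy, orientation):
    """Classify a raw touch delta as a page gesture, remapping axes so a
    side-to-side swipe stays side-to-side regardless of how the device is
    held. 'landscape' is assumed to mean rotated 90 degrees clockwise."""
    if orientation == "landscape":
        dx, dy = dy, -dx          # rotate the delta into logical screen axes
    if abs(dx) <= abs(dy):
        return None               # mostly vertical: not a cell transition
    return "next" if dx < 0 else "prev"  # swipe left advances the story
```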
- FIGS. 10-16 show a second example using different graphical content.
- the cells are rectangles and are, therefore, more regular than the cells of FIGS. 1-9 .
- FIG. 10 shows the whole page, where a user can select a particular cell and where the user can return by selecting control 113 .
- FIGS. 11-16 show a sequential transition among the cells, including at least one portrait-landscape-portrait transition, and extensive use of zooming.
- Automatic portrait-landscape-portrait transitioning is unconventional, but provides a good use of screen space. However, such automatic transitioning may not be preferred by all users.
- Some embodiments include a control, such as control 190 of FIG. 1 , allowing a user to disable automatic portrait-landscape-portrait transitioning. In such case, the application may split a landscape cell into multiple portrait views, or split a portrait cell into multiple landscape views.
- some embodiments include a computer-based tool that allows a designer, developer, author, artist, or other user to add metadata to existing media content to prepare the media content for display according to the concepts discussed above.
- There are a variety of different types of metadata that can be added to existing content, and the present examples list a few.
- the original image has a sequence of views—cells in the case of comic books—that the end-user will perceive, and there is metadata associated with each of those viewpoints.
- One metadata item is referred to as a viewpoint, which includes a center point of that view, a magnification, and a rotation.
- Another metadata item includes a sequence of polygons to adjust opacity of items that surround a given cell, referred to as polygonal overlays, and they form the basis of the masks.
- Yet another example of metadata includes an indication of the visually most salient point in that cell.
- the visually most salient point is determined by a developer or other user, who uses intuition or other technique to decide which point is most likely the most salient to end-users.
- the viewing application supports random access into any cell by tapping directly onto a portion of the cell to zoom right into it. So in order to support that feature, viewing applications receive the indications of salient points because the cells can actually be overlapping. In order to know which cell an end-user selects, it is assumed that users are most likely to tap on the visually most salient characteristic of one of those cells (e.g., the main figure's face). When a user selects a point on the screen, the viewing application looks for the closest salient point and goes to the cell associated with that closest salient point.
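The nearest-salient-point lookup described above amounts to a nearest-neighbor query over the per-cell salient points. A minimal sketch, where the function and argument names are assumptions:

```python
import math

def cell_for_tap(tap, salient_points):
    """salient_points maps cell id -> (x, y) of that cell's most salient
    feature, in screen coordinates. Because cells can overlap, the viewer
    skips point-in-cell tests and instead jumps to the cell whose salient
    point lies closest to the tapped location."""
    return min(salient_points,
               key=lambda cid: math.dist(tap, salient_points[cid]))
```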
- The computer-based tool of various embodiments allows a user to designate salient points in the various portions of the page.
- FIG. 17 is an illustration of exemplary computer-based tool 1700 , adapted according to one embodiment of the invention.
- computer-based tool 1700 includes interface 1701 , where for each page a user defines at least the viewpoint, the polygonal overlay, and the salient points.
- The interface includes rectangle 1702, which is sized to correspond to the screen of a given handheld device (e.g., an iPhone™). The user drags rectangle 1702 around on the screen, rotates it, and resizes it to define a given viewpoint.
- Interface 1701 also provides polygon tool 1703 to draw polygons over the portions of the view that the user wants to de-emphasize, thereby defining the polygonal overlay.
- Interface 1701 also supports a user's selection (by mouse, touchscreen or otherwise) in order to specify the salient portion of the screen that will be used for navigation. The user defines the metadata for each of the views or cells manually in this way.
- interface 1701 allows the user to select the salient points for the viewpoints (or for a single viewpoint if the user prefers to go viewpoint-by-viewpoint).
- Tool 1700 includes computer vision to analyze the image, looking for boxes, lines, and likely interesting areas, and it makes a best estimate as to the viewpoint metadata item and the polygonal overlay metadata item.
- In other words, Tool 1700 estimates the best view and the polygons that should be generated in order to hide the uninteresting parts.
- The image processing has a particularly good chance of working because many kinds of content, such as comics and other visually oriented books, typically have white, solid, or simple gradient backgrounds, with cell boundaries that follow the black lines defining each area. It is possible to devise computer vision or image processing algorithms that identify those lines with high accuracy. While the examples herein mention comic book material for the underlying content, it is noted that any of a variety of graphical content can be modified by, or viewed by, various embodiments of the invention.
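One very simple instance of such line detection, sketched under the assumption of dark borders on a light background: scan each row of a grayscale image and treat rows that are almost entirely dark as candidate cell-border lines. Real tooling would use more robust methods (e.g., Hough transforms); the function name and thresholds here are illustrative.

```python
def find_horizontal_lines(image, dark_threshold=128, coverage=0.9):
    """image: 2D list of grayscale values (0 = black, 255 = white).
    Rows that are almost entirely dark are reported as candidate cell
    borders -- workable when pages have solid black lines on light
    backgrounds, as the text above observes."""
    lines = []
    for y, row in enumerate(image):
        dark = sum(1 for v in row if v < dark_threshold)
        if dark >= coverage * len(row):
            lines.append(y)
    return lines
```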
- When the first estimate is acceptable, the user indicates acceptance and moves the process on to the next task (e.g., moving on to the next cell). If the user does not think the computer's estimate is acceptable, the user can manually manipulate the view and/or the polygons. In such a case, the computer has already identified the lines, the polygons, and the underlying image, giving the human user more to work with in defining the viewpoint and/or the polygonal overlay.
- some embodiments include a snap-to feature to automatically snap views on polygons to the next best item, so when a user drags a view, tool 1700 snaps it to the next set of lines.
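The snap-to behavior might reduce, for each edge of the dragged view rectangle, to snapping a coordinate onto the nearest detected line within some tolerance. A sketch with hypothetical names and an assumed 8-pixel tolerance:

```python
def snap(coord, detected_lines, tolerance=8):
    """Snap one edge coordinate of a dragged view rectangle to the
    nearest detected border line, if one lies within `tolerance` pixels;
    otherwise leave the coordinate where the user dropped it."""
    if not detected_lines:
        return coord
    nearest = min(detected_lines, key=lambda line: abs(line - coord))
    return nearest if abs(nearest - coord) <= tolerance else coord
```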
- the computer might get it all right, and the user might just verify the computer's estimates.
- the automatic embodiment has the potential to be quite efficient. For more complex and difficult source images, the process might simply be manual in the worst-case scenario.
- FIG. 18 is an illustration of exemplary method 1800 according to one embodiment of the invention.
- Method 1800 can be performed, for example, by a computer-based tool to add metadata to graphical content.
- the graphical content is divided into a plurality of portions (e.g., cells).
- the user indicates the salient points of the cells, and the tool makes a best estimate of other types of metadata items, such as polygonal overlays, rotation, magnification, and position (at blocks 1801 - 1803 ).
- the user can then accept the computer's metadata, reject some, or reject all of the metadata at block 1804 .
- For metadata that is rejected the user is given an opportunity to modify the items manually at block 1805 .
- the tool associates metadata with each of the cells and makes the content available to end-users (e.g., by publishing to an Internet resource) at blocks 1806 - 1807 .
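The flow of blocks 1801-1807 could be sketched as a loop over cells in which the tool estimates metadata, the user accepts or replaces each estimate, and the results are associated with their cells. The `annotate_page` helper and its callback signatures are assumptions, not the patent's API.

```python
def annotate_page(cell_ids, estimate, review):
    """For each cell: auto-estimate metadata (blocks 1801-1803), let the
    user accept or replace the estimate (blocks 1804-1805), and associate
    the result with the cell (block 1806). The returned mapping is what
    would then be published to end-users (block 1807)."""
    metadata = {}
    for cell_id in cell_ids:
        guess = estimate(cell_id)
        metadata[cell_id] = review(cell_id, guess)
    return metadata
```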
- Metadata may be indicated manually and/or generated automatically by the computer-based tool.
- An additional type of metadata can refer to pixels of a cell that correspond to text, whether the text is by itself, in a balloon, or included in another arbitrary shape. Such data can be used to “pop-out” the text, moving it and/or making it larger to increase readability. For instance, in some situations, text (at least in its original form) may be too small to read on a handheld device screen.
- Various embodiments identify where the text is located and magnify the text. The degree of magnification can be determined from a combination of design and end-user preference. In one example, an end-user might prefer fourteen-point font, so the text is marked to be magnified to fourteen-point font by the viewing application. Additionally or alternatively, there may be a default where the text is marked to be magnified by ten, twenty, or thirty percent to make it larger and more visually salient.
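The magnification rule just described (honor an end-user's preferred point size when one is set, otherwise apply a default percentage bump) might be computed as follows; the function name and default values are illustrative assumptions.

```python
def text_scale(original_pt, preferred_pt=None, default_pct=20):
    """Magnification factor for popped-out text: honor the end-user's
    preferred font size when one is set, else apply a percentage bump
    (the 14-point preference and 10-30% defaults come from the text)."""
    if preferred_pt is not None:
        return preferred_pt / original_pt
    return 1 + default_pct / 100
```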
- Another option, instead of popping the text out automatically in the viewing application, is to allow the end-user to select the text to pop it out.
- Some embodiments may add other visual schemes to enhance aesthetic or artistic effect by, e.g., enlarging text with a bounce or other animated sequence to make the experience more like video.
- any of a variety of visual effects can be added to the underlying content through use of metadata.
- In some embodiments, the images are preprocessed to generate views wherein the text itself is the focus (e.g., as in FIG. 4).
- In other embodiments, the text is magnified within a graphic cell. In some instances where there are multiple text boxes, magnifying them all in place would cause them to overlap.
- One solution is to add metadata that specifies parts of the image that a given text box should not cover.
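One way such keep-out metadata could be consumed: nudge a magnified text box until it no longer intersects any region it must not cover. The greedy downward search below is an assumption; the patent leaves the actual layout strategy open.

```python
def overlaps(a, b):
    """Axis-aligned rectangles given as (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_text_box(box, keep_out, step=5, max_tries=100):
    """Nudge a magnified text box downward until it clears every keep-out
    region listed in the metadata. Deliberately simple and one-directional;
    a production layout would search more placements."""
    x, y, w, h = box
    for _ in range(max_tries):
        if not any(overlaps((x, y, w, h), r) for r in keep_out):
            break
        y += step
    return (x, y, w, h)
```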
- Computer-readable media can include any medium that can store information.
- FIG. 19 illustrates an example computer system 1900 adapted according to one embodiment of the present invention. That is, computer system 1900 comprises an example system on which embodiments of the present invention may be implemented (such as a viewing application or a tool to modify graphical content).
- Central processing unit (CPU) 1901 is coupled to system bus 1902 .
- CPU 1901 may be any general-purpose or special-purpose CPU. However, the present invention is not restricted by the architecture of CPU 1901 as long as CPU 1901 supports the inventive operations as described herein.
- CPU 1901 may execute the various logical instructions according to embodiments of the present invention. For example, one or more CPUs, such as CPU 1901 , may execute machine-level instructions according to the exemplary operational flows described above regarding the respective operation of the viewer application and/or the content-modifying tool.
- Computer system 1900 also preferably includes random access memory (RAM) 1903, which may be SRAM, DRAM, SDRAM, or the like.
- Computer system 1900 preferably includes read-only memory (ROM) 1904 which may be PROM, EPROM, EEPROM, or the like. RAM 1903 and ROM 1904 hold user and system data and programs, as is well known in the art.
- Computer system 1900 also preferably includes input/output (I/O) adapter 1905 , communications adapter 1911 , user interface adapter 1908 , and display adapter 1909 .
- I/O adapter 1905 , user interface adapter 1908 , and/or communications adapter 1911 may, in certain embodiments, enable a user to interact with computer system 1900 in order to input information, such as indicating salient features (e.g., with respect to a tool to modify the graphical content) or select a cell to view (e.g., with respect to a viewing application).
- I/O adapter 1905 preferably connects storage device(s) 1906, such as one or more of a hard drive, compact disc (CD) drive, floppy disk drive, tape drive, etc., to computer system 1900.
- the storage devices may be utilized when RAM 1903 is insufficient for the memory requirements associated with storing media data.
- Communications adapter 1911 is preferably adapted to couple computer system 1900 to network 1912 (e.g., the Internet, a LAN, a cellular network, etc.).
- User interface adapter 1908 couples user input devices, such as keyboard 1913, pointing device 1907, microphone 1914, and a touch screen (such as 102 of FIG. 1), and/or output devices, such as speaker(s) 1915, to computer system 1900.
- Display adapter 1909 is driven by CPU 1901 to control the display on display device 1910 to, for example, display the media as it is played.
- While FIG. 19 shows a general-purpose computer, devices that run a viewing application may be any kind of processor-based device that includes a small screen, such as a cell phone, a Personal Digital Assistant (PDA), and/or the like.
- devices that run metadata tool applications may be any kind of processor-based device, such as a personal computer, a server-type computer, a handheld device, and the like.
- embodiments of the present invention may be implemented on application specific integrated circuits (ASICs) or very large scale integrated (VLSI) circuits.
Description
- This application claims the benefit of U.S. Provisional Application No. 61/225,366, filed Jul. 14, 2009 and entitled, “SYSTEMS AND METHODS PROVIDING TECHNIQUES TO MODIFY CONTENT AND VIEW CONTENT ON MOBILE DEVICES,” the disclosure of which is incorporated herein by reference.
- Comic books, graphic novels, and other graphic media are quite popular among some readers. Some graphic media include sequential, rectangular cells where the story is told as the sequence of cells progresses from right to left. More modern forms of the media often include cells that have irregularly shaped boundaries and/or arrange the cells in irregular patterns upon the page that do not progress from right to left.
- Recently, there have been attempts to adapt comic book reader interfaces to handheld devices, such as the iPhone™, available from Apple, Inc. Currently, most comic book readers on the iPhone™ do the same thing. They have a series of sequential images that are cropped from a comic book, and the user reads the comics in the same way that a user browses photos in the photo library. In other words, a user drags his or her finger across the display screen to go left or right to an adjacent square located to the left or right. However, such a technique is not suitable for a page that has irregularly-shaped cells and/or an irregular arrangement of cells. Furthermore, the cropping and rearranging of cells often destroys the look and feel that was intended by the author. A more intuitive and less destructive comic book reading interface is, therefore, desirable.
- The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims. The novel features which are believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.
- For a more complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
- An example embodiment of a viewing application, being run on a handheld device and rendering content upon a display screen, is shown in
FIGS. 1-9 ; -
FIGS. 10-16 show a second example of the viewing application using different graphical content; -
FIG. 17 is an illustration of exemplary computer-based tool adapted according to one embodiment of the invention; -
FIG. 18 is an illustration of an exemplary method, according to one embodiment of the invention; and -
FIG. 19 illustrates an example computer system adapted according to one embodiment of the present invention. - An example embodiment of a viewing application, being run on a handheld device and rendering content upon a display screen, is shown in
FIGS. 1-9 .FIG. 1 showshandheld device 101 with display screen 102 (in this case, a touch-screen that receives user input through user touching of the screen). Control features 111, 112, 113 and 190 are rendered uponscreen 102 and are described further below. The view shown inFIG. 1 includes cells 121-126, which in this example are individually viewable portions of the page. Also included in the view are text boxes 131-138. - An example of a handheld device that can be used in some embodiments is the iPhone™ by Apple Inc., though other handheld devices can be used as well. However, not all embodiments are limited to handheld devices, as some embodiments use a larger display screen for rendering graphical content. Furthermore, devices without touch screens can be adapted for use in some embodiments, by, e.g., mapping keys to control features and frames. Additionally, some embodiments may be adapted for rendering graphical content upon a screen of a tablet computer, such as an iPad™ from Apple, Inc.
- Typically it is difficult to come up with a pattern for reading a page such as that shown in
FIG. 1 on a mobile phone. In this view, the text is too small to view, and the cells are shaped irregularly and are arranged in a sequential fashion that follows their irregular shapes. Various embodiments of the present invention are different from conventional approaches and provide a better way to render graphical content, as shown inFIGS. 2-9 . - By selecting next 111, the user can step through the comic book.
FIG. 2 shows a view after the user selects next 111. The transition (not shown) includes an animated zoom and pan into the upper left hand corner of the page shown inFIG. 1 .Cell 121 is the focus ofFIG. 2 , and it is placed and zoomed according to metadata associated with the graphical content (as explained in more detail below). When the user clicks next 111 again, the whole page pans and rotates to present the content shown inFIG. 3 , wherecell 122 is the focus. InFIG. 3 ,cells FIG. 3 , are faded out slightly. - In one example, the fading effect is implemented by rendering a semi-transparent, or semi-opaque, mask on top of the cells that are not the focus. Applying a mask may be preferable in some embodiments, since it may not be necessary to modify the underlying existing content when a mask can be applied on top of the content. However, some embodiments may include modifying the content itself.
- As the user continues to go from cell-to-cell, similar positioning, zooming, panning, rotating (if applicable) and fading are performed to give the user an appealing feel—one that is organic and intuitive. When the user selects next 111 again, the view shown in
FIG. 4 is rendered. InFIG. 4 , thetext box 135 is enlarged and moved to make it easier to read.FIGS. 2-9 show the views that are rendered as the user selects next 111 to view all cells on the page. The cell that is the focus ofFIG. 7 is actually a landscape screen—wider than it is tall—and the application assumes that the user is going to rotate the device so that the user will be able to viewcell 125 in a more natural, larger way. As the user selects next 111 again, the next view is portrait (FIG. 8 ), and the application assumes that the user is going to rotate the device back to its portrait view. As the user walks through a page like this, the idea is that the user, as part of experiencing the content, will rotate the phone back and forth as the user goes from cell to cell. - The application shown in
FIGS. 1-9 addresses a problem in current comic book readers. A comic is created as an artistic, creative expression, often without awareness that it might be consumed on a device with a small fixed-size screen, so sometimes the content items are wider than they are tall and sometimes they are taller than they are wide. Various embodiments of the present invention make effective use of the screen so that the content is visually perceivable and the text is readable. - Referring back to
FIG. 1, there is control 113. At any point, a user can zoom out to the entire page (i.e., go back to the view shown in FIG. 1) by selecting control 113. Furthermore, the various cells 121-126 are selectable items: a user can select one of the cells, and the viewing application will take the user directly into that cell, still performing an appealing transition with a camera view that moves until the appropriate placement, size, rotation, and fading are rendered. Some users might actually choose to read the comic in that way, by stepping in and out of the cells. - While not easily shown in
FIGS. 1-9, various embodiments support commonly accepted touch-based gestures. Similar to current viewers, the viewing application of FIGS. 1-9 transitions back and forth between cells with a finger swipe from one side to the other. Thus, finger swipes may be used instead of, or in addition to, control features 111 and 112. In the present case, a finger swipe from one side of screen 102 to the other is effective to cause a transition to an adjacent cell even though the page is arranged substantially vertically (rather than horizontally). In this way, various embodiments can be adapted for use with any of a variety of arbitrary shapes and arrangements. Furthermore, in many embodiments, the orientation of gestures is adjusted to correspond to the orientation of the screen. Thus, a side-to-side finger swipe is still a side-to-side finger swipe whether the device is arranged portrait-style or landscape-style. -
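The orientation-adjusted gestures can be sketched as a small mapping from raw screen deltas into content space; the function below is a hypothetical illustration, not code from the disclosure.

```python
def logical_swipe(dx, dy, device_rotated=False):
    """Classify a finger swipe in content space.

    dx, dy are raw screen-space deltas. When the device has been rotated to
    landscape, the gesture is rotated back into content space, so a physical
    side-to-side swipe is still reported as 'horizontal'.
    """
    if device_rotated:
        dx, dy = dy, -dx  # undo the 90-degree device rotation
    return "horizontal" if abs(dx) >= abs(dy) else "vertical"
```

A viewer could then bind "horizontal" swipes to the same next/previous actions as controls 111 and 112, regardless of device orientation.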
FIGS. 10-16 show a second example using different graphical content. In FIGS. 10-16, the cells are rectangles and are, therefore, more regular than the cells of FIGS. 1-9. FIG. 10 shows the whole page, where a user can select a particular cell and to which the user can return by selecting control 113. FIGS. 11-16 show a sequential transition among the cells, including at least one portrait-landscape-portrait transition, and extensive use of zooming. - Automatic portrait-landscape-portrait transitioning is unconventional, but provides a good use of screen space. However, such automatic transitioning may not be preferred by all users. Some embodiments include a control, such as
control 190 of FIG. 1, allowing a user to disable automatic portrait-landscape-portrait transitioning. In such a case, the application may split a landscape cell into multiple portrait views, or split a portrait cell into multiple landscape views. - In another aspect, some embodiments include a computer-based tool that allows a designer, developer, author, artist, or other user to add metadata to existing media content to prepare the media content for display according to the concepts discussed above. There are a variety of different types of metadata that can be added to existing content, and the present examples list a few. The original image has a sequence of views—cells in the case of comic books—that the end-user will perceive, and there is metadata associated with each of those views. One metadata item is referred to as a viewpoint, which includes a center point of that view, a magnification, and a rotation. Another metadata item includes a sequence of polygons used to adjust the opacity of items that surround a given cell; these are referred to as polygonal overlays, and they form the basis of the masks.
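One plausible shape for this per-cell metadata is sketched below. The class and field names are assumptions for illustration: the text specifies the items (center point, magnification, rotation, polygonal overlays) but not a concrete encoding.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Viewpoint:
    center: Point          # center of the view in page coordinates
    magnification: float   # zoom factor applied when the cell is the focus
    rotation_deg: float    # rotation applied so the cell reads upright

@dataclass
class CellMetadata:
    viewpoint: Viewpoint
    # Polygons drawn over surrounding content to reduce its opacity (the masks).
    polygonal_overlays: List[List[Point]] = field(default_factory=list)
```

A page would then carry an ordered list of `CellMetadata` records, one per view, in reading order.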
- Yet another example of metadata includes an indication of the visually most salient point in a cell. Often, the visually most salient point is determined by a developer or other user, who uses intuition or another technique to decide which point is most likely to be salient to end-users. The viewing application supports random access into any cell: tapping directly onto a portion of a cell zooms right into it. In order to support that feature, viewing applications receive the indications of salient points because the cells can actually overlap. To determine which cell an end-user selects, it is assumed that users are most likely to tap on the visually most salient characteristic of one of those cells (e.g., the main figure's face). When a user selects a point on the screen, the viewing application looks for the closest salient point and goes to the cell associated with that closest salient point. The computer-based tool of various embodiments allows a user to designate salient points in the various portions of the page.
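The closest-salient-point lookup described above reduces to a nearest-neighbor search over the per-cell salient points. The sketch below assumes cells are keyed by an identifier and salient points are (x, y) pairs; both are illustrative choices.

```python
import math

def cell_for_tap(tap, salient_points):
    """Resolve a tap to a cell id via the nearest salient point.

    salient_points: mapping of cell id -> (x, y). Because only the salient
    points are compared, this works even when cell outlines overlap.
    """
    return min(salient_points, key=lambda cid: math.dist(tap, salient_points[cid]))
```

For the handful of cells on a comic page, a linear scan like this is sufficient; a spatial index would only matter at much larger scales.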
-
FIG. 17 is an illustration of exemplary computer-based tool 1700, adapted according to one embodiment of the invention. In the example embodiment, computer-based tool 1700 includes interface 1701, where for each page a user defines at least the viewpoint, the polygonal overlay, and the salient points. In this example, the interface includes rectangle 1702, which is sized to correspond to the screen of a given handheld device (e.g., an iPhone™). The user drags rectangle 1702 around on the screen, rotates it, and resizes it to define a given viewpoint. Interface 1701 also provides polygon tool 1703 to draw polygons that cover the portions of the view the user wants to de-emphasize, thereby defining the polygonal overlay. Interface 1701 also supports a user's selection (by mouse, touchscreen, or otherwise) to specify the salient portion of the screen that will be used for navigation. In this way, the user defines the metadata for each of the views or cells manually. - One issue with the embodiment described above is that it can be somewhat time consuming, because these views can be arbitrarily shaped and arranged with different angles and different sizes. For instance, it might take a relatively long time for a human user to draw a polygonal overlay that covers one of these strangely shaped line or triangular views that can be part of the source image. Another embodiment uses automated image processing techniques to increase efficiency. In such an example,
interface 1701 allows the user to select the salient points for the viewpoints (or for a single viewpoint, if the user prefers to go viewpoint-by-viewpoint). Tool 1700 includes computer vision to analyze the image, looking for the boxes, lines, and likely interesting areas, and it makes a best estimation as to the viewpoint metadata item and the polygonal overlay metadata item. Tool 1700 makes a best estimation as to the best view and as to the polygons that should be generated in order to hide the uninteresting parts. In many cases, the image processing has a particularly good chance of working because, for many kinds of content, such as comics and other visually-oriented books, the content typically has white, solid, or simple gradient backgrounds, with cell boundaries that follow the black lines that define each area. Computer vision or image processing algorithms can identify those lines with high accuracy. While the examples herein mention comic book material for the underlying content, it is noted that any of a variety of graphical content can be modified by, or viewed by, various embodiments of the invention. - When the first estimate is acceptable, the user indicates acceptance and moves the process on to the next task (e.g., moving on to the next cell). If the user does not think that the computer's estimate is acceptable, then the user can manually manipulate the view and/or the polygons. In such a case the computer has already identified the lines and the polygons in the underlying image, giving the human user more to work with in defining the viewpoint and/or the polygonal overlay. In fact, some embodiments include a snap-to feature to automatically snap views or polygons to the next best item, so when a user drags a view,
tool 1700 snaps it to the next set of lines. In some instances, the computer might get it all right, and the user might just verify the computer's estimates. Thus, for simpler source images, the automatic embodiment has the potential to be quite efficient. For more complex and difficult source images, the process might, in the worst case, simply be manual. -
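No specific vision algorithm is disclosed, but for pages with solid background gutters a simple heuristic illustrates the idea: split the page on all-background rows, then on all-background columns within each band, and snap dragged edges to the detected boundaries. Everything below is an illustrative sketch on a binary page grid, not the tool's actual method.

```python
def split_on_gutters(is_background):
    """Return (start, end) ranges of content bands between background gutters."""
    bands, start = [], None
    for i, white in enumerate(is_background):
        if not white and start is None:
            start = i                      # band begins at first non-gutter line
        elif white and start is not None:
            bands.append((start, i))       # band ends when a gutter resumes
            start = None
    if start is not None:
        bands.append((start, len(is_background)))
    return bands

def detect_cells(grid, background=0):
    """Split horizontally, then vertically within each band, yielding
    axis-aligned cell boxes as (row0, row1, col0, col1)."""
    boxes = []
    row_white = [all(v == background for v in row) for row in grid]
    for r0, r1 in split_on_gutters(row_white):
        cols = list(zip(*grid[r0:r1]))     # transpose the band to scan columns
        col_white = [all(v == background for v in col) for col in cols]
        for c0, c1 in split_on_gutters(col_white):
            boxes.append((r0, r1, c0, c1))
    return boxes

def snap(value, lines, threshold=8):
    """Snap a dragged coordinate to the nearest detected line, if close enough."""
    nearest = min(lines, key=lambda v: abs(v - value), default=None)
    if nearest is not None and abs(nearest - value) <= threshold:
        return nearest
    return value
```

Irregular, overlapping, or slanted panels defeat this gutter heuristic, which is exactly why the text keeps the human in the loop to accept or correct each estimate.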
FIG. 18 is an illustration of exemplary method 1800 according to one embodiment of the invention. Method 1800 can be performed, for example, by a computer-based tool to add metadata to graphical content. The graphical content is divided into a plurality of portions (e.g., cells). As explained above, the user indicates the salient points of the cells, and the tool makes a best estimate of other types of metadata items, such as polygonal overlays, rotation, magnification, and position (at blocks 1801-1803). The user can then accept the computer's metadata, reject some of it, or reject all of it at block 1804. For metadata that is rejected, the user is given an opportunity to modify the items manually at block 1805. The tool associates metadata with each of the cells and makes the content available to end-users (e.g., by publishing to an Internet resource) at blocks 1806-1807. - Other types of metadata may be indicated manually and/or generated automatically by the computer-based tool. An additional type of metadata can identify pixels of a cell that correspond to text, whether the text is by itself, in a balloon, or included in another arbitrary shape. Such data can be used to "pop out" the text, moving it and/or making it larger to increase readability. For instance, in some situations, text (at least in its original form) may be too small to read on a handheld device screen. Various embodiments identify where the text is located and magnify the text. The degree of magnification can be determined from a combination of design and end-user preference. In one example, an end-user might prefer fourteen-point font, so the text is marked to be magnified to fourteen-point font by the viewing application. Additionally or alternatively, there may be a default whereby the text is marked to be magnified ten, twenty, or thirty percent to make it larger and visually more salient.
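The magnification rule just described, matching a preferred point size when one is set and otherwise applying a default percentage boost, can be stated as a small function. The default values are illustrative assumptions.

```python
def text_scale(current_pt, preferred_pt=None, default_boost=0.20):
    """Scale factor for popped-out text.

    If the reader set a preferred point size (e.g., 14 pt), scale the text to
    match it; otherwise fall back to a default percentage boost.
    """
    if preferred_pt is not None:
        return preferred_pt / current_pt
    return 1.0 + default_boost
```

For example, 7 pt source text with a 14 pt preference yields a 2x scale, while text with no stated preference is simply boosted by the default 20%.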
- One variation on popping out the text is, instead of popping the text out automatically at the viewing application, allowing the end-user to select the text to pop it out. Some embodiments may add other visual schemes to enhance aesthetic or artistic effect by, e.g., enlarging text with a bounce or other animated sequence to make the experience more like video. In fact, any of a variety of visual effects can be added to the underlying content through use of metadata.
- In one example, the images are preprocessed to generate views wherein the text, itself is the focus (e.g., as in
FIG. 4). In other examples, the text is magnified within a graphic cell. In some instances where there are multiple text boxes, magnifying them all in place would cause them to overlap. One solution is to add metadata that specifies parts of the image that a given text box should not cover. - When implemented via computer-executable instructions, various elements of embodiments of the present invention are in essence the software code defining the operations of such various elements. The executable instructions or software code may be obtained from a readable medium (e.g., hard drive media, optical media, RAM, EPROM, EEPROM, tape media, cartridge media, flash memory, ROM, a memory stick, and/or the like). In fact, readable media can include any medium that can store information.
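The overlap-avoidance metadata described above can be sketched as a minimal placement routine for axis-aligned boxes: each magnified text box is nudged downward until it clears both the boxes already placed and any keep-out regions supplied by the metadata. The greedy downward nudge is an assumption for illustration, not the disclosed strategy.

```python
def overlaps(a, b):
    """True if axis-aligned boxes (x0, y0, x1, y1) intersect."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def place_boxes(boxes, keep_out=(), step=4):
    """Greedy layout: shift each box down until it clears all prior boxes
    and all keep-out regions (image areas a text box must not cover)."""
    placed = []
    for x0, y0, x1, y1 in boxes:
        while any(overlaps((x0, y0, x1, y1), other)
                  for other in list(placed) + list(keep_out)):
            y0 += step
            y1 += step
        placed.append((x0, y0, x1, y1))
    return placed
```

A real tool might prefer smarter moves (toward the nearest free space, or shrinking the boost), but the keep-out check is the part the metadata directly supports.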
-
FIG. 19 illustrates an example computer system 1900 adapted according to one embodiment of the present invention. That is, computer system 1900 comprises an example system on which embodiments of the present invention may be implemented (such as a viewing application or a tool to modify graphical content). Central processing unit (CPU) 1901 is coupled to system bus 1902. CPU 1901 may be any general-purpose or specialized-purpose CPU. However, the present invention is not restricted by the architecture of CPU 1901 as long as CPU 1901 supports the inventive operations as described herein. CPU 1901 may execute the various logical instructions according to embodiments of the present invention. For example, one or more CPUs, such as CPU 1901, may execute machine-level instructions according to the exemplary operational flows described above regarding the respective operation of the viewer application and/or the content-modifying tool. -
Computer system 1900 also preferably includes random access memory (RAM) 1903, which may be SRAM, DRAM, SDRAM, or the like. In this example, computer system 1900 uses RAM 1903 to buffer media data. Computer system 1900 preferably includes read-only memory (ROM) 1904, which may be PROM, EPROM, EEPROM, or the like. RAM 1903 and ROM 1904 hold user and system data and programs, as is well known in the art. -
Computer system 1900 also preferably includes input/output (I/O) adapter 1905, communications adapter 1911, user interface adapter 1908, and display adapter 1909. I/O adapter 1905, user interface adapter 1908, and/or communications adapter 1911 may, in certain embodiments, enable a user to interact with computer system 1900 in order to input information, such as indicating salient features (e.g., with respect to a tool to modify the graphical content) or selecting a cell to view (e.g., with respect to a viewing application). - I/O adapter 1905 preferably connects storage device(s) 1906, such as one or more of a hard drive, compact disc (CD) drive, floppy disk drive, tape drive, etc., to computer system 1900. The storage devices may be utilized when RAM 1903 is insufficient for the memory requirements associated with storing media data. Communications adapter 1911 is preferably adapted to couple computer system 1900 to network 1912 (e.g., the Internet, a LAN, a cellular network, etc.). User interface adapter 1908 couples user input devices, such as keyboard 1913, pointing device 1907, microphone 1914, and a touch screen (such as 102 of FIG. 1), and/or output devices, such as speaker(s) 1915, to computer system 1900. Display adapter 1909 is driven by CPU 1901 to control the display on display device 1910 to, for example, display the media as it is played. - While
FIG. 19 shows a general-purpose computer, it should be noted that the exact configuration of a portion of a system according to various embodiments may be slightly different. For example, devices that run a viewing application according to one or more embodiments may be any kind of processor-based device that includes a small screen, such as a cell phone, a Personal Digital Assistant (PDA), and/or the like. Additionally, devices that run metadata tool applications according to one or more embodiments may be any kind of processor-based device, such as a personal computer, a server-type computer, a handheld device, and the like. Moreover, embodiments of the present invention may be implemented on application specific integrated circuits (ASICs) or very large scale integrated (VLSI) circuits. In fact, persons of ordinary skill in the art may utilize any number of suitable structures capable of executing logical operations according to the embodiments of the present invention. - Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. 
Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
Claims (12)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/723,040 US11061524B2 (en) | 2009-07-14 | 2017-10-02 | Techniques to modify content and view content on a mobile device |
US17/340,332 US11567624B2 (en) | 2009-07-14 | 2021-06-07 | Techniques to modify content and view content on mobile devices |
US18/147,611 US11928305B2 (en) | 2009-07-14 | 2022-12-28 | Techniques to modify content and view content on mobile devices |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US22536609P | 2009-07-14 | 2009-07-14 | |
US12/836,424 US9778810B2 (en) | 2009-07-14 | 2010-07-14 | Techniques to modify content and view content on mobile devices |
US15/723,040 US11061524B2 (en) | 2009-07-14 | 2017-10-02 | Techniques to modify content and view content on a mobile device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/836,424 Continuation US9778810B2 (en) | 2009-07-14 | 2010-07-14 | Techniques to modify content and view content on mobile devices |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/340,332 Continuation US11567624B2 (en) | 2009-07-14 | 2021-06-07 | Techniques to modify content and view content on mobile devices |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180024703A1 true US20180024703A1 (en) | 2018-01-25 |
US11061524B2 US11061524B2 (en) | 2021-07-13 |
Family
ID=46637881
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/836,424 Active 2032-07-21 US9778810B2 (en) | 2009-07-14 | 2010-07-14 | Techniques to modify content and view content on mobile devices |
US15/723,040 Active 2031-05-09 US11061524B2 (en) | 2009-07-14 | 2017-10-02 | Techniques to modify content and view content on a mobile device |
US17/340,332 Active US11567624B2 (en) | 2009-07-14 | 2021-06-07 | Techniques to modify content and view content on mobile devices |
US18/147,611 Active US11928305B2 (en) | 2009-07-14 | 2022-12-28 | Techniques to modify content and view content on mobile devices |
US18/587,124 Pending US20240281104A1 (en) | 2009-07-14 | 2024-02-26 | Techniques to Modify Content and View Content on Mobile Devices |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/836,424 Active 2032-07-21 US9778810B2 (en) | 2009-07-14 | 2010-07-14 | Techniques to modify content and view content on mobile devices |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/340,332 Active US11567624B2 (en) | 2009-07-14 | 2021-06-07 | Techniques to modify content and view content on mobile devices |
US18/147,611 Active US11928305B2 (en) | 2009-07-14 | 2022-12-28 | Techniques to modify content and view content on mobile devices |
US18/587,124 Pending US20240281104A1 (en) | 2009-07-14 | 2024-02-26 | Techniques to Modify Content and View Content on Mobile Devices |
Country Status (1)
Country | Link |
---|---|
US (5) | US9778810B2 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9886936B2 (en) | 2009-05-14 | 2018-02-06 | Amazon Technologies, Inc. | Presenting panels and sub-panels of a document |
US9922354B2 (en) | 2010-04-02 | 2018-03-20 | Apple Inc. | In application purchasing |
US20110246618A1 (en) | 2010-04-02 | 2011-10-06 | Apple Inc. | Caching multiple views corresponding to multiple aspect ratios |
US8615432B2 (en) | 2010-04-02 | 2013-12-24 | Apple Inc. | Background process for providing targeted content within a third-party application |
US9110749B2 (en) | 2010-06-01 | 2015-08-18 | Apple Inc. | Digital content bundle |
JP2014164630A (en) * | 2013-02-27 | 2014-09-08 | Sony Corp | Information processing apparatus, information processing method, and program |
USD739438S1 (en) * | 2013-11-21 | 2015-09-22 | Microsoft Corporation | Display screen with icon |
USD761845S1 (en) * | 2014-11-26 | 2016-07-19 | Amazon Technologies, Inc. | Display screen or portion thereof with an animated graphical user interface |
KR101780792B1 (en) | 2015-03-20 | 2017-10-10 | 네이버 주식회사 | Apparatus, method, and computer program for creating catoon data, and apparatus for viewing catoon data |
CN111240793B (en) * | 2020-02-13 | 2024-01-09 | 抖音视界有限公司 | Method, device, electronic equipment and computer readable medium for cell prerendering |
US11409411B1 (en) | 2021-03-12 | 2022-08-09 | Topgolf International, Inc. | Single finger user interface camera control |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5949428A (en) * | 1995-08-04 | 1999-09-07 | Microsoft Corporation | Method and apparatus for resolving pixel data in a graphics rendering system |
GB2378340A (en) * | 2001-07-31 | 2003-02-05 | Hewlett Packard Co | Generation of an image bounded by a frame or of overlapping images |
GB0602710D0 (en) * | 2006-02-10 | 2006-03-22 | Picsel Res Ltd | Processing Comic Art |
US7620905B2 (en) * | 2006-04-14 | 2009-11-17 | International Business Machines Corporation | System and method of windows management |
US8087044B2 (en) * | 2006-09-18 | 2011-12-27 | Rgb Networks, Inc. | Methods, apparatus, and systems for managing the insertion of overlay content into a video signal |
US8013870B2 (en) * | 2006-09-25 | 2011-09-06 | Adobe Systems Incorporated | Image masks generated from local color models |
US7956847B2 (en) * | 2007-01-05 | 2011-06-07 | Apple Inc. | Gestures for controlling, manipulating, and editing of media files using touch sensitive devices |
US8225208B2 (en) * | 2007-08-06 | 2012-07-17 | Apple Inc. | Interactive frames for images and videos displayed in a presentation application |
US7721209B2 (en) * | 2008-09-08 | 2010-05-18 | Apple Inc. | Object-aware transitions |
- 2010-07-14 US US12/836,424 patent/US9778810B2/en active Active
- 2017-10-02 US US15/723,040 patent/US11061524B2/en active Active
- 2021-06-07 US US17/340,332 patent/US11567624B2/en active Active
- 2022-12-28 US US18/147,611 patent/US11928305B2/en active Active
- 2024-02-26 US US18/587,124 patent/US20240281104A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US11061524B2 (en) | 2021-07-13 |
US20240281104A1 (en) | 2024-08-22 |
US11928305B2 (en) | 2024-03-12 |
US20120210259A1 (en) | 2012-08-16 |
US11567624B2 (en) | 2023-01-31 |
US9778810B2 (en) | 2017-10-03 |
US20210294463A1 (en) | 2021-09-23 |
US20230137901A1 (en) | 2023-05-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11928305B2 (en) | Techniques to modify content and view content on mobile devices | |
US11223771B2 (en) | User interfaces for capturing and managing visual media | |
EP3792738B1 (en) | User interfaces for capturing and managing visual media | |
US20220294992A1 (en) | User interfaces for capturing and managing visual media | |
US8760464B2 (en) | Shape masks | |
KR101580478B1 (en) | Application for viewing images | |
JP7467553B2 (en) | User interface for capturing and managing visual media | |
US10809898B2 (en) | Color picker | |
US20070182999A1 (en) | Photo browse and zoom | |
US9235575B1 (en) | Systems and methods using a slideshow generator | |
JP2020507174A (en) | How to navigate the panel of displayed content | |
US9530183B1 (en) | Elastic navigation for fixed layout content | |
JP2004506995A (en) | Enlarging and editing parts of an image in the context of the image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: ZUMOBI, INC., WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BEDERSON, BENJAMIN B.;SANGIOVANNI, JOHN;REEL/FRAME:043767/0233 Effective date: 20090817 |
|
AS | Assignment |
Owner name: PACIFIC WESTERN BANK, NORTH CAROLINA Free format text: SECURITY INTEREST;ASSIGNOR:ZUMOBI, INC.;REEL/FRAME:043896/0732 Effective date: 20170927 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: ZUMOBI, INC., WASHINGTON Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PACIFIC WESTERN BANK;REEL/FRAME:048540/0463 Effective date: 20190308 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
AS | Assignment |
Owner name: ZUMOBI, LLC, TEXAS Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:ZUMOBI, INC.;ZUMOBI, LLC;REEL/FRAME:052123/0221 Effective date: 20191231 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |