WO2021070168A2 - Ganaka-4: ganaka applied to mobiles - Google Patents

Ganaka-4: ganaka applied to mobiles

Info

Publication number
WO2021070168A2
WO2021070168A2 (PCT/IB2020/061646)
Authority
WO
WIPO (PCT)
Prior art keywords
mobiles
display
mobile
synchronization
ganaka
Prior art date
Application number
PCT/IB2020/061646
Other languages
French (fr)
Other versions
WO2021070168A3 (en)
Inventor
Srinivasa Prasanna
Original Assignee
Srinivasa Prasanna
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Srinivasa Prasanna filed Critical Srinivasa Prasanna
Publication of WO2021070168A2 publication Critical patent/WO2021070168A2/en
Publication of WO2021070168A3 publication Critical patent/WO2021070168A3/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display, display composed of modules, e.g. video walls
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1438 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display, using more than one graphics controller
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/10 Intensity circuits
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/12 Synchronisation between the display unit and other units, e.g. other display units, video-disc players
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/363 Graphics controllers
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39 Control of the bit-mapped memory
    • G09G5/399 Control of the bit-mapped memory using two or more bit-mapped memories, the operations of which are switched in time, e.g. ping-pong buffers
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/04 Changes in size, position or resolution of an image
    • G09G2340/045 Zooming at least part of an image, i.e. enlarging it or shrinking it
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/04 Changes in size, position or resolution of an image
    • G09G2340/0464 Positioning
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2356/00 Detection of the display position w.r.t. other display screens
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00 Aspects of the architecture of display systems
    • G09G2360/14 Detecting light within display terminals, e.g. using a single or a plurality of photosensors
    • G09G2360/144 Detecting light within display terminals, e.g. using a single or a plurality of photosensors, the light being ambient light

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

Mobiles are small devices, with small displays. The ubiquitous nature of mobiles enables a collection of them to create a large display, functioning like a single one, and this invention describes how to do this, and some variants. We describe various facilities – calibration, brightness-contrast adjustment, synchronization including machine learning based synchronization, and semantic error control.

Description

TITLE OF THE INVENTION: Ganaka-4: Ganaka applied to Mobiles.
Contents
Ganaka-4: Ganaka applied to Mobiles
Abstract
Contents
1. FIELD OF THE INVENTION
2. ABSTRACT, SUMMARY and BACKGROUND
3. DISCUSSION OF THE PRIOR ART
4. BRIEF DESCRIPTION OF THE DRAWINGS
5. DETAILED DESCRIPTION OF THE INVENTION
1. Calibration
2. Brightness-Contrast Adjustment
3. Synchronization
3.1. Application Layer Latency
4. Use Cases
4.1.1. Still Image Frame Display
4.1.2. Video Display
4.1.3. Video Pausing
4.1.4. Zooming
4.1.5. Multiple attention areas & differently sized mobiles, and Laptops
4.2. An example
4.3. Multi-Phone Connection
7. CLAIMS
1. FIELD OF THE INVENTION
This invention relates to the software/hardware architecture of computers, and enhances ideas in prior patent applications. These ideas are included in POLYTOPE AND CONVEX BODY DATABASE as claimed in "POLYTOPE AND CONVEX BODY DATABASE" 2464/CHE/2012, "CONVEX MODEL DATABASES: AN EXTENSION OF POLYTOPE AND CONVEX BODY DATABASE" 201641038613, "DECISION SUPPORT METHODS UNDER UNCERTAINTY" 1677/CHE/2008 and related applications, and the provisional application No 201841039665 "Ganaka: A Computer Operating on Models", filed on 19/10/2018, and provisional application No 201941018933 "Ganaka-2: A Computer and Architecture Operating on Models, Cont'd", filed on 12/05/2019, and subsequent ones, and incorporates all the ideas therein by reference. The patent applications have now been converted to complete applications.
2. ABSTRACT, SUMMARY and BACKGROUND
Mobiles are small devices, with small displays. The ubiquitous nature of mobiles enables a collection of them to create a large display, functioning like a single one, and this invention describes how to do this, and some variants. The invention extends the bit-slice "Anu" described in the prior provisional application No 201841039665 "Ganaka: A Computer Operating on Models", filed on 19/10/2018, and subsequent ones. These provisionals have been converted to full applications.
3. DISCUSSION OF THE PRIOR ART
A prior description is in the provisional application No 201841039665 "Ganaka: A Computer Operating on Models", filed on 19/10/2018, provisional application No 201941018933 "Ganaka-2: A Computer and Architecture Operating on Models, Cont'd", filed on 12/05/2019, and provisional application No 201941019709, "Ganaka-3: A Computer and Architecture Operating on Models, Cont'd", and this extension in part provides more details, and further generalizations.
These provisionals have now been made full applications, with more details.
4. BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 Anu with accelerometers, position sensors, etc.
Figure 2 Original Image frame
Figure 3 Image frame from multiple mobiles arranged in an approximate grid, with gaps and obstructions
Figure 4 Image frame from multiple mobiles without calibration and brightness control, and some clutter
Figure 5 Multi Phone Diversity Connection with Multi-Mobile Display
5. DETAILED DESCRIPTION OF THE INVENTION
A first embodiment of ANU is a software-hardware package on mobiles, enabling them to collaboratively create a large display (called a multi-display), which is insensitive to motion of the individual mobiles, and offers the facilities of simultaneous zooming and synchronized video.
In most of the following, we describe an embodiment where a 2D M x N image frame is divided equally, with constant magnification, in an array of M1 x N1 mobiles (the mobiles exemplarily use the Ganaka processor for applications). However, the equal split is by no means needed, and different mobiles can display arbitrary portions of the image frame, at arbitrary and possibly position varying magnifications, with or without gaps and overlaps. Some and/or all mobiles can be laptops, desktops, and other types of devices. The audio and video corresponding to a frame may be played on the same mobile, or split between different and multiple mobiles. Thus, while the description is for an exemplary embodiment, the claims extend to all variants.
Below we refer to the provisional application "Ganaka" filed on October 19th, 2018, the associated complete applications, and ANU description therein. Here we add the following capability (Figure 1).
• Calibration (brightness, contrast, timing, ...) of each mobile's display and sound, so as to give a coherent and unified multi-display device.
• Ability for the mobiles to sense each other's position (using Bluetooth, GPS, and other forms of signaling), even if not touching, and adjust the portion of the display accordingly, referred to as "smart cropping" (see Figure 1).
  o Changing the "smart cropping", based on change in position, from accelerometer and other sensor information (see Figure 1). Essentially the instantaneous position of each possibly moving mobile is estimated, and that portion of the image frame is cropped. Similarly for changes in magnification. Standard methods from motion compensation can be adopted for this purpose.
  o The cropping may be unrelated to position and/or magnification, and the cropping area specified in a table accessible to the mobiles.
• Changing brightness, contrast, and other display parameters, based on action in a scene - the phone which has more action is exemplarily brighter. The action can be exemplarily determined from a machine learning algorithm like YOLO.
• Installation on arbitrary smart mobiles (these must be software/hardware compatible). This enlarges the user base, as one does not have to buy a special mobile for this.
  o A downloadable app will enable the multi-display operation, using the functions below. Data which has to be displayed can be downloaded in compressed format (e.g. MPEG), from standard wireless connections (2G, 3G, 4G, 5G, ...), or as polyhedral/convex/non-convex models, as per our earlier referenced inventions.
The working of the invention is described below. We first describe key facilities, and then put them all together in several use cases.
1. Calibration
It is unrealistic to expect all mobile displays to have the same calibration, i.e., the same colour brightness for the same input pixel numerical value (24-bit true-colour say). This implies that the different mobile displays have to be calibrated, to avoid annoying breaks between mobiles in the same picture area.
This can be done using several calibration methods. We can exemplarily use the method below:
• An external sensor to calibrate each mobile. This external sensor (exemplarily another mobile) determines the forward signal-to-light transfer function for each mobile (true-colour value to lumens). This is inverted (possibly taking visual sensitivity criteria, gamma correction, ... into account) to create a calibration table relating desired RGB lumens to the input pixel 24-bit true-colour value, or equivalent. This calibration table is used by applications to control the mobile display (a sketch of this inversion follows this list).
• The system operates in a master-slave fashion. The Master mobile/sensor commands each display mobile to display a test pattern. The master views the displayed test pattern, and determines the transfer function and inverse accordingly, for each mobile.
  o Software (Apps) exists on master and display mobiles, for this handshake.
• Instead of the external sensor, a mirror can be used to reflect light back to the same mobile and a “selfie” taken to determine the forward and reverse transfer functions.
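As an illustration of the calibration step, the sketch below (Python) inverts a measured transfer function into the calibration table described above. The gamma-2.2 response curve, the 300-lumen peak, and all names are assumptions for illustration only; in the actual system the curve comes from the master mobile photographing test patterns.

```python
import numpy as np

def build_calibration_table(measured_lumens: np.ndarray) -> np.ndarray:
    """Invert a measured per-channel transfer function (lumens produced for
    each 8-bit input value) into a lookup table mapping a desired output
    level to the input value that should be written to the display."""
    # Desired output levels: 256 evenly spaced lumen targets spanning the
    # display's measured range.
    targets = np.linspace(measured_lumens[0], measured_lumens[-1], 256)
    # For each target, pick the input value whose measured output is closest
    # (measured_lumens is assumed monotonically increasing).
    table = np.searchsorted(measured_lumens, targets).clip(0, 255)
    return table.astype(np.uint8)

# Illustration only: a display whose response is a gamma-2.2 curve with a
# 300-lumen peak (assumed values, not measurements).
inputs = np.arange(256) / 255.0
measured = 300.0 * inputs ** 2.2
cal = build_calibration_table(measured)
# Applications index `cal` by the desired (linear) level to obtain the
# pixel value to send to this particular mobile's display.
```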
2. Brightness-Contrast Adjustment
A machine learning algorithm can be used to determine where the action is in a scene, and the mobile displaying that region can be made brighter, with more contrast. It can also play a pleasing low volume sound, to draw attention to that portion, ... An exemplary algorithm is YOLO, well known in the state of art.
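A minimal sketch of this adjustment, assuming per-mobile activity scores have already been obtained from a detector such as YOLO; the mapping and its parameters are illustrative choices, not prescribed by the invention.

```python
def brightness_factors(activity: dict, base: float = 0.6, boost: float = 0.4) -> dict:
    """Map per-mobile activity scores (e.g. summed YOLO detection
    confidences inside each mobile's frame portion) to brightness factors
    in [base, base + boost], so the mobile with the most action is brightest."""
    peak = max(activity.values()) or 1.0   # avoid divide-by-zero on a still scene
    return {m: base + boost * (s / peak) for m, s in activity.items()}

# Hypothetical scores for a 2 x 2 array of mobiles:
factors = brightness_factors({"m00": 0.2, "m01": 1.8, "m10": 0.0, "m11": 0.5})
# "m01" (most action) gets factor 1.0; the others are dimmed proportionally.
```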
3. Synchronization
When video is displayed, the mobiles should update their individual displays (cropping rectangle, brightness, sound, ...), or play audio, at the same time, else different frames on different mobiles can get superimposed on each other, making quality unacceptable. At 30 FPS, there is insufficient time for the mobiles to communicate with each other, unless a complex high speed semaphore or equivalent is used (this may need hardware changes in the mobile).
Alternatively, the local clocks of the mobiles can be synchronized at the beginning of a video sequence, and a periodic interrupt by each mobile individually generated for display updating (no communication). For variable frame rate video, the next interrupt time can be calculated from the timing available in the video file (e.g. MPEG). Slow drift (if any) in the mobile clocks can be corrected by periodic resynchronization, using semaphores, or exemplarily the Network Time Protocol (NTP).
Both can be used in conjunction - with local clocks being used most of the time, with periodic resynchronization to correct for drift using semaphores or equivalents.
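The following sketch shows one way the clock-based scheme could look on each mobile, under stated assumptions (a fixed 30 FPS interval and an NTP-style offset correction); it is illustrative, not the invention's prescribed implementation.

```python
import time

FRAME_INTERVAL = 1.0 / 30.0  # 30 FPS; for variable-rate video this would be
                             # read from the file's timing (e.g. MPEG timestamps)

class LocalFrameClock:
    """Per-mobile frame scheduling with no per-frame communication: each
    mobile derives display instants from an epoch agreed at sequence start,
    plus a drift offset refreshed by periodic resynchronization."""

    def __init__(self, start_epoch: float):
        self.start_epoch = start_epoch  # agreed global start time
        self.offset = 0.0               # estimate of global_time - local_time

    def resync(self, reference_now: float) -> None:
        # Periodic drift correction from a semaphore exchange or NTP.
        self.offset = reference_now - time.time()

    def wait_for_frame(self, frame_index: int) -> None:
        # Global deadline for this frame, translated into the local clock.
        local_deadline = self.start_epoch + frame_index * FRAME_INTERVAL - self.offset
        delay = local_deadline - time.time()
        if delay > 0:
            time.sleep(delay)
```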
3.1. Application Layer Latency
A complication is that the non-real-time operating system and application layer introduce latency between the time of the clock interrupt and the frame buffer being displayed, which is difficult to characterize. Different application layer latencies imply that the mobiles update their displays (or play sound) at different times, in spite of being commanded by the globally synchronized mobile clock.
If hardware changes can be made, the clock interrupt can directly feed and switch the updated frame buffer, provided the software can be prioritized to fill it before the clock interrupt. Otherwise, the latency has to be estimated using exemplarily another mobile, recording the displayed image frame portion, and its timing with respect to the global clock, in exemplarily a mobile idle condition (other applications, including the Java Virtual Machine stopped). Ganaka's machine learning capabilities can be employed to estimate application layer delays, based on machine state also.
A special pattern can be used for this purpose, and used for initial latency estimation. Application layer latency drift can be periodically estimated by superposing a test pattern at a portion of the next image frame, and timing when it gets displayed. Since only latency differences between mobile displays are relevant, this measurement can be conveniently done by splitting the test pattern across the edge or corner between two mobiles, and sensing which mobile shows it first (within the resolution of the inter-frame interval).
The test pattern can be a high spatial and temporal frequency clip, and recorded using another mobile, or a selfie through a mirror or other sensor. The estimated differential delay between different mobiles is compensated for by advancing/delaying the clock display/play-sound commands.
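A sketch of the compensation arithmetic, assuming per-mobile latency estimates are already available from the measurement above; aligning to the slowest device is one possible policy.

```python
def command_delays(latency: dict) -> dict:
    """Given the estimated application-layer display latency of each mobile
    (seconds, from the split-test-pattern measurement above), return how
    long each mobile should hold back its display command so that all
    frames appear simultaneously, aligned to the slowest device."""
    worst = max(latency.values())
    return {m: worst - lat for m, lat in latency.items()}

# Hypothetical measurements: mobile "b" lags mobile "a" by 12 ms, so "a"
# delays its commands by 12 ms and "b" issues them at the nominal instant.
delays = command_delays({"a": 0.028, "b": 0.040})  # {"a": 0.012, "b": 0.0}
```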
Figure 1 shows a mobile with position sensing F1_100, brightness-contrast control F1_200, a cropping module F1_300, and a synchronization semaphore tree or synchronized clock system F1_400. The input to these mobiles is from wireless signal F1_500. These blocks are used below, and internally in the multiple display device F5_300 in Figure 5.
In Figure 1, connectors between two mobiles are not shown, and may not exist in some embodiments. The brightness-contrast shading is deeper in mobiles with detected relatively high action (from exemplarily a Machine Learning Program embedded in the mobile). The figure also shows the semaphore tree or synchronized clock system, to signal that all display buffers are synchronized during video display.
4. Use Cases
Based on the above basic facilities, the invention offers a large number of features for both video and audio, illustrated by the following use cases. In all cases, the display steps are as follows (described exemplarily for a double buffered system - one buffer being displayed, and another being updated; a sketch of this loop follows the numbered steps):
1. First, the mobiles are calibrated, synchronized, and brightness-contrast adjusted as described above.
2. The frame to be displayed next is determined from the local time clock, globally synchronized periodically using a semaphore tree or other mechanism, as described above.
3. Each mobile determines the portion of the next frame, which it has to display. This is a function of the position of the mobile in the 2-D array of mobiles, their size (which may not be equal - some mobiles may be laptops/desktops also), the local magnification used, and the picture dimensions (height x width in pixels).
3.1. It can also be determined from a table, specifying which portion of the image frame is displayed on which mobile (there need not be a 1-1 map between the layout of mobiles, and the 2-D frame). The mobiles may cover the image frame with gaps and/or overlapping portion.
4. The calibration and brightness of each mobile may also depend on activity in the respective frame portion, as determined exemplarily using a YOLO algorithm, or be specified in a table.
5. The contents of the buffer to be displayed next are updated, using cropping/brightness- contrast transformations, as per the parameters above.
6. At the next display instant, as determined by the local time clock, with compensation for latency above, the buffers are switched, and the updated buffer is displayed and the displayed buffer becomes the next one to be updated.
This process is repeated continuously.
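The loop below sketches steps 2-6 for a single mobile, under the double-buffering assumption above. The `mobile` object and its methods are hypothetical stand-ins for the device-specific display interface; `clock` is the latency-compensated LocalFrameClock sketched in the synchronization section.

```python
def run_display(mobile, clock, frames):
    """One mobile's double-buffered display loop (steps 2-6 above)."""
    buffers = [None, None]
    back = 0                                            # buffer being updated
    for k, frame in enumerate(frames):
        region = mobile.crop_params(k)                  # step 3: this mobile's portion
        buffers[back] = mobile.transform(frame, region) # steps 4-5: crop, calibrate,
                                                        # brightness-contrast adjust
        clock.wait_for_frame(k)                         # step 6: synchronized instant
        mobile.show(buffers[back])                      # switch: back buffer goes live
        back ^= 1                                       # old front becomes next back
```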
Under this basic framework, the system offers the following facilities.
4.1.1. Still Image Frame Display
Brightness and colour calibration, and correct cropping parameters (top left, height, width, ...) are determined. If specified in exemplarily a table, magnification can also change between mobiles. After this step, the image frame is displayed. For audio, the analogy is a sound being played once.
4.1.2. Video Display
The challenge here is the synchronization of displays, driven by independent frame buffers. This is achieved by updating and displaying, using local clocks. These local clocks are corrected for differential display latency (including application layer latency), and periodically resynchronized using communication for correcting drift, as described above.
4.1.3. Video Pausing
At the application level, all frames displayed in the mobiles pause together. This is achieved through inter-mobile communication, using a semaphore tree, or other signaling methods including Bluetooth or SMS messages. When we restart, old frames can be discarded in all mobiles holding them. Rolling video back requires reloading the mobiles with discarded data.
4.1.4. Zooming
A zoom command to a single mobile will zoom the displays in all synchronously - again using inter-mobile communication as above. Zooming may also be local to the mobile only. Since the portion of the image frame displayed changes, data and/or models as per our earlier invention may have to be reloaded.
4.1.5. Multiple attention areas & differently sized mobiles, and Laptops
There are many variants using the above facilities. A single contiguous portion of an image frame need not be displayed all at the same magnification. Some areas can be eliminated (zero magnification), and only important areas shown (areas requiring attention).
Similarly, all mobiles need not be the same size, and some could even be laptops/desktops. Mobiles can be hung on the sides of laptops showing a scene, with the mobiles showing detail in important areas, exemplarily identified by software like YOLO.
4.2. An example
Some further details are given below.
In order, Figure 2, Figure 3, and Figure 4 show an original image frame, one displayed across multiple mobiles arranged in a 2-D array (with gaps, which are allowed in this embodiment of the invention), correctly cropped, and another also displayed across multiple mobiles, but without the calibration and brightness control step, and with clutter added. Clearly the second image frame (multiple mobiles with calibration) is intelligible, while the third one without calibration is not intelligible.
The mobiles exchange location information (updated as the mobiles move) using Bluetooth/SMS/..., to determine the cropping area parameters (e.g. top left corner, w.r.t. the image frame, and length-width). The mobile's image frame portion can be zoomed in/out, across the 2-D mobile array. Essentially, the mobiles function as a single large display, by exchanging information. Each mobile needs to have only the portion of the image frame it displays, and not the entire image frame. For a frame of N x M pixels, shown exemplarily mapped uniformly onto an N1 x M1 mobile array, the (ith, jth) mobile shows pixels in the (N/N1, M/M1) sized rectangle, whose top left corner is at
(i*(N/N1), j*(M/M1))
This formula ignores gaps. Other maps including nonlinear ones, with magnification varying across mobiles, can be defined in a lookup table with a record for each mobile.
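The uniform map above, written out as code (gaps and per-mobile magnification ignored, exactly as in the formula):

```python
def crop_rect(i: int, j: int, N: int, M: int, N1: int, M1: int):
    """Uniform map of an N x M pixel frame onto an N1 x M1 mobile array:
    the (i-th, j-th) mobile shows the (N/N1, M/M1) sized rectangle whose
    top left corner is (i*(N/N1), j*(M/M1))."""
    h, w = N // N1, M // M1
    return (i * h, j * w, h, w)   # (top, left, height, width)

# A 1080 x 1920 frame on a 2 x 3 array: mobile (1, 2) shows a 540 x 640
# rectangle with its top left corner at (540, 1280).
print(crop_rect(1, 2, 1080, 1920, 2, 3))      # (540, 1280, 540, 640)
```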
The frame buffers in the mobiles have to be synchronized during motion, if video is displayed. This can be done by the above described double-buffering scheme, with the current buffer being displayed, and the next frame updated in another buffer. The buffers in all mobiles will be synchronized using synchronized clocks, with differential delay compensation, as described above, or a high speed shared semaphore (implemented using a binary tree - Figure 1).
4.3. Multi-Phone Connection
Analogous to display resources being partitioned, processing, memory, and I/O resources can also be partitioned. This is especially useful in rural areas, where mobile connectivity is poor and intermittent, and connection using multiple mobiles can be used to improve reception for applications requiring good connectivity - e.g. remote education, remote diagnosis, ...
We can use multiple mobiles, with a Wi-Fi connection to improve both reliability and bandwidth, without any hardware changes. The mobiles can easily be placed several lambdas from each other, for spatial diversity.
Figure 5 shows multiple mobiles F5_100 communicating via exemplarily Wi-Fi or USB, to a Ganaka F5_200, which combines possibly erroneous/paused/incomplete received data (this is at the application layer, and distinct from MIMO). F5_200 can exemplarily use machine learning and inference techniques to combine partially completed data in a mobile whose connection is congested with data from another mobile, which has the missing data portion. F5_200 in turn drives the multi-mobile display F5_300, as described above.
Essentially the multiple mobiles F5_100 are used for providing reception diversity after decoding. The Ganaka uP F5_200 is used for stream multiplexing and error control. It is important to note that this error control is at the application layer, after decoding and removal of packet headers. An example of such an error control is the same video frame being played by two mobiles, but with the first having a frozen display. Ganaka's machine learning recognizes this frozen display, and uses the data from the second mobile. In machine learning parlance, the frame from the first mobile is classified outside the set of allowable frames, and the duplicate from the second one used. More sophisticated 1:K schemes (one mobile stands by for K others) can be used also. Essentially, Ganaka F5_200 performs semantic error control in this application (as distinct from ECC - e.g. BCH codes, Turbo codes, ...).
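A sketch of the 1:1 duplicate substitution described above. The frozen-frame test here is a deliberately crude stand-in for Ganaka's machine-learning classifier; frames are assumed to be numpy arrays of decoded pixels.

```python
import numpy as np

def looks_frozen(prev, cur, tol: float = 1e-3) -> bool:
    """Stand-in for the ML classifier: a frame (nearly) identical to its
    predecessor is classified outside the set of allowable frames,
    i.e. the stream is taken to be frozen."""
    if prev is None:
        return False
    return float(np.mean(np.abs(cur.astype(float) - prev.astype(float)))) < tol

def combine_streams(stream_a, stream_b):
    """Application-layer semantic error control over two decoded streams of
    the same video: prefer mobile A's frame, substituting mobile B's
    duplicate whenever A appears frozen (the 1:1 case; 1:K is analogous)."""
    prev = None
    for frame_a, frame_b in zip(stream_a, stream_b):
        yield frame_b if looks_frozen(prev, frame_a) else frame_a
        prev = frame_a
```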
Multiple mobiles are used in the display F5_300, as described above.
The internals of the mobiles are not required, since the connection is only through the Wi-Fi or USB port (as a hotspot). This is important, since the mobile internals are often proprietary.

Claims

7. CLAIMS
We claim:
1. A computer system incorporating a display spread out across multiple mobile phones, automatically cropping and calibrating, and zooming, to present a display, with a double buffered graphics display synchronized in different mobiles.
2. The system of claim 1 where synchronization is achieved using the clocks on individual mobiles, synchronized with each other at the start of the display sequence.
3. The system of claim 2 where the synchronization is periodically reset, to eliminate clock drift.
4. The system of claim 3 where the periodic reset of said synchronization is done using a semaphore tree distributed across multiple mobiles.
5. The system of claim 1 where synchronization is achieved using a semaphore tree, distributed across multiple mobiles.
6. The system of claim 1 offering the facilities of simultaneous zoom and pause on all mobiles.
7. The system of claim 1 offering different amounts of zooming in different mobiles, exemplarily with different portions of the image frame shown in different mobiles.
8. The system of claim 1, where the input to display mobiles is through a combiner accessing a Wi-Fi interface from other multiple mobiles, said other multiple mobiles sending multiple and possibly error prone decoded application layer data, with said combiner removing errors at the application level.
PCT/IB2020/061646 2019-10-08 2020-12-08 Ganaka-4: ganaka applied to mobiles WO2021070168A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201941040790 2019-10-08
IN201941040790 2019-10-08

Publications (2)

Publication Number Publication Date
WO2021070168A2 true WO2021070168A2 (en) 2021-04-15
WO2021070168A3 WO2021070168A3 (en) 2021-05-20

Family

ID=75437170

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/061646 WO2021070168A2 (en) 2019-10-08 2020-12-08 Ganaka-4: ganaka applied to mobiles

Country Status (1)

Country Link
WO (1) WO2021070168A2 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150084837A1 (en) * 2013-09-19 2015-03-26 Broadcom Corporation Coordination of multiple mobile device displays
US10607571B2 (en) * 2017-08-14 2020-03-31 Thomas Frederick Utsch Method and system for the distribution of synchronized video to an array of randomly positioned display devices acting as one aggregated display device

Also Published As

Publication number Publication date
WO2021070168A3 (en) 2021-05-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20875375

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20875375

Country of ref document: EP

Kind code of ref document: A2