WO2016103057A1 - Method and apparatus for processing an image on an electronic device - Google Patents

Method and apparatus for processing an image on an electronic device

Info

Publication number
WO2016103057A1
Authority
WO
WIPO (PCT)
Prior art keywords
processing thread
electronic device
segments
image
graphical primitives
Prior art date
Application number
PCT/IB2015/052554
Other languages
English (en)
Inventor
Sergey Sergeevich KONSTANTINOV
Original Assignee
Yandex Europe Ag
Yandex Llc
Yandex Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yandex Europe Ag, Yandex Llc, Yandex Inc.
Publication of WO2016103057A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/103 Formatting, i.e. changing of presentation of documents
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/12 Use of codes for handling textual entities
    • G06F40/14 Tree-structured documents
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles

Definitions

  • the present technology relates to methods and apparatuses for processing an image and more specifically to a method and apparatus for processing the image on an electronic device.
  • a typical user of electronic devices can access a plethora of information (such as information available on the various resources hosted on the Internet and the like). Some of the information so accessed includes multi-media - such as images, videos, audio files and the like. Some of the images available on these websites can be quite voluminous and require significant computing resources of the electronic device to render the image on a screen of the electronic device.
  • One example of such an image that requires significant computing resources is a geographical map to be displayed on the electronic device.
  • Many web sites and applications for electronic devices display geographical maps. Some applications are dedicated solely to the display of geographical maps. In addition to displaying the geographical maps, these web sites and applications often display objects overlaid on the map, such as points of interest, waypoints, labels, icons, itineraries and the like. In many cases, these maps are interactive. For example, for a map displayed in a web browser, moving the mouse pointer over an object on the map can cause the display of a description of the object, such as the address or the name of the location.
  • clicking on the object can display a balloon containing additional information associated with the location represented by the object, such as the name of the location, address, phone number, link to the web site, and in the case of service businesses, such as restaurants and hotels, ratings, links for making a reservation or to clients' reviews.
  • a user's request for a particular map section is sent to servers associated with the particular web site or application being used. For example, a person could be looking for a map of the downtown of a particular city.
  • a map server retrieves a map tile or tiles corresponding to the requested map section.
  • An application server can also retrieve additional information corresponding to the location requested including objects and data to generate the interactive elements of the map.
  • the server side then sends the requested map section (typically in tiles) to the user's electronic device to be displayed on the user interface for the user to view and, optionally, to interact with.
  • the user's electronic device then renders the map view, using the received tiles.
  • the amount of processing power required on the user's electronic device is quite high considering the high resolution of map images, as well as the potential need to re-render the map view when the user zooms in, zooms out or moves the map view.
  • in one known approach, an apparatus (e.g., a map server) provides tiled vector image data for a map image.
  • the tiled vector image data is transferred to a remote device (e.g., a client device) for rendering of the map image.
  • the vector image data may comprise one or more attributes associated therewith, the one or more attributes configurable by a receiving device.
  • the map image may be rendered (e.g., customized) based on one or more local attributes to vary the language used, colors, how items in the map are displayed, and other imaging characteristics of the map image.
  • JavaScript allows for building relatively complex web resources, as well as for executing some of the programming code on the electronic device that is used to access the web resource.
  • the browser executing the JavaScript code (or more specifically the core of the operating system on which the browser is executed) creates a main processing thread for processing the code of the JavaScript.
  • the main processing thread executes, in sequence, the blocks of the JavaScript.
  • the worker threads are allowed to execute parallel processes within the JavaScript, with the limitation that worker threads can only execute processes that do not require interaction with the user interface of the electronic device executing the browser that, in turn, executes the JavaScript.
  • the main processing thread and the worker threads exchange data.
  • the data exchanged between the main processing thread and the worker threads includes: (i) tasks to be executed by the worker threads and (ii) results of the execution of the tasks by the worker threads.
  • the main processing thread then uses the results of the execution by the worker threads to output the results (or a portion thereof) on the user interface of the electronic device.
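  • By way of a non-limiting illustration only, the following sketch shows this main-thread/worker-thread exchange; the file names, the message fields and the trivial task below are assumptions introduced here for illustration and are not part of the present technology:

        // main.js - runs on the main processing thread
        const worker = new Worker('worker.js');          // spawn a child (worker) thread
        worker.onmessage = (event) => {
          // (ii) the result of the execution comes back to the main thread,
          // which alone is allowed to touch the user interface
          document.title = 'Result: ' + event.data.result;
        };
        // (i) a task to be executed by the worker thread
        worker.postMessage({ task: 'square', value: 7 });

        // worker.js - runs on a child (worker) thread; it has no access to the DOM
        self.onmessage = (event) => {
          const { value } = event.data;
          self.postMessage({ result: value * value });    // send the result back
        };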
  • a method of processing an image executed on an electronic device.
  • the electronic device has an application executed by the electronic device using a main processing thread and at least one child processing thread, the at least one child processing thread being dependent from the main processing thread.
  • the method comprises: acquiring the image to be rendered for displaying on the electronic device; splitting the image into at least two segments, each of the at least two segments being an image that is amenable to being rendered using graphical primitives; acquiring, by a respective child processing thread, an indication of the respective segment of the at least two segments and a set of required graphical primitives, such that each child processing thread receives a single instance of the respective segment for processing at a given time; allocating a new space in a memory of the electronic device for each of the respective child processing threads; rendering, by the respective child processing thread, the respective image of each respective segment in corresponding allocated space of memory; saving each rendered image in a form of an array of bytes; transmitting, by the respective child processing thread, the array of bytes to the main processing thread; at the main processing thread, rendering a final image on a screen of the electronic device using sets of array of bytes.
  • the acquiring, by the respective child processing thread, the set of required graphical primitives comprises: for each segment of the at least two segments, calculating, by the main processing thread, the set of required graphical primitives required for rendering the respective segment; transmitting the indication of the respective segment and the set of required graphical primitives to the respective child processing thread.
  • the set of required graphical primitives is a subset of all possible graphical primitives and wherein the method further comprises selecting, by the main processing thread, the set of required graphical primitives for each of the at least two segments.
  • the acquiring, by the respective child processing thread, the set of required graphical primitives comprises: receiving, from the main processing thread, all possible graphical primitives; selecting, by the respective child processing thread, from all possible graphical primitives a subset of graphical primitives, the subset being the set of required graphical primitives required for processing the associated one of the at least two segments.
  • the main processing thread and the at least one child processing thread are components of a JavaScript architecture.
  • the method further comprises selecting a number of the at least one child processing thread.
  • the selecting is executed as a function of computational power of a processor of the electronic device.
  • the application is a browser application.
  • the image comprises a portion of a map.
  • the method further comprises receiving a request for a map view, the map view including the portion of the map.
  • the at least two segments comprise a plurality of segments and wherein each respective child processing thread receives a next one of the plurality of segments after completing processing of a previous one of the plurality of segments.
  • an electronic device comprises: a user input interface and a user output interface; a processor coupled to the user input interface and the user output interface, the processor configured to execute an application using a main processing thread and at least one child processing thread, the at least one child processing thread being dependent from the main processing thread, the processor being further configured to: acquire the image to be rendered for displaying on the electronic device; split the image into at least two segments, each of the at least two segments being an image that is amenable to being rendered using graphical primitives; cause a respective child processing thread to acquire an indication of the respective segment of the at least two segments and a set of required graphical primitives, such that each child processing thread receives a single instance of the respective segment for processing at a given time; allocate a new space in a memory of the electronic device for each of the respective child processing threads; cause the respective child processing thread to render the respective image of each respective segment in corresponding allocated space of memory; save each rendered image in a form of an array of bytes; cause the respective child processing thread to transmit the array of bytes to the main processing thread; and cause the main processing thread to render a final image on a screen of the electronic device using the arrays of bytes.
  • the processor is configured to: for each segment of the at least two segments, to cause the main processing thread, to calculate the set of required graphical primitives required for rendering the respective segment and to transmit the indication of the respective segment and the set of required graphical primitives to the respective child processing thread.
  • the set of required graphical primitives is a subset of all possible graphical primitives and wherein the processor is further configured to cause the main processing thread to select the set of required graphical primitives for each of the at least two segments.
  • the processor is configured to cause the respective child processing thread to: receive, from the main processing thread, all possible graphical primitives; select from all possible graphical primitives a subset of graphical primitives, the subset being the set of required graphical primitives required for processing the associated one of the at least two segments.
  • the main processing thread and the at least one child processing thread are components of a JavaScript architecture.
  • the processor is further configured to select a number of the at least one child processing thread. In some implementations of the electronic device, the processor is configured to select the number of the at least one child processing thread as a function of computational power of the processor.
  • the application is a browser application.
  • the image comprises a portion of a map.
  • the processor is further configured to receive, via the user input interface, a request for a map view, the map view including the portion of the map.
  • the at least two segments comprise a plurality of segments and wherein each respective child processing thread receives a next one of the plurality of segments after completing processing of a previous one of the plurality of segments, as shown in the sketch below.
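  • A possible sketch of this dispatch scheme, in which each child processing thread is handed a single segment at a time and receives the next segment only after completing the previous one (the message shape and the onResult callback are illustrative assumptions):

        function dispatchSegments(segments, workers, onResult) {
          const queue = segments.slice();                   // segments still awaiting processing
          workers.forEach((worker) => {
            const sendNext = () => {
              const segment = queue.shift();
              if (segment) worker.postMessage({ segment }); // one segment per worker at a time
            };
            worker.onmessage = (event) => {
              onResult(event.data);  // rendered segment received back on the main thread
              sendNext();            // hand this worker the next segment, if any remain
            };
            sendNext();              // prime each worker with its first segment
          });
        }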
  • a "server" is a computer program that is running on appropriate hardware and is capable of receiving requests (e.g. from client devices) over a network, and carrying out those requests, or causing those requests to be carried out.
  • the hardware may be one physical computer or one physical computer system, but neither is required to be the case with respect to the present technology.
  • a "server” is not intended to mean that every task (e.g. received instructions or requests) or any particular task will have been received, carried out, or caused to be carried out, by the same server (i.e. the same software and/or hardware); it is intended to mean that any number of software elements or hardware devices may be involved in receiving/sending, carrying out or causing to be carried out any task or request, or the consequences of any task or request; and all of this software and hardware may be one server or multiple servers, both of which are included within the expression "at least one server”.
  • client device is any computer hardware that is capable of running software appropriate to the relevant task at hand.
  • client devices include personal computers (desktops, laptops, netbooks, etc.), smartphones, and tablets.
  • a device acting as a client device in the present context is not precluded from acting as a server to other client devices.
  • the use of the expression "a client device” does not preclude multiple client devices being used in receiving/sending, carrying out or causing to be carried out any task or request, or the consequences of any task or request, or steps of any method described herein.
  • a “database” is any structured collection of data, irrespective of its particular structure, the database management software, or the computer hardware on which the data is stored, implemented or otherwise rendered available for use.
  • a database may reside on the same hardware as the process that stores or makes use of the information stored in the database or it may reside on separate hardware, such as a dedicated server or plurality of servers.
  • the expression “component” is meant to include software (appropriate to a particular hardware context) that is both necessary and sufficient to achieve the specific function(s) being referenced.
  • the expression “computer usable information storage medium” is intended to include media of any nature and kind whatsoever, including RAM, ROM, disks (CD-ROMs, DVDs, floppy disks, hard drives, etc.), USB keys, solid-state drives, tape drives, etc.
  • the expression “interactive” is meant to indicate that something is responsive to a user's input or that at least portions thereof are responsive to a user's input.
  • Implementations of the present technology each have at least one of the above- mentioned object and/or aspects, but do not necessarily have all of them. It should be understood that some aspects of the present technology that have resulted from attempting to attain the above-mentioned object may not satisfy this object and/or may satisfy other objects not specifically recited herein.
  • Figure 1 is a screenshot depicting a typical map view image, the typical map view image being an example of an image that can be processed using embodiments of the present technology.
  • Figure 2 depicts a schematic diagram of a system suitable for implementing non- limiting embodiments of the present technology.
  • Figure 3 depicts a schematic diagram of a client device of the system of Figure 2.
  • Figure 4 depicts a flow chart of a method executed according to non-limiting embodiments of the present technology, the method being executed by the client device of Figure 2.
  • Figure 5 depicts a map section being processed in accordance with embodiments of the present technology.
  • Figure 6 depicts a schematic representation of a main processing thread, a first child processing thread, a second child processing thread and an Nth child processing thread that are created in accordance with non-limiting embodiments of the present technology.
  • Referring to Figure 2, there is shown a schematic diagram of a system 200 suitable for implementing non-limiting embodiments of the present technology.
  • the system 200 is depicted merely as an illustrative implementation of the present technology.
  • the description thereof that follows is intended to be only a description of illustrative examples of the present technology. This description is not intended to define the scope or set forth the bounds of the present technology.
  • what are believed to be helpful examples of modifications to the system 200 may also be set forth below. This is done merely as an aid to understanding, and, again, not to define the scope or set forth the bounds of the present technology.
  • the system 200 comprises an electronic device 202.
  • the electronic device 202 is typically associated with a user 204 and, as such, can be referred to herein below as a client device 202. It should be noted that the fact that the client device 202 is associated with the user 204 does not need to suggest or imply any mode of operation, such as a need to log in, a need to be registered or the like.
  • the implementation of the client device 202 is not particularly limited, but as an example, the client device 202 may be implemented as a personal computer (desktop (as shown), laptop, netbook, etc.) or as a wireless communication device (a cell phone, a smartphone, a tablet and the like).
  • the client device 202 comprises hardware and software and/or firmware (or a combination thereof) for executing an application (such as a browser application, a mapping application and the like), the application for inter alia displaying a user interface, such as an interface of a browser application which can be used for accessing a map image using a map application or a browser application.
  • the map application can be, but is not limited to, a dedicated mapping application, such as the Yandex.Maps application for mobile devices, a web browser, or any other application including a map display portion.
  • the user interface can be implemented on a web page that is not otherwise dedicated to maps (such as, for example, a web site of general interest, such as a bank web site, a restaurant web site and the like).
  • the client device 202 may comprise a processor 303.
  • the processor 303 may comprise one or more processors and/or one or more microcontrollers configured to execute instructions and to carry out operations associated with the operation of the client device 202.
  • Processor 303 may be implemented as a single chip, multiple chips and/or other electrical components including one or more integrated circuits and printed circuit boards.
  • Processor 303 may optionally contain a cache memory unit (not depicted) for temporary local storage of instructions, data, or computer addresses.
  • the processor 303 may include one or more processors or one or more controllers dedicated for certain processing tasks of the client device 202 or a single multifunctional processor or controller.
  • the processor 303 is operatively coupled to a memory module 304.
  • Memory module 304 may encompass one or more storage media and generally provide a place to store computer code (e.g., software and/or firmware).
  • the memory module 304 may include various tangible computer-readable storage media including Read-Only Memory (ROM) and/or Random-Access Memory (RAM).
  • Memory module 304 may also include one or more fixed storage devices in the form of, by way of example, hard disk drives (HDDs), solid-state drives (SSDs), flash-memory cards (e.g., Secure Digital or SD cards, embedded MultiMediaCard or eMMC cards), among other suitable forms of memory coupled bi-directionally to the processor 303. Information may also reside on one or more removable storage media loaded into or installed in the client device 202 when needed. By way of example, any of a number of suitable memory cards (e.g., SD cards) may be loaded into the client device 202 on a temporary or permanent basis.
  • the memory module 304 may store inter alia a series of computer-readable instructions, which instructions when executed cause the processor 303 (as well as other components of the client device 202) to execute the various operations described herein.
  • the client device 202 further comprises an input output module 306.
  • Input output module 306 may comprise one or more input and output devices operably connected to processor 303.
  • input output module 306 may include a keyboard, a mouse, one or more buttons, a thumb wheel, and/or a display (e.g., a liquid crystal display (LCD), a light emitting diode (LED) display, an interferometric modulator display (IMOD), or any other suitable display technology).
  • input devices are configured to transfer data, commands and responses from the outside world into client device 202.
  • the display is generally configured to display a graphical user interface (GUI) that provides an easy to use visual interface between a user of the client device 202 and the operating system or application(s) running on the client device 202.
  • Input output module 306 may also include touch-based devices such as a touchpad and a touch screen.
  • a touchpad is an input device including a surface that detects touch-based inputs of users.
  • a touch screen is a display that detects the presence and location of user touch inputs.
  • Input output module 306 may also include dual touch or multi-touch displays or touchpads that can identify the presence, location and movement of more than one touch inputs, such as two or three finger touches.
  • the input output module 306 can be implemented as a touch-sensitive screen.
  • client device 202 may additionally comprise an audio module 308, a camera module 310, a wireless communication module 312, a sensor module 314, and/or wired communication module 316, all operably connected to the processor 303 to facilitate various functions of client device 202.
  • the camera module 310, including an optical sensor (e.g., a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) image sensor), can be utilized to facilitate camera functions, such as recording photographs and video clips.
  • the wired communication module 316 can include a Universal Serial Bus (USB) port for file transferring, or an Ethernet port for connection to a local area network (LAN).
  • the client device 202 may be powered by a power source module 318, which can be implemented as a rechargeable battery or the like.
  • Wireless communication module 312 can be designed to operate over one or more wireless networks, for example, a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN, an infrared PAN), a WI-FI network (such as, for example, an 802.11a/b/g/n WI-FI network, an 802.11s mesh network), a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network, an Enhanced Data Rates for GSM Evolution (EDGE) network, a Universal Mobile Telecommunications System (UMTS) network, and/or a Long Term Evolution (LTE) network).
  • the sensor module 314 may include one or more sensor devices to provide additional input and facilitate multiple functionalities of the client device 202.
  • various components of client device 202 may be operably connected together by one or more buses (including hardware and/or software).
  • the one or more buses may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, a Universal Asynchronous Receiver/Transmitter (UART) interface, an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a Secure Digital (SD) bus, another suitable bus, or a combination of two or more of these.
  • the client device 202 is coupled to a communications network 206 via a communication link 208.
  • the communications network 206 can be implemented as the internet. In other embodiments of the present technology, the communications network 206 can be implemented differently, such as any wide-area communications network, local-area communications network, a private communications network and the like.
  • How the communication link 208 is implemented is not particularly limited and will depend on how the client device 202 is implemented.
  • the communication link 208 can be implemented as a wireless communication link (such as but not limited to, a 3G communications network link, a 4G communications network link, a Wireless Fidelity, or WiFi® for short, Bluetooth® and the like).
  • the communication link can be either wireless (such as WiFi®, Bluetooth® or the like) or wired (such as an Ethernet based connection).
  • Also coupled to the communications network 206 are a mapping server 210 and an application server 212.
  • the servers 210, 212 can each be implemented as a conventional computer server. In an example of an embodiment of the present technology, each of the servers 210, 212 can be implemented as a Dell™ PowerEdge™ Server running the Microsoft™ Windows Server™ operating system. It is contemplated that the servers 210, 212 can be implemented in any other suitable hardware and/or software and/or firmware or a combination thereof. Of course, the mapping server 210 can be implemented differently from the application server 212.
  • each of the servers 210, 212 is a single server.
  • the functionality of each of the servers 210, 212 may be distributed and may be implemented via respective multiple servers.
  • the functionality of the servers 210, 212 may be combined in a single server.
  • the servers 210, 212 comprise a communication interface (not depicted) structured and configured to communicate with various entities (such as the client device 202, for example, or each other) via the communications network 206.
  • Each of the servers 210, 212 further comprises at least one computer processor (not depicted) operationally connected with the communication interface and structured and configured to execute various processes to be described herein.
  • the mapping server 210 is coupled to the communications network 206 via a communication link 214.
  • the application server 212 is coupled to the communications network 206 via a communication link 216.
  • How the communication links 214, 216 are implemented is not particularly limited and will depend on how the servers 210, 212 are implemented. It is contemplated that the examples of implementations of the communication link 208 provided above could be applied to the communication links 214, 216.
  • the mapping server 210 is adapted to receive from the client device 202 a request for a map section, via the communications network 206 and links 208, 214, retrieve the requested map section from one or more map database (not shown) communicating with the mapping server 210, and send the requested map section back to the client device 202, via the communications network 206 and links 208, 214.
  • the client device 202 can be configured to connect to the application server 212 to access an application provided by the application server 212.
  • the application can be a mapping application.
  • the application can be a web resource (accessible by means of the user 204 using a browser application executed on the electronic device 202).
  • the client device 202 can request the map view via the application provided by the application server 212 and the request can be transmitted to the mapping server 210 by the application server 212.
  • the mapping server 210 is adapted to receive from the application server 212 a request for a map section, via the communications network 206 and links 216, 214, retrieve the requested map section from one or more map database (not shown) communicating with the mapping server 210, and send the requested map section back to the application server 212, via the communications network 206 and links 214, 216.
  • the application server 212 then transmits the requested map section to the client device 202.
  • the application server 212 and the mapping server 210 can communicate therebetween via a network different from the network 206 (such as through a private network, a direct connection and the like).
  • the application server 212 and the mapping server 210 can be implemented as a single server.
  • the single server can execute separate processes - one for the functionality described in association with the application server 212 and one for the functionality described in association with the mapping server 210.
  • the client device 202 can process the received data in the following manner in order to render it on the input output module 306.
  • the methods and routines described below can be executed by the processor 303.
  • the processor 303 can access the memory module 304 storing computer executable instructions, which instructions cause the processor 303 to execute methods and routines described above. How the processor 303 executes the methods and routines is not particularly limited and can be executed by the operating system executed by the processor 303, by a dedicated process executed by the processor 303, by a dedicated or a multi-purpose application executed by the processor 303 and the like.
  • the map section received by the client device 202 is the map image 500 depicted in Figure 5, the map image 500 being the same as the map image 100 of Figure 1, but being processed in accordance with non-limiting embodiments of the present technology.
  • the map image 100 is received in the Scalable Vector Graphics (SVG) format, which is an XML-based vector image format for two-dimensional graphics, developed by the World Wide Web Consortium (W3C), with support for interactivity and animation.
  • the client device 202 is configured to acquire the image to be rendered for displaying on the client device 202 (displaying being implemented by the input output module 306).
  • the acquiring of the image can be executed by means of receiving the map image 100 from either the mapping server 210 or the application server 212.
  • the receiving can be executed by the wireless communication module 312 or the wired communication module 316, as the case may be.
  • embodiments of the present technology can be equally applied to the images acquired from the memory module 304 (i.e. those stored on the client device 202), captured by the camera module 310 or captured by means of a screen capture of the information displayed on the input output module 306.
  • embodiments of the present technology can be applied to any other type of an image to be rendered on the client device 202.
  • the user 204 enters a request in the user interface of the browser application running on the client device 202 for a map section.
  • the user 204 enters using the input output module 306, such as a keyboard and a mouse for example, the desired map section (for example, by typing an address in a search interface of a map vertical displayed in the browser application).
  • the request is sent to either the mapping server 210 or the application server 212, as the case may be.
  • the user 204 has entered "Restaurants in Westeros" in the search field and as such wants the result of the search to display, on the input output module 306 of the client device 202, an interactive map section showing the location of restaurants in Westeros.
  • the requested map section is the map image 500, showing a map of "Westeros" and the associated information corresponding to the location of restaurants in Westeros (as well as potentially other information relating to these restaurants, such as, for example, address, telephone number, website, and reviews to name a few).
  • the map section to be processed can be automatically selected based on an IP address or GPS coordinates associated with the client device 202, thereby providing a current location of the client device 202, or with a particular location previously stored in the application.
  • the user 204 could specify a distance radius from the current or previously specified location to determine the map section to be presented.
  • the user 204 could select a particular location, via a drop down menu or manual entry, for which the interactive map section is desired.
  • the entry of the map section to be requested is generated based on the current location of the client device 202, using an IP address or GPS coordinates for example.
  • the boundaries of the map section to be requested can be determined from a predetermined radius from the current location of the client device 202 or from such a radius selected by the user 204 of the client device 202.
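  • As a purely illustrative sketch of how such boundaries could be computed from the current location and a radius (a flat-Earth approximation adequate for small radii; the function and its constants are assumptions, not part of the present technology):

        // Approximate bounding box for a map section centred on the device's current
        // location, given a radius in kilometres.
        function mapSectionBounds(lat, lon, radiusKm) {
          const dLat = radiusKm / 110.574;                                   // ~km per degree of latitude
          const dLon = radiusKm / (111.320 * Math.cos(lat * Math.PI / 180)); // narrows with latitude
          return {
            north: lat + dLat, south: lat - dLat,
            east:  lon + dLon, west:  lon - dLon,
          };
        }
        // e.g. mapSectionBounds(55.75, 37.62, 5) -> bounds of a roughly 10 km wide section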
  • the client device 202 is configured to split the image into at least two segments, each of the at least two segments being an image that is amenable to being rendered using graphical primitives.
  • the map image 500 is split into a first map segment 502, a second map segment 504, a third map segment 506 and a fourth map segment 508.
  • Each of the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508 can in itself be rendered using graphical primitives.
  • while the map image 500 is split into four segments (the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508) in the depicted embodiment, the exact number of segments is not so limited and a different number of segments can be used in various embodiments of the present technology.
  • even though the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508 are depicted with substantially similar sizes, this does not need to be so in every embodiment of the present technology.
  • the map image 500 received by the client device 202 contains map tiles each providing a map of a particular range of geographical coordinates.
  • each tile consists of a satellite picture, or an assembly of satellite pictures, of the area corresponding to its range of geographical coordinates.
  • each tile consists of an illustrated map corresponding to its range of geographical coordinates.
  • each tile is a combination of one or more satellite pictures and an illustrated map corresponding to its range of geographical coordinates.
  • the client device 202 splits the map image 500 into respective tiles making up the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508.
  • splitting into respective segments can be done by the electronic device 202 based on a pre-determined algorithm.
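  • One possible pre-determined splitting algorithm (a sketch only; the present technology does not prescribe a particular algorithm) is to divide the image area into a grid of equally sized segments:

        function splitIntoSegments(width, height, rows, cols) {
          const segments = [];
          const segW = Math.ceil(width / cols);
          const segH = Math.ceil(height / rows);
          for (let r = 0; r < rows; r++) {
            for (let c = 0; c < cols; c++) {
              segments.push({
                x: c * segW,
                y: r * segH,
                width: Math.min(segW, width - c * segW),
                height: Math.min(segH, height - r * segH),
              });
            }
          }
          return segments;
        }
        // splitIntoSegments(1024, 768, 2, 2) yields four segments, analogous to 502, 504, 506 and 508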
  • the client device 202 is then configured to organize a main processing thread and at least one child processing thread that depends from the main processing thread.
  • the client device 202 is configured to initiate a main processing thread 602, as well as a first child processing thread 604, a second child processing thread 606 and an Nth child processing thread 608.
  • The Nth child processing thread 608 is representative of one or more additional child processing threads potentially being defined by the client device 202. It is noted that the initiation of the main processing thread 602, the first child processing thread 604, the second child processing thread 606 and the Nth child processing thread 608 can be executed by an operating system of the electronic device 202.
  • the electronic device 202 then causes the respective child processing thread 604, 606, 608 to acquire an indication of the respective segment of the at least two segments (i.e. the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508) and a set of required graphical primitives. It is noted that the process is executed in such a manner so that each child processing thread 604, 606, 608 receives a single instance of the respective segment (i.e. one of the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508) for processing at a given time.
  • the main processing thread 602 selects the required primitives for each of the respective segment of the at least two segments (i.e. the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508) from a plurality of possible primitives and transmits the indication to the respective child processing threads 604, 606, 608.
  • the main processing thread 602 transmits all the possible primitives to the respective child processing threads 604, 606, 608; and the respective child processing thread 604, 606, 608 selects the required primitives for each of the respective segment of the at least two segments (i.e. the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508).
  • the selection of required primitives is executed as follows.
  • the map image 500 comprises a set of required primitives, as well as coordinates for each of the primitives.
  • the main processing thread 602 or the respective child processing thread 604, 606, 608 can determine coordinates that delimit a given segment (i.e. the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508).
  • based on the range of the coordinates for each of the segments, the main processing thread 602 or the respective child processing thread 604, 606, 608 can determine the required primitives for rendering that specific segment.
  • all possible primitives are the primitives required for rendering the entirety of the map image 500 and the required primitives are primitives required for rendering a given one of the segments of the map image 500, namely the respective one of the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508.
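  • A sketch of selecting the required primitives for a given segment by comparing primitive coordinates against the segment's coordinate range follows; the bounding-box representation of a primitive is an assumption introduced for illustration:

        function selectRequiredPrimitives(allPrimitives, segmentBounds) {
          // keep only the primitives whose bounding box overlaps the segment's coordinate range
          return allPrimitives.filter((p) =>
            p.bounds.maxX >= segmentBounds.minX && p.bounds.minX <= segmentBounds.maxX &&
            p.bounds.maxY >= segmentBounds.minY && p.bounds.minY <= segmentBounds.maxY
          );
        }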
  • the client device 202 is then configured to allocate a new space in a memory (i.e. the memory module 304) of the client device 202 for each of the respective child processing threads 604, 606, 608.
  • when the main processing thread 602 transmits the indication of the respective segment to be processed by the respective child processing thread 604, 606, 608 and either (i) a set of required graphical primitives or (ii) all the possible primitives for the respective child processing thread 604, 606, 608 to select the required primitives from, the main processing thread 602 can execute the following command: worker.postMessage(uInt8View.buffer, [uInt8View.buffer]), where uInt8View.buffer is indicative of the amount of memory required to be dedicated to processing the given one of the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508. The respective child processing thread 604, 606, 608 then uses the indication of the amount of memory contained in uInt8View.buffer to allocate the required memory space.
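  • By way of illustration, a sketch of the transfer described above, in which the buffer's memory is handed over to the child processing thread rather than copied (the worker script name and the buffer size are assumptions):

        const worker = new Worker('render-worker.js');    // assumed worker script name
        const uInt8View = new Uint8Array(256 * 256 * 4);  // e.g. one 256x256 RGBA segment
        // ... serialize the segment and its required primitives into uInt8View ...
        worker.postMessage(uInt8View.buffer, [uInt8View.buffer]);
        // uInt8View.buffer is now detached on the main processing thread; the child
        // processing thread owns that memory and can size its working space from it.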
  • the electronic device can then cause the respective child processing thread 604, 606, 608 to render the respective image of each respective segment in corresponding allocated space of memory.
  • the respective child processing thread 604, 606, 608 renders its allocated one of the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508 into the allocated required memory space.
  • the respective child processing thread 604, 606, 608 uses the allocated memory space as a "canvas" for rendering the allocated image (i.e. the allocated one of the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508).
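  • In modern browsers, one way for a child processing thread to obtain such a "canvas" is an OffscreenCanvas; this is an assumption introduced for illustration (OffscreenCanvas postdates the original description), and the primitive format below is likewise illustrative:

        // worker-side sketch: render the allocated segment's primitives into an OffscreenCanvas
        function renderSegment(width, height, primitives) {
          const canvas = new OffscreenCanvas(width, height);
          const ctx = canvas.getContext('2d');
          for (const p of primitives) {
            if (p.type === 'line') {
              ctx.beginPath();
              ctx.moveTo(p.x1, p.y1);
              ctx.lineTo(p.x2, p.y2);
              ctx.stroke();
            } else if (p.type === 'circle') {
              ctx.beginPath();
              ctx.arc(p.x, p.y, p.radius, 0, 2 * Math.PI);
              ctx.fill();
            }
          }
          return canvas;  // handed to the next step, which serializes it into an array of bytes
        }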
  • the electronic device 202 then causes the respective child processing thread 604, 606, 608 to save each rendered image in a form of an array of bytes.
  • the respective child processing thread 604, 606, 608 can save the rendered image in the PNG format using a function call: canvas.toDataURL("image/png").
  • the respective child processing thread 604, 606, 608 can use any other format, even formats that are not lossless (such as JPEG and the like).
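  • A sketch of the saving and transmitting steps on the child processing thread side; convertToBlob() is the OffscreenCanvas counterpart of the canvas.toDataURL("image/png") call mentioned above and is used here as an assumption:

        async function sendRenderedSegment(canvas, segmentIndex) {
          const blob = await canvas.convertToBlob({ type: 'image/png' }); // lossless PNG
          const bytes = new Uint8Array(await blob.arrayBuffer());         // the array of bytes
          // transfer the underlying buffer so it is moved, not copied, to the main thread
          self.postMessage({ segmentIndex, bytes }, [bytes.buffer]);
        }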
  • the electronic device 202 then causes the respective child processing thread 604, 606, 608 to transmit the array of bytes to the main processing thread 602.
  • the main processing thread 602 renders a final image on a screen (i.e. the input output module 306) of the client device 202 using the received arrays of bytes.
  • the main processing thread 602 uses the canvas.drawImage function call.
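  • A sketch of this final composition step on the main processing thread (the canvas element id, the worker script name and the per-segment offsets are assumptions introduced for illustration):

        const worker = new Worker('render-worker.js');
        const segmentOffsets = [ { x: 0, y: 0 }, { x: 256, y: 0 },
                                 { x: 0, y: 256 }, { x: 256, y: 256 } ]; // assumed on-screen positions
        const screenCtx = document.getElementById('map').getContext('2d');
        worker.onmessage = async (event) => {
          const { segmentIndex, bytes } = event.data;
          const blob = new Blob([bytes], { type: 'image/png' });
          const bitmap = await createImageBitmap(blob);  // decode the received PNG bytes
          const { x, y } = segmentOffsets[segmentIndex];
          screenCtx.drawImage(bitmap, x, y);             // paint the segment onto the screen
        };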
  • the processor 303 is configured to execute a method of processing an image.
  • Figure 4 depicts a flow chart of a method 400, the method 400 being executed in accordance with non-limiting embodiments of the present technology.
  • the method 400 can be executed on the client device 202.
  • the processor 303 has access to computer executable instructions stored on the memory module 304.
  • the processor 303 can execute an application (such as a browser application, a mapping application, etc), which is executed using the main processing thread 602 and at least one child processing thread 604, 606, 608, the at least one child processing thread 604, 606, 608 being dependent from the main processing thread 602.
  • the main processing thread 602 and the at least one child processing thread 604, 606, 608 can be components of a JavaScript architecture.
  • Step 402 - acquiring the image to be rendered for displaying on the electronic device
  • the method 400 begins at step 402, where the processor 303 acquires the image to be rendered for displaying on the client device 202. In the examples provided above, the processor 303 receives the map image 500. The embodiments of the method 400 will be illustrated using the example of the map image 500 presented above.
  • Step 404 - splitting the image into at least two segments, each of the at least two segments being an image that is amenable to being rendered using graphical primitives
  • the processor 303 then splits the map image 500 into at least two segments, each of the at least two segments being an image that is amenable to being rendered using graphical primitives (i.e. into one or more of the first map segment 502, the second map segment 504, the third map segment 506 and the fourth map segment 508).
  • Step 406 - acquiring, by a respective child processing thread, an indication of the respective segment of the at least two segments and a set of required graphical primitives, such that each child processing thread receives a single instance of the respective segment for processing at a given time
  • the processor 303 causes the respective child processing thread 604, 606, 608 to acquire an indication of the respective segment of the at least two segments and a set of required graphical primitives, such that each child processing thread 604, 606, 608 receives a single instance of the respective segment for processing at a given time.
  • the step of acquiring, by the respective child processing thread 604, 606, 608, the set of required graphical primitives comprises: for each segment of the at least two segments, calculating, by the main processing thread 602, the set of required graphical primitives required for rendering the respective segment; and transmitting the indication of the respective segment and the set of required graphical primitives to the respective child processing thread 604, 606, 608.
  • the set of required graphical primitives is a subset of all possible graphical primitives and the method 400 further comprises selecting, by the main processing thread 602, the set of required graphical primitives for each of the at least two segments.
  • the step of acquiring, by the respective child processing thread 604, 606, 608, the set of required graphical primitives comprises: receiving, from the main processing thread 602, all possible graphical primitives; and selecting, by the respective child processing thread 604, 606, 608, from all possible graphical primitives a subset of graphical primitives, the subset being the set of required graphical primitives required for processing the associated one of the at least two segments.
  • Step 408 - allocating a new space in a memory of the electronic device for each of the respective child processing threads
  • at step 408, the processor 303 allocates a new space in a memory (i.e. the memory module 304) of the client device 202 for each of the respective child processing threads 604, 606, 608.
  • Step 410 - rendering, by the respective child processing thread, the respective image of each respective segment in corresponding allocated space of memory
  • at step 410, the processor 303 causes the respective child processing thread 604, 606, 608 to render the respective image of each respective segment in the corresponding allocated space of memory.
  • Step 412 - saving each rendered image in a form of an array of bytes
  • the processor 303 causes each of the respective child processing threads 604, 606, 608 to save each rendered image in a form of an array of bytes.
  • Step 414 - transmitting, by the respective child processing thread, the array of bytes to the main processing thread
  • the processor 303 causes the respective child processing thread 604, 606, 608 to transmit the array of bytes to the main processing thread 602.
  • Step 416 - at the main processing thread rendering a final image on a screen of the electronic device using sets of array of bytes
  • the processor 303 causes the main processing thread 602 to render a final image on the screen of the client device 202 using sets of array of bytes.
  • the method 400 further comprises selecting a number of the at least one child processing thread 604, 606, 608 to process the map image 500.
  • the selecting is executed as a function of computational power of the processor 303. Additionally or alternatively, the selecting can be executed as a function of the number of segments within the map image 500 and/or the size of the map image 500.
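  • One possible, purely illustrative, selection policy is sketched below; it assumes that the number of logical processor cores reported by navigator.hardwareConcurrency is an acceptable proxy for computational power and that the number of segments caps the useful number of child processing threads:

        function selectWorkerCount(segmentCount) {
          const cores = navigator.hardwareConcurrency || 2;      // fall back when unsupported
          return Math.max(1, Math.min(segmentCount, cores - 1)); // leave a core for the main thread
        }
        const workers = Array.from({ length: selectWorkerCount(4) },
                                   () => new Worker('render-worker.js'));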
  • the method 400 can be executed in response to the user requesting a map view (using the browser application or a mapping application executed on the client device 202, as an example). As such, the method 400 can further comprise receiving a request for a map view, the map view including the portion of the map.
  • displaying data to the user via a user-graphical interface may involve transmitting a signal to the user- graphical interface, the signal containing data, which data can be manipulated and at least a portion of the data can be displayed to the user using the user-graphical interface.
  • the signals can be sent-received using optical means (such as a fibre-optic connection), electronic means (such as using wired or wireless connection), and mechanical means (such as pressure-based, temperature based or any other suitable physical parameter based).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Image processing is executed on an electronic device by an application using a main processing thread and at least one child processing thread dependent from the main processing thread. The image is acquired for rendering on the electronic device and split into at least two segments that are amenable to being rendered using graphical primitives. An indication of the respective segment of the at least two segments and a set of required graphical primitives are acquired by a respective child processing thread, such that each child processing thread receives a single instance of the respective segment for processing at a given time. A new space in a memory of the electronic device is allocated for each respective child processing thread. The respective child processing thread renders the image of each respective segment in the corresponding allocated space of memory. Each rendered image is saved as an array of bytes transmitted by the respective child processing thread to the main processing thread. At the main processing thread, a final image is rendered on a screen of the electronic device using sets of the arrays of bytes.
PCT/IB2015/052554 2014-12-26 2015-04-08 Method and apparatus for processing an image on an electronic device WO2016103057A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
RU2014152947 2014-12-26
RU2014152947A RU2608883C2 (ru) 2014-12-26 2014-12-26 Method and electronic device for processing an image

Publications (1)

Publication Number Publication Date
WO2016103057A1 true WO2016103057A1 (fr) 2016-06-30

Family

ID=56149341

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2015/052554 WO2016103057A1 (fr) 2014-12-26 2015-04-08 Method and apparatus for processing an image on an electronic device

Country Status (2)

Country Link
RU (1) RU2608883C2 (fr)
WO (1) WO2016103057A1 (fr)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6873329B2 (en) * 2002-07-05 2005-03-29 Spatial Data Technologies, Inc. System and method for caching and rendering images
US20080027642A1 (en) * 2006-06-30 2008-01-31 Tele Atlas North America, Inc. Method and System for Collecting User Update Requests Regarding Geographic Data to Support Automated Analysis, Processing and Geographic Data Updates
US7734412B2 (en) * 2006-11-02 2010-06-08 Yahoo! Inc. Method of client side map rendering with tiled vector data
JP4752897B2 (ja) * 2008-10-31 2011-08-17 ソニー株式会社 Image processing apparatus, image display method and image display program
US8587610B2 (en) * 2008-12-12 2013-11-19 Microsoft Corporation Rendering source content for display

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0367183B1 (fr) * 1988-10-31 1996-03-27 Bts Broadcast Television Systems Gmbh High-speed system for processing computer graphics images
US20130047074A1 (en) * 2011-08-16 2013-02-21 Steven Erik VESTERGAARD Script-based video rendering
WO2014099002A1 (fr) * 2012-12-21 2014-06-26 Intel Corporation Mechanism for obtaining high performance and high fidelity in a daisy-chained computing system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2564075A (en) * 2017-03-23 2019-01-09 Pridefield Ltd Multi-Threaded rendering system
GB2564075B (en) * 2017-03-23 2020-04-01 Pridefield Ltd Multi-Threaded rendering system
CN107818023A (zh) * 2017-11-06 2018-03-20 深圳市雷鸟信息科技有限公司 Thread-based message processing method, smart device and storage medium

Also Published As

Publication number Publication date
RU2608883C2 (ru) 2017-01-25
RU2014152947A (ru) 2016-07-20

Similar Documents

Publication Publication Date Title
US10841511B1 (en) Electronic device and method for image processing
RU2677595C2 Method and apparatus for displaying an application interface, and electronic device
US10181305B2 (en) Method of controlling display and electronic device for providing the same
CN107003818B Method of sharing a screen between devices and device using the same
US20170235435A1 (en) Electronic device and method of application data display therefor
KR102310780B1 Apparatus and method for providing a web application service, and user device therefor
CN106662969B Method of processing content and electronic device thereof
US9658713B2 (en) Systems, methods, and applications for dynamic input mode selection based on whether an identified operating system includes an application program interface associated with the input mode
EP2738659A2 Using clamping to modify scrolling
CN104508699B Content transmission method, and system, apparatus and computer-readable recording medium using the same
US20170139554A1 (en) Electronic apparatus and display control method
US20160004425A1 (en) Method of displaying graphic user interface and electronic device implementing same
KR20160140700A Technique for automated selective uploading of images
KR20160026142A Electronic device for providing scrap information and method for providing the same
CN107810472B Electronic device and method for controlling display of a panoramic image
US10789033B2 (en) System and method for providing widget
US20150347377A1 (en) Method for processing contents and electronic device thereof
US10216404B2 (en) Method of securing image data and electronic device adapted to the same
EP3340155A1 Electronic device and method for displaying web pages using the same
WO2015181591A1 Method and system for recommending an application to a user
CN105892849B Image processing method and electronic device supporting the same
US10643252B2 (en) Banner display method of electronic device and electronic device thereof
CN109313529B Carousel between documents and pictures
WO2016103057A1 (fr) Method and apparatus for processing an image on an electronic device
US10739981B2 (en) Tag input device of electronic device and control method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15872049

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15872049

Country of ref document: EP

Kind code of ref document: A1