WO2020164726A1 - Mobile communications device and media server - Google Patents

Mobile communications device and media server

Info

Publication number
WO2020164726A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
mobile communications
communications device
media server
view
Prior art date
Application number
PCT/EP2019/053762
Other languages
French (fr)
Inventor
Tommy Arngren
Peter ÖKVIST
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2019/053762 priority Critical patent/WO2020164726A1/en
Publication of WO2020164726A1 publication Critical patent/WO2020164726A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/10 Architectures or entities
    • H04L65/1059 End-user terminal functionalities specially adapted for real-time communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling
    • H04L65/765 Media network packet handling intermediate

Definitions

  • the invention relates to a mobile communications device, a media server, a method performed by a mobile communications device, a method performed by a media server, and corresponding computer programs, computer-readable storage media, and data carrier signals.
  • camera-equipped mobile communications devices such as smartphones, Head-Mounted Displays (HMDs), life loggers, smartwatches, and camera glasses
  • Many social networks have the ability to tag images with the identity of persons represented in these images, based on face recognition algorithms which are applied to images captured by users for the purpose of sharing these via social networks. Face recognition may either be performed on the mobile communications devices which have captured the images, i.e., before they are uploaded to a social-network platform, or after upload using the social-network infrastructure. Successfully recognized persons may be notified by the social-network platform, e.g., by email or via apps which users run on their smartphones. In view of the considerable processing power and energy which is required by face-recognition algorithms, these approaches are typically limited to faces of the social-network contacts of the user uploading the media content.
  • a mobile communications device comprises a camera, a positioning sensor, an orientation sensor, a wireless network interface, and a processing circuit.
  • the processing circuit causes the mobile communications device to be operative to capture one or more images using the camera.
  • the mobile communications device is further operative to transmit the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view of the camera during capturing the one or more images, to a media server.
  • a media server comprises a network interface, and a processing circuit.
  • the processing circuit causes the media server to be operative to receive images captured by cameras comprised in a plurality of mobile communications devices, information indicating respective times of capturing the images, and information pertaining to respective fields-of-view of the cameras during capturing the images, from the mobile communications devices.
  • the media server is further operative to associatively store the received images, the information indicating respective times of capturing the images, and the information pertaining to respective fields-of-view of the cameras during capturing the images, in a database.
  • the media server is further operative to receive a request for retrieval of images, the request comprising information pertaining to one or more positions of another mobile communications device at respective times, to select one or more images stored in the database based on determining that the other mobile communications device was positioned within the respective fields-of-view associatively stored with the one or more images, and to transmit the selected one or more images in response to the received request.
  • a method is provided.
  • the method is performed by a mobile communications device.
  • the method comprises capturing one or more images using a camera comprised in the mobile communications device.
  • the method further comprises transmitting the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view of the camera during capturing the one or more images, to a media server.
  • a computer program comprises instructions which, when the computer program is executed by a processor comprised in a mobile communications device, cause the mobile communications device to carry out an embodiment of the method according to the third aspect of the invention.
  • a computer-readable storage medium is provided. The computer-readable storage medium has an embodiment of the computer program according to the fourth aspect of the invention stored thereon.
  • a data carrier signal is provided.
  • the data carrier signal carries an embodiment of the computer program according to the fourth aspect of the invention.
  • a method is provided.
  • the method is performed by a media server.
  • the method comprises receiving images captured by cameras comprised in a plurality of mobile communications devices, information indicating respective times of capturing the images, and information pertaining to respective fields-of-view of the cameras during capturing the images, from the mobile communications devices.
  • the method further comprises associatively storing the received images, the information indicating respective times of capturing the images, and the information pertaining to respective fields-of-view of the cameras during capturing the images, in a database.
  • the method further comprises receiving a request for retrieval of images, the request comprising information pertaining to one or more positions of another mobile communications device at respective times.
  • the method further comprises selecting one or more images stored in the database based on determining that the other mobile communications device was positioned within the respective fields-of-view associatively stored with the one or more images.
  • the method further comprises transmitting the selected one or more images in response to the received request.
  • a computer program comprising instructions which, when the computer program is executed by a processor comprised in a media server, cause the media server to carry out an embodiment of the method according to the seventh aspect of the invention.
  • a computer-readable storage medium is provided. The computer-readable storage medium has an embodiment of the computer program according to the eighth aspect of the invention stored thereon.
  • a data carrier signal is provided.
  • the data carrier signal carries an embodiment of the computer program according to the eighth aspect of the invention.
  • the invention makes use of an understanding that images which are captured with mobile communications devices can be shared with users of other mobile communications devices, which users were positioned within the respective fields-of-view of the cameras capturing these images.
  • the selection of images which are shared with a requesting user of another mobile communications device is based on information pertaining to respective times of capturing the images and respective fields-of-view of the cameras during capturing the images, in combination with time-stamped position information which is provided by the requesting user. This time-stamped position information represents locations which the user has visited in the past while carrying her mobile communications device.
  • the solution presented herein does not require any prior relation between users capturing images and other users requesting the retrieval of captured images. Since positioning sensors and orientation sensors are prevalent in modern mobile communications devices such as smartphones, the described solution provides an efficient means of sharing images with persons who may be interested in receiving a copy of a captured image, without relying on resource consuming face-recognition algorithms.
  • Fig. 1 illustrates sharing of images captured by a mobile communications device, in accordance with embodiments of the invention.
  • Fig. 2 shows a sequence diagram, illustrating uploading of user-captured images to the media server, in accordance with embodiments of the invention.
  • Fig. 3 shows a sequence diagram, illustrating retrieval of user-captured images from the media server, in accordance with embodiments of the invention.
  • Fig. 4 shows a mobile communications device, in accordance with embodiments of the invention.
  • Fig. 5 shows a media server, in accordance with embodiments of the invention.
  • Fig. 6 shows a flow chart illustrating a method performed by a mobile communications device, in accordance with embodiments of the invention.
  • Fig. 7 shows a flow chart illustrating a method performed by a media server, in accordance with embodiments of the invention. All the figures are schematic, not necessarily to scale, and generally only show parts which are necessary in order to elucidate the invention, wherein other parts may be omitted or merely suggested.

Detailed description
  • Mobile communications devices 110A-110D may in particular be embodied as user devices such as smartphones, mobile phones, tablets, smartwatches, digital cameras with wireless connectivity, camera glasses, life loggers, or the like, which have the ability to capture, i.e., record and store, images for subsequent transfer to a media server 130 via a wireless communications network 120, e.g., a Radio Access Network (RAN), such as a cellular telecommunications network (e.g., GSM, UMTS, LTE, 5G, NR/NX), a Wireless Local Area Network (WLAN)/Wi-Fi network, Bluetooth, or any other kind of radio- or light-based communications technology.
  • the process of transferring one or more images from the mobile communications devices 110 to the media server 130 is herein also referred to as transmitting or uploading images.
  • the transfer of images from one or more mobile communications devices 110 to the media server 130, or in the reverse direction when retrieving images from the media server 130, may involve additional communications networks such as the Internet (not shown in Fig. 1).
  • two users visiting a certain location capture images of a scene with their respective mobile communications devices 110A and 110B, which may, e.g., be smartphones.
  • the mobile communications devices 110A and 110B have respective fields-of-view 111A and 111B (in Fig. 1 illustrated as acute angles limited by dashed lines).
  • the field-of-view of a camera may be adjustable by modifying the optics of the camera, e.g., by changing its focal length (aka optical zoom) or by cropping the area of the image which is captured by the camera (aka digital zoom). That is, the field-of-view is a characteristic of each captured image and may be determined based on the current configuration of the camera (e.g., if optical zoom is used) or settings of a camera app executed on a smartphone (e.g., if digital zoom is used).
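As a sketch of the crop model for digital zoom described above: a centered crop by a zoom factor z shrinks the used sensor dimension by z, so the effective angle-of-view satisfies tan(FoV′/2) = tan(FoV/2)/z. The function name and the assumption of a simple centered crop are illustrative, not taken from the publication:

```python
import math

def cropped_fov_deg(full_fov_deg: float, zoom_factor: float) -> float:
    """Effective angle-of-view after a centered digital-zoom crop.

    Cropping shrinks the used sensor dimension by zoom_factor, so
    tan(FoV'/2) = tan(FoV/2) / zoom_factor.
    """
    half = math.radians(full_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half) / zoom_factor))

# A 2x digital zoom roughly halves a moderate angle-of-view.
narrowed = cropped_fov_deg(60.0, 2.0)
```

Note that for wide angles the relation is not simply FoV/z, because the tangent is non-linear.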
  • the devices 110C and 110D are depicted as being positioned within the field-of-view 111A of the mobile communications device 110A, and the user of mobile communications device 110D is depicted as being additionally positioned within the field-of-view 111B of the mobile communications device 110B. Accordingly, the user of the mobile communications device 110C is likely to be represented, i.e., visible, in images which are captured by the mobile communications device 110A, and the user of the mobile communications device 110D is likely to be represented in images which are captured by the mobile communications devices 110A and 110B.
  • the solution provided herein is directed to enabling sharing of images captured by mobile communications devices 110A/B with users of other mobile communications devices 110C/D which are represented in the captured images.
  • this may be the case if users carry their mobile communications devices 110A/B during a trip, e.g., for capturing images of a sight they are visiting.
  • other people, who are potentially unknown to the user capturing the image, may accidentally be positioned within the field-of-view of the camera during capturing of an image, and are accordingly represented in the captured image.
  • Known solutions for sharing images with others are typically limited to acquaintances of the user who has captured the images, in particular social-network contacts. This may, e.g., be achieved by the user tagging captured images with names of persons which are represented in her images and which she recognizes, or by (semi-)automatically recognizing faces of people known to the user, e.g., social-network contacts, by conducting image analysis on the captured images.
  • embodiments of the invention enable sharing of images captured by mobile communications devices 110A/B with users of other mobile communications devices 110C/D which are likely to be represented in the captured images. This is achieved by transferring images captured by mobile communications devices 110A/B to a media server 130, for later retrieval by users of other mobile communications devices 110C/D which were positioned within the respective fields-of-view 111A/B of the cameras during capturing the images.
  • the selection of images which is made available to a user requesting retrieval of images is based on information pertaining to respective times of capturing the images and respective fields-of-view during capturing the images, in combination with time-stamped position information which is provided by the requesting user.
  • the images are selected based on determining that the user requesting retrieval of images was positioned within the respective fields-of-view when the images were captured. This is achieved by providing information pertaining to past positions of a mobile communications device 110C/D of the requesting user at respective times, i.e., time-stamped position information, to the media server 130. That is, the selection of images by the media server 130 is based on the understanding that the users requesting retrieval of images from the media server 130 have been carrying their respective mobile communications devices 110C/D when the selected images were captured.
  • the solution presented herein does not require any prior relation between users capturing images and other users requesting the retrieval of captured images. Since positioning sensors and orientation sensors are prevalent in modern mobile communications devices such as smartphones, the described solution provides an efficient means of sharing images with persons who may be interested in receiving a copy of a captured image, without relying on resource consuming face-recognition algorithms.
  • Figs. 2 and 3 show sequence diagrams illustrating transfer of user-captured images to the media server 130 (Fig. 2) and retrieval of user-captured images from the media server 130 (Fig. 3), respectively.
  • Embodiments of the mobile communications device 110, which are schematically illustrated in Fig. 4, comprise a camera 410, a positioning sensor 420, an orientation sensor 430, a wireless network interface 440, and a processing circuit 450.
  • the camera 410 is a digital camera, e.g., of CMOS type which is prevalent in today’s smartphones, and is configured to capture images with a field-of-view 111A/111B which is determined by the current position and orientation of the camera 410 (and, accordingly, of the mobile communications device 110).
  • the field-of-view may be expressed in terms of the angular size of the view cone, as an angle-of-view.
  • the diagonal field-of-view FoV can be calculated as FoV = 2 · tan⁻¹(SensorSize / (2f)), where SensorSize is the diagonal size of the camera sensor and f its focal length.
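The relation above, FoV = 2 · tan⁻¹(SensorSize / (2f)), can be sketched in a few lines of Python; the 43.3 mm full-frame sensor diagonal and 50 mm focal length in the example are illustrative values, not taken from the publication:

```python
import math

def diagonal_fov_deg(sensor_size_mm: float, focal_length_mm: float) -> float:
    """Diagonal angle-of-view in degrees: FoV = 2 * atan(SensorSize / (2 * f))."""
    return math.degrees(2.0 * math.atan(sensor_size_mm / (2.0 * focal_length_mm)))

# Example: a full-frame sensor (43.3 mm diagonal) with a 50 mm lens
# yields a diagonal field-of-view of roughly 47 degrees.
fov = diagonal_fov_deg(43.3, 50.0)
```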
  • the positioning sensor 420 is configured to determine a current position of the mobile communications device 110, and accordingly the camera 410. It may either be based on a Global Navigation Satellite System (GNSS) such as the Global Positioning System (GPS), China’s BeiDou Navigation Satellite System (BDS), GLONASS, or Galileo, or may receive position information via the wireless network interface 440, e.g., from a positioning server.
  • the position information may, e.g., be based on radio triangulation, radio fingerprinting, or crowd-sourced identifiers which are associated with known positions of access points of wireless communications networks (e.g., cell-IDs or WLAN SSIDs).
  • the current position of the mobile communications device 110 may, e.g., be made available via an Application Programming Interface (API) provided by an operating system of the mobile communications device 110.
  • the current position at the time of capturing an image may be stored as metadata with the image, or in a separate data record.
  • the orientation sensor 430 is configured to determine a current orientation of the mobile communications device 110, and accordingly the camera 410, relative to a reference frame, e.g., the direction of gravity. It may comprise one or more sensors of different type, such as accelerometers, gyroscopes, and magnetometers, which are common in today’s smartphones.
  • the current orientation of the mobile communications device 110 may, e.g., be made available via an API provided by the operating system of the mobile communications device 110.
  • the current orientation at the time of capturing an image may be stored as metadata with the image, or in a separate data record.
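The time, position, and orientation metadata described above could be bundled into a per-image record before upload; a minimal sketch, in which the field names and the flat latitude/longitude/bearing encoding are illustrative assumptions rather than anything specified by the publication:

```python
from dataclasses import dataclass, asdict

@dataclass
class CaptureRecord:
    """Metadata stored with, or alongside, a captured image."""
    image_id: str
    captured_at: float        # time of capture (POSIX timestamp)
    latitude: float           # camera position during capture
    longitude: float
    bearing_deg: float        # direction the camera points, clockwise from north
    angle_of_view_deg: float  # angle-of-view during capture

record = CaptureRecord("img_0001.jpg", 1550000000.0, 59.33, 18.06, 45.0, 66.0)
payload = asdict(record)  # dict form, e.g. for JSON serialization on upload
```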
  • the wireless network interface 440 is configured to access the wireless communications network 120 and thereby enable the mobile communications device 110 to communicate, i.e., exchange data in either direction (uplink or downlink), with the media server 130 and optionally any other network node which is accessible via the wireless communications network 120, e.g., a positioning server. It may, e.g., comprise one or more of a cellular modem (e.g., GSM, UMTS, LTE, 5G, NR/NX), a WLAN/Wi-Fi modem, a Bluetooth modem, a Visible Light Communication (VLC) modem, and the like.
  • the processing circuit 450 may comprise one or more processors 451 such as Central Processing Units (CPUs), microprocessors, application-specific processors, Graphics Processing Units (GPUs), and Digital Signal Processors (DSPs) including image processors, or a combination thereof.
  • the computer program 453 is configured, when executed by the processor(s) 451, to cause the mobile communications device 110 to perform in accordance with embodiments of the invention described herein.
  • the computer program 453 may be downloaded to the memory 452 by means of the wireless network interface 440, as a data carrier signal carrying the computer program 453.
  • the processor(s) 451 may further comprise one or more Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), or the like, which in cooperation with, or as an alternative to, the computer program 453 are configured to cause the mobile communications device 110 to perform in accordance with embodiments of the invention described herein.
  • Embodiments of the media server 130, which are schematically illustrated in Fig. 5, comprise a network interface 510 and a processing circuit 520.
  • the network interface 510 is configured to enable the media server to communicate, i.e., exchange data in either direction, with mobile communications devices 110 via the wireless communications network 120, and optionally with other network nodes, e.g., an external database 140 for storing images.
  • It may be any type of wired or wireless network interface, e.g., Ethernet, WLAN/Wi-Fi, or the like.
  • the processing circuit 520 may comprise one or more processors 521 such as Central Processing Units (CPUs), microprocessors, application-specific processors, Graphics Processing Units (GPUs), and Digital Signal Processors (DSPs) including image processors, or a combination thereof.
  • the computer program 523 is configured, when executed by the processor(s) 521, to cause the media server 130 to perform in accordance with embodiments of the invention described herein.
  • the computer program 523 may be downloaded to the memory 522 by means of the network interface 510, as a data carrier signal carrying the computer program 523.
  • the processor(s) 521 may further comprise one or more Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), or the like, which in cooperation with, or as an alternative to, the computer program instructions 523 are configured to cause the media server 130 to perform in accordance with embodiments of the invention described herein.
  • Capturing 211 the one or more images may be triggered by a user of the mobile communications device 110, e.g., by pressing a camera button which may either be a hardware button provided on a face of the mobile communications device 110, or a virtual button which is displayed on a touchscreen comprised in the mobile communications device 110 as part of the user interface of a camera app, as is known in the art.
  • capturing 211 the one or more images may be effected repeatedly, periodically, or regularly, or if a current position of the mobile communications device 110 has changed by more than a threshold value (which may optionally be configured by the user of the mobile communications device 110), in an always-on camera, or life-logger, type of fashion.
  • the mobile communications device 110 is further operative to transmit 215, herein also referred to as transferring or uploading, the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view of the camera 410 during capturing the one or more images, to the media server 130.
  • the one or more images may be transmitted 215 to the media server 130 for later retrieval, from the media server 130, by a user of another mobile communications device.
  • the one or more captured images may either be transmitted 215 in the same format as they were captured and stored by the camera 410, i.e., in raw data format or in a compressed file format, or as a representation of the captured images.
  • the mobile communications device 110 may be operative to transmit 215 a compressed version of the captured images, e.g., with reduced resolution or color space, to the media server 130 for sharing, thereby reducing bandwidth which is required for transmitting 215 the one or more images via the wireless communications network 120, and any other interconnected communications network, to the media server 130.
  • the information indicating respective times of capturing the one or more images, and the information pertaining to respective fields-of-view of the camera 410 during capturing the one or more images, may be transmitted as metadata with the respective images, or in separate data records.
  • the information indicating respective times of capturing the one or more images may, e.g., comprise time stamps which are obtained from a clock comprised in the mobile communications device 110.
  • the current time may, e.g., be obtained via an API provided by the operating system of the mobile communications device 110.
  • the time of capturing an image may be stored as metadata with the image, or in a separate data record.
  • the mobile communications device 110 may be operative to determine the respective fields-of-view 111A/B of the camera 410 during capturing the one or more images based on information received from the positioning sensor 420 and the orientation sensor 430.
  • the mobile communications device 110 may be operative to determine 212 respective positions of the mobile communications device 110 during capturing 211 the one or more images using the positioning sensor 420, and determine 213 respective directions in which the camera 410 is pointing during capturing 211 the one or more images using the orientation sensor 430.
  • the information may either be received directly from the positioning sensor 420 and the orientation sensor 430, or via an API of the operating system.
  • the mobile communications device 110 may be operative to determine the respective fields-of-view 111A/B of the camera 410 during capturing 211 the one or more images further based on information received from the camera 410. More specifically, mobile communications device 110 may be operative to determine 214 respective angles-of-view of the camera 410 during capturing 211 the one or more images based on information pertaining to a configuration of the camera 410.
  • the information may either be received directly from the camera 410, via an API of the operating system, as is described hereinbefore, or via an API of a camera app which is executed on the mobile communications device 110 and which is provided for controlling the camera 410 via an (optionally touch-based) user-interface of the mobile communications device 110.
  • the information may, e.g., relate to one or more of a current focal-length setting of the camera 410, a size of the sensor of the camera 410, a current angle-of-view of the camera 410, or the like.
  • the transmitted 215 information pertaining to the respective fields-of-view of the camera 410 during capturing 211 the one or more images comprises the determined respective positions and the determined respective directions in which the camera 410 is pointing.
  • the mobile communications device 110 may further be operative to transmit additional information pertaining to light conditions, weather conditions, or visibility, during capturing the images. This additional information may be transmitted either as metadata, as a separate data record, or in separate message exchanges between the mobile communications device 110 and the media server 130.
  • the media server 130 is operative to receive 215 images captured by cameras comprised in a plurality of mobile communications devices 110, information indicating respective times of capturing the images, and information pertaining to respective fields-of-view 111A/B of the cameras during capturing the images, from the mobile communications devices 110.
  • the received information pertaining to respective fields-of-view 111A/B of the cameras during capturing the images may comprise respective positions of the mobile communications devices 110 during capturing the images and respective directions in which the cameras are pointing during capturing the images.
  • the received information pertaining to respective fields-of-view 111A/B of the cameras during capturing the images may further comprise respective angles-of-view of the cameras during capturing the images.
  • the media server 130 may be operative to determine the respective angles-of-view by analyzing the received images, e.g., based on known dimensions of objects captured in the images (e.g., known buildings which are represented in the images).
  • the media server 130 is further operative to associatively store 221 the received 215 images, the information indicating respective times of capturing the images, and the information pertaining to respective fields-of-view 111A/B of the cameras during capturing the images, in a database.
  • the database may either be comprised in, or co-located with, the media server 130 (such as database 530 shown in Fig. 5), or provided separately from the media server 130 and accessible for the media server 130 via network interface 510 (such as database 140 shown in Fig. 1), e.g., as a cloud-based storage.
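One possible shape for such associative storage, sketched with SQLite; the table layout and column names are illustrative assumptions, as the publication does not prescribe any particular database schema:

```python
import sqlite3

# In-memory database standing in for database 140/530.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE captured_images (
        image_id    TEXT PRIMARY KEY,
        image       BLOB,  -- the image, or a compressed representation of it
        captured_at REAL,  -- time of capture (POSIX timestamp)
        latitude    REAL,  -- camera position during capture
        longitude   REAL,
        bearing_deg REAL,  -- pointing direction of the camera
        aov_deg     REAL   -- angle-of-view during capture
    )
""")
conn.execute(
    "INSERT INTO captured_images VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("img_0001.jpg", b"", 1550000000.0, 59.33, 18.06, 45.0, 66.0),
)
row = conn.execute(
    "SELECT bearing_deg, aov_deg FROM captured_images WHERE image_id = ?",
    ("img_0001.jpg",),
).fetchone()
```

Storing the field-of-view parameters in queryable columns, rather than only inside image metadata, is what lets the server later select images by geometry and time.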
  • the mobile communications device 110C/D may further be operative to associatively store 231 information pertaining to one or more positions of the mobile communications device 110C/D at respective times, e.g., in the form of a sequence of position-timestamp pairs, and to transmit 232 a request for retrieval of images to the media server 130.
  • the transmitted 232 request comprises the information pertaining to one or more positions of the mobile communications device 110C/D at respective times.
  • the mobile communications device 110C/D is further operative to receive 313 one or more images from the media server 130, which images were captured using respective fields-of-view 111A/B encompassing the position of the mobile communications device 110C/D during capturing the respective images.
  • the media server 130 is further operative to receive 232 the request for retrieval of images, the request comprising information pertaining to one or more positions of the mobile communications device 110C/D at respective times (e.g., in the form of a sequence of position-timestamp pairs), to select 311 one or more images stored in the database, and to transmit 313 the selected one or more images in response to the received 232 request.
  • the media server 130 is operative to select 311 the one or more images based on determining that the mobile communications device 110C/D, according to the position information transmitted 232 with the request for retrieval of images, was within the field-of-view 111A/B of the camera when an image was captured 211; in that case, the image is selected 311 for transmission 313 to the requesting mobile communications device 110C/D.
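The selection test above can be sketched as a bearing comparison in the horizontal plane: the requesting device lies inside the view cone if the bearing from the camera to it deviates from the camera's pointing direction by at most half the angle-of-view. This simplified 2D model and the function names are assumptions for illustration; the publication does not prescribe a particular computation:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def in_field_of_view(cam_lat, cam_lon, cam_bearing_deg, aov_deg, subj_lat, subj_lon):
    """True if the subject lies within the camera's horizontal view cone."""
    to_subject = bearing_deg(cam_lat, cam_lon, subj_lat, subj_lon)
    # Smallest angular difference between the two bearings, in [0, 180].
    diff = abs((to_subject - cam_bearing_deg + 180.0) % 360.0 - 180.0)
    return diff <= aov_deg / 2.0
```

A full implementation would also compare the request's timestamps against the stored capture times before applying this geometric test.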
  • the media server 130 may further be operative to select 311 the one or more images stored in the database 140/530 further based on distances between respective positions of the other mobile communications device 110 and respective positions of the mobile communications device 110 during capturing 211 the images. For instance, this may be achieved by using a threshold distance, which may optionally be specified by the user requesting the retrieval of images, or by prioritizing or sorting the selected images based on distance. For instance, preference may be given to images captured at a shorter distance from the requesting user. In this way, the images which are selected 311 and transmitted 313 in response to a request 232 for retrieval of images are more likely to represent the user of the other mobile communications device 110 in a desirable manner, as the user was closer, or sufficiently close, to the camera during capturing of the image.
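The distance-based filtering and nearest-first sorting described above can be sketched with the haversine great-circle distance; the 50 m default threshold and the candidate-record layout are illustrative assumptions:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two positions."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def rank_images(candidates, subj_lat, subj_lon, max_distance_m=50.0):
    """Keep images captured within max_distance_m of the requesting user,
    sorted nearest-first."""
    scored = [
        (haversine_m(c["lat"], c["lon"], subj_lat, subj_lon), c)
        for c in candidates
    ]
    return [c for d, c in sorted(scored, key=lambda t: t[0]) if d <= max_distance_m]

# Two candidate images: one ~29 m away, one ~115 m away.
nearest = rank_images(
    [{"id": "A", "lat": 59.0, "lon": 18.0005},
     {"id": "B", "lat": 59.0, "lon": 18.002}],
    59.0, 18.0,
)
```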
  • the media server 130 may be operative to select images based on light conditions, weather conditions, or visibility, during capturing the images, as images which were “shot into the sun” or captured during low-visibility conditions typically are less preferred. This may be achieved by selecting images based on additional information received from the mobile communications devices 110, or by retrieving historical information about light conditions, weather conditions, or visibility, from a weather database.
  • the mobile communications device 110C/D may further be operative to transmit 232 an image representation of a user of the mobile communications device 110C/D with the request for retrieval of images to the media server 130. This may, e.g., be an image of the user’s face or body, or parts thereof, or a set of features which is derived by an image-recognition algorithm, in particular a face-recognition algorithm.
  • the one or more images which are received 313 from the media server 130 represent the user.
  • the media server 130 may be operative to recognize the user by analyzing the selected images. Face recognition is thereby only performed on images which have been selected based on determining that the requesting mobile communications device 110C/D was positioned within the respective fields-of-view 111A/B associatively stored with the images in the database.
  • the media server 130 is operative to select 311 the one or more images stored in the database 140/530 further based on determining that the user was successfully recognized.
  • the media server 130 may be operative to select 311 the one or more images stored in the database 140/530 further based on respective facial expressions of the user in the images in which the user was successfully recognized. This may be achieved by utilizing known face-recognition algorithms which are able to detect facial expressions. For instance, selection of images may be limited to images in which the user is smiling.
  • the media server 130 may be operative to select 311 the one or more images stored in the database 140/530 further based on respective gazes of the user in the images in which the user was successfully recognized. This may be achieved by utilizing known face-recognition algorithms which are able to detect a gaze direction. For instance, selection of images may be limited to images in which the user is gazing at the camera.
  • the mobile communications device 110C/D may be operative, if a plurality of images is received 313 from the media server 130, to combine 341 the received plurality of images into a slide show or a video.
  • the media server 130 may be operative to select 311 a plurality of images and to combine 312 the selected plurality of images into a slide show or a video.
  • images representing the user can be combined into a slide show or video summarizing the user’s trip.
  • the mobile communications device 110C/D may further be operative, if a plurality of images is received 313 from the media server 130 which were captured at substantially the same time by different mobile communications devices 110A and 110B having respective positions and orientations, to combine 342 the received plurality of images into a 3D image.
  • the mobile communications device 110C/D may be operative to select 341 images, among the received plurality of images, which were captured at substantially the same time by different mobile communications devices 110A and 110B having respective positions and orientations, and to combine 342 the selected images into a 3D image.
  • the media server 130 may be operative to select 311 a plurality of images which were captured at substantially the same time by different mobile communications devices having respective positions and orientations, and to combine 312 the plurality of images into a 3D image.
  • the resulting 3D image is transmitted 313 as a representation of the selected one or more images in response to the received request.
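Grouping images that were "captured at substantially the same time by different mobile communications devices" can be sketched as a windowed pairing over time-sorted capture records. This Python illustration assumes each image is summarized as a `(device_id, timestamp)` tuple and uses an assumed 0.5-second tolerance; the actual 3D reconstruction from such pairs is outside the scope of the sketch:

```python
def pair_for_3d(images, max_dt=0.5):
    """Group images captured within max_dt seconds of each other by
    different devices, as candidate stereo pairs for a 3D image."""
    pairs = []
    images = sorted(images, key=lambda im: im[1])  # sort by capture time
    for i, (dev_a, t_a) in enumerate(images):
        for dev_b, t_b in images[i + 1:]:
            if t_b - t_a > max_dt:
                break  # later images are even further apart in time
            if dev_b != dev_a:
                pairs.append(((dev_a, t_a), (dev_b, t_b)))
    return pairs

pairs = pair_for_3d([("110A", 10.0), ("110B", 10.2), ("110A", 20.0)])
# one candidate stereo pair: the two images captured 0.2 s apart
```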
  • the position information received 232 with the request for retrieval of images may pertain to one or more positions of the mobile communications device 110C/D at respective times which do not exactly coincide with capturing times of images stored in the database.
  • the received 232 position information may be interpolated to estimate an approximate position of the mobile communications device 110C/D at respective capturing times of images stored in the database.
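Such interpolation can be as simple as linear interpolation between the two time-stamped positions surrounding a capturing time. A minimal Python sketch, assuming each sample is a `(timestamp, lat, lon)` tuple sorted by timestamp:

```python
def interpolate_position(samples, t):
    """Linearly interpolate the device position at time t from
    time-stamped (timestamp, lat, lon) samples sorted by timestamp.
    Returns None if t falls outside the recorded interval."""
    if not samples or t < samples[0][0] or t > samples[-1][0]:
        return None
    for (t0, lat0, lon0), (t1, lat1, lon1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            if t1 == t0:
                return (lat0, lon0)
            f = (t - t0) / (t1 - t0)  # fraction of the way from t0 to t1
            return (lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0))
    return (samples[-1][1], samples[-1][2])

# Position halfway between two samples taken 60 s apart:
pos = interpolate_position(
    [(0.0, 59.00, 18.00), (60.0, 59.01, 18.02)], t=30.0)
```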
  • images may be selected which have been captured shortly before, or shortly after, a point in time for which position information has been received 232 from the mobile communications device 110C/D. The selection of images based on respective capturing times which are close in time to time-stamped position information received 232 from the mobile communications device 110C/D may also be based on a speed of the mobile communications device 110C/D at the relevant time. For instance, if the mobile communications device 110C/D was substantially stationary during a certain duration of time, selecting 311 images with respective capturing times during that duration does not require exact matching of position timestamps with respective capturing times.
  • alternative embodiments of the mobile communications device 110 may be envisaged.
  • such alternative embodiments may not necessarily be operative to capture 211 one or more images using the camera 410, and to transmit 215 the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view of the camera during capturing the one or more images, to the media server 130.
  • an alternative embodiment of the mobile communications device 110C/D may be operative to associatively store 231 information pertaining to one or more positions of the mobile communications device at respective times, transmit 232 a request for retrieval of images to the media server, the request comprising the information pertaining to one or more positions of the mobile communications device at respective times, and to receive 313 one or more images from the media server, without being operative to perform the steps illustrated in Fig. 2, i.e., without transferring captured images to the media server 130 for sharing with users of other mobile communications devices.
  • Such an alternative embodiment of the mobile communications device 110 may optionally be operative to perform additional steps, in accordance with what is described herein.
  • a request for retrieval of images may be received by the media server 130 from a computing device other than the mobile communications device 110.
  • users may utilize a personal computer, a laptop, a tablet, or the like to request retrieval of images from the media server 130 based on time-stamped position information which has been collected by their respective mobile communications devices, wearables, smartwatches, or the like.
  • the mobile communications device 110 is operative to exchange information with the media server 130, in particular to transmit 215 the one or more captured images, the information indicating respective times of capturing the one or more images, and the information pertaining to respective fields-of-view of the camera during capturing the one or more images, and to receive 313 one or more images from the media server 130, using any suitable network protocol, combination of network protocols, or protocol stack.
  • the mobile communications device 110 may be operative to utilize the HyperText Transfer Protocol (HTTP), the Transmission Control Protocol (TCP), the Internet Protocol (IP), the User Datagram Protocol (UDP), or the like.
  • the media server 130 is operative to exchange information with the mobile communications devices 110, and optionally with an external database 140, using one or more corresponding network protocols.
  • the method 600 comprises capturing 601 one or more images using a camera 410 comprised in the mobile communications device 110, and transmitting 605 the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view 111 of the camera 410 during capturing the one or more images, to a media server 130. More specifically, the one or more images are transmitted 605 to the media server 130 for later retrieval, from the media server 130, by a user of another mobile communications device 110 which was positioned within the respective fields-of-view 111 of the camera 410 during capturing the one or more images.
  • the method 600 may further comprise determining the respective fields-of-view 111 of the camera 410 during capturing the one or more images based on information received from a positioning sensor 420 and an orientation sensor 430 comprised in the mobile communications device 110.
  • the respective fields-of-view 111 of the camera 410 during capturing the one or more images are determined further based on information received from the camera 410.
  • the method 600 may further comprise determining 602 respective positions of the mobile communications device 110 during capturing the one or more images using a positioning sensor 420 comprised in the mobile communications device 110, and determining 603 respective directions in which the camera 410 is pointing during capturing the one or more images using an orientation sensor 430 comprised in the mobile communications device 110.
  • the information pertaining to the respective fields-of-view 111 of the camera 410 during capturing the one or more images comprises the determined respective positions and the determined respective directions.
  • the method 600 further comprises determining 604 respective angles-of-view of the camera 410 during capturing the one or more images based on information pertaining to a configuration of the camera 410, and the information pertaining to the respective fields-of-view 111 of the camera 410 during capturing the one or more images further comprises the determined respective angles-of-view.
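Determining an angle-of-view from the camera configuration typically follows the standard thin-lens relation AOV = 2·atan(d / 2f), where d is the sensor dimension and f the focal length. A short Python sketch (sensor and focal-length values in the example are illustrative, not taken from the application):

```python
import math

def angle_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle-of-view in degrees, from the thin-lens
    relation AOV = 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(
        2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# A full-frame 36 mm sensor behind a 50 mm lens:
aov = angle_of_view_deg(36.0, 50.0)  # ≈ 39.6 degrees
```

Note that zooming (changing the effective focal length, or cropping the sensor area for digital zoom) changes the angle-of-view, which is why the application records it per captured image.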
  • the method 600 may further comprise associatively storing 606 information pertaining to one or more positions of the mobile communications device 110 at respective times, transmitting 607 a request for retrieval of images to the media server 130, and receiving 608 one or more images from the media server 130.
  • the request comprises the information pertaining to one or more positions of the mobile communications device 110 at respective times. More specifically, the one or more images which are received 608 from the media server 130 were captured using respective fields-of-view 111 encompassing the position of the mobile communications device 110 during capturing the respective images.
  • the request for retrieval of images further comprises an image representation of a user of the mobile communications device 110, and the one or more images which are received 608 from the media server 130 represent the user.
  • a plurality of images is received 608 from the media server 130, and the method 600 further comprises combining 609 the received plurality of images into a slide show or a video.
  • a plurality of images is received 608 from the media server 130 which were captured at substantially the same time by different mobile communications devices 110 having respective positions and orientations, and the method 600 further comprises combining 611 the received plurality of images into a 3D image.
  • a plurality of images is received 608 from the media server 130, and the method 600 further comprises selecting 610 images, among the plurality of images received from the media server 130, which were captured at substantially the same time by different mobile communications devices 110 having respective positions and orientations, and combining 611 the selected images into a 3D image.
  • An embodiment of the method 600 may be implemented as a computer program 453 comprising instructions which, when the computer program is executed by a processor 451 comprised in a mobile communications device 110, cause the mobile communications device 110 to carry out an embodiment of the method 600.
  • the computer program 453 may be stored on a computer-readable storage medium 452, such as a memory stick, a Random-Access Memory (RAM), a Read-Only Memory (ROM), a Flash memory, a CDROM, a DVD, or the like.
  • the computer program 453 may be carried by a data carrier signal, e.g., when the computer program is downloaded to a mobile communications device 110 via a wireless network interface 440 comprised in the mobile communications device 110.
  • Method 700 comprises receiving 701 images captured by cameras comprised in a plurality of mobile communications devices 110, information indicating respective times of capturing the images, and information pertaining to respective fields-of-view 111 of the cameras during capturing the images, from the mobile communications devices 110.
  • the method 700 further comprises associatively storing 702 the received images, the information indicating respective times of capturing the images, and the information pertaining to respective fields-of-view 111 of the cameras during capturing the images, in a database 140/530.
  • the method 700 further comprises receiving 703 a request for retrieval of images, the request comprising information pertaining to one or more positions of another mobile communications device 110 at respective times.
  • the method 700 further comprises selecting 704 one or more images stored in the database 140/530 based on determining that the other mobile communications device 110 was positioned within the respective fields-of-view 111 associatively stored with the one or more images.
  • the method 700 further comprises transmitting 711 the selected one or more images in response to the received request.
  • the received information pertaining to respective fields-of-view 111 of the cameras during capturing the images may comprise respective positions of the mobile communications devices 110 during capturing the images and respective directions in which the cameras are pointing during capturing the images.
  • the received information pertaining to respective fields-of- view 111 of the cameras during capturing the images may further comprise respective angles-of-view of the cameras during capturing the images.
  • the one or more images stored in the database 140/530 may be selected 704 further based on a distance between respective positions of the other mobile communications device 110 and respective positions of the mobile communications device 110 during capturing the images.
  • the request for retrieval of images may further comprise an image representation of a user requesting the retrieval of images.
  • the method 700 may further comprise recognizing the user by analyzing one or more images selected based on determining that the other mobile communications device 110 was positioned within the respective fields-of-view 111 associatively stored with the one or more images, and the one or more images stored in the database 140/530 may be selected 704 further based on determining that the user was successfully recognized.
  • the one or more images stored in the database 140/530 may be selected 704 further based on respective facial expressions of the user in the images in which the user was successfully recognized.
  • the one or more images stored in the database 140/530 may be selected 704 further based on respective gazes of the user in the images in which the user was successfully recognized.
  • a plurality of images is selected 704, and the method 700 further comprises combining 708 the selected plurality of images into a slide show or a video.
  • a plurality of images is selected 704 which were captured at substantially the same time by different mobile communications devices 110 having respective positions and orientations, and the method 700 further comprises combining 710 the plurality of images into a 3D image, and transmitting 711 the 3D image as a representation of the selected one or more images in response to the received request.
  • An embodiment of the method 700 may be implemented as a computer program 523 comprising instructions which, when the computer program is executed by a processor 521 comprised in a media server 130, cause the media server 130 to carry out an embodiment of the method 700.
  • the computer program 523 may be stored on a computer- readable storage medium 522, such as a memory stick, a Random-Access Memory (RAM), a Read-Only Memory (ROM), a Flash memory, a CDROM, a DVD, or the like.
  • the computer program 523 may be carried by a data carrier signal, e.g., when the computer program is downloaded to a media server 130 via a network interface 510 comprised in the media server 130.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A mobile communications device (110) (MCD) and a media server (130) are provided. The MCD is operative to capture images and transmit the captured images, information indicating respective capturing times, and information pertaining to respective fields-of-view (111) of the captured images, to the media server. The media server is operative to receive and associatively store the images, the information indicating respective capturing times, and the information pertaining to respective fields-of-view. The media server is further operative to receive a request for retrieval of images, the request comprising information pertaining to positions of another MCD at respective times, select stored images based on determining that the other MCD was positioned within the respective fields-of-view of the images, and transmit the selected images in response to the received request. The MCD may be operative to associatively store information pertaining to positions of the MCD at respective times, transmit a request for retrieval of images to the media server, and receive images from the media server.

Description

MOBILE COMMUNICATIONS DEVICE AND MEDIA SERVER
Technical field
The invention relates to a mobile communications device, a media server, a method performed by a mobile communications device, a method performed by a media server, corresponding computer programs, corresponding computer-readable storage media, and corresponding data carrier signals.
Background
In recent years, camera-equipped mobile communications devices such as smartphones, Head-Mounted Displays (HMDs), life loggers, smartwatches, and camera glasses, have become ubiquitous. This is accompanied by an increasing popularity of Internet services for sharing images or videos. These services are oftentimes provided by social networks like Facebook, Twitter, YouTube, and the like, which typically use cloud-based platforms. With the more widespread use of first-person camera devices such as camera glasses (e.g., Google Glass), it can be expected that continuous capturing and sharing of image/video content, by upload to cloud-based services via wireless communications networks, will become more prominent and commonly accepted in the always-connected future society.
Many social networks have the ability to tag images with the identity of persons represented in these images, based on face recognition algorithms which are applied to images captured by users for the purpose of sharing these via social networks. Face recognition may either be performed on the mobile communications devices which have captured the images, i.e., before they are uploaded to a social-network platform, or after upload using the social-network infrastructure. Successfully recognized persons may be notified by the social-network platform, e.g., by email or via apps which users run on their smartphones. In view of the considerable processing power and energy which is required by face-recognition algorithms, these approaches are typically limited to faces of the social-network contacts of the user uploading the media content.
Summary
It is an object of the invention to provide an improved alternative to the above techniques and prior art.
More specifically, it is an object of the invention to provide an improved solution for sharing images, either still images or videos, which users of mobile communications devices, such as mobile phones, smartphones, tablets, smartwatches, digital cameras with wireless connectivity, camera glasses, life loggers, or the like, have captured.
These and other objects of the invention are achieved by means of different aspects of the invention, as defined by the independent claims. Embodiments of the invention are characterized by the dependent claims.
According to a first aspect of the invention, a mobile communications device is provided. The mobile communications device comprises a camera, a positioning sensor, an orientation sensor, a wireless network interface, and a processing circuit. The processing circuit causes the mobile communications device to be operative to capture one or more images using the camera. The mobile communications device is further operative to transmit the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view of the camera during capturing the one or more images, to a media server.
According to a second aspect of the invention, a media server is provided. The media server comprises a network interface, and a processing circuit. The processing circuit causes the media server to be operative to receive images captured by cameras comprised in a plurality of mobile communications devices, information indicating respective times of capturing the images, and information pertaining to respective fields-of-view of the cameras during capturing the images, from the mobile communications devices. The media server is further operative to associatively store the received images, the information indicating respective times of capturing the images, and the information pertaining to respective fields-of-view of the cameras during capturing the images, in a database. The media server is further operative to receive a request for retrieval of images, the request comprising information pertaining to one or more positions of another mobile communications device at respective times, to select one or more images stored in the database based on determining that the other mobile communications device was positioned within the respective fields-of-view associatively stored with the one or more images, and to transmit the selected one or more images in response to the received request.
According to a third aspect of the invention, a method is provided. The method is performed by a mobile communications device. The method comprises capturing one or more images using a camera comprised in the mobile communications device. The method further comprises transmitting the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view of the camera during capturing the one or more images, to a media server.
According to a fourth aspect of the invention, a computer program is provided. The computer program comprises instructions which, when the computer program is executed by a processor comprised in a mobile communications device, cause the mobile communications device to carry out an embodiment of the method according to the third aspect of the invention. According to a fifth aspect of the invention, a computer-readable storage medium is provided. The computer-readable storage medium has an embodiment of the computer program according to the fourth aspect of the invention stored thereon.
According to a sixth aspect of the invention, a data carrier signal is provided. The data carrier signal carries an embodiment of the computer program according to the fourth aspect of the invention.
According to a seventh aspect of the invention, a method is provided. The method is performed by a media server. The method comprises receiving images captured by cameras comprised in a plurality of mobile communications devices, information indicating respective times of capturing the images, and information pertaining to respective fields-of-view of the cameras during capturing the images, from the mobile communications devices. The method further comprises associatively storing the received images, the information indicating respective times of capturing the images, and the information pertaining to respective fields-of-view of the cameras during capturing the images, in a database. The method further comprises receiving a request for retrieval of images, the request comprising information pertaining to one or more positions of another mobile communications device at respective times. The method further comprises selecting one or more images stored in the database based on determining that the other mobile communications device was positioned within the respective fields-of-view associatively stored with the one or more images. The method further comprises transmitting the selected one or more images in response to the received request.
According to an eighth aspect of the invention, a computer program is provided. The computer program comprises instructions which, when the computer program is executed by a processor comprised in a media server, cause the media server to carry out an embodiment of the method according to the seventh aspect of the invention. According to a ninth aspect of the invention, a computer-readable storage medium is provided. The computer-readable storage medium has an embodiment of the computer program according to the eighth aspect of the invention stored thereon.
According to a tenth aspect of the invention, a data carrier signal is provided. The data carrier signal carries an embodiment of the computer program according to the eighth aspect of the invention.
The invention makes use of an understanding that images which are captured with mobile communications devices can be shared with users of other mobile communications devices, which users were positioned within the respective fields-of-view of the cameras capturing these images. The selection of images which are shared with a requesting user of another mobile communications device is based on information pertaining to respective times of capturing the images and respective fields-of-view of the cameras during capturing the images, in combination with time-stamped position information which is provided by the requesting user. This time- stamped position information represents locations which the user has visited in the past while carrying her mobile communications device.
Advantageously, the solution presented herein does not require any prior relation between users capturing images and other users requesting the retrieval of captured images. Since positioning sensors and orientation sensors are prevalent in modern mobile communications devices such as smartphones, the described solution provides an efficient means of sharing images with persons who may be interested in receiving a copy of a captured image, without relying on resource-consuming face-recognition algorithms.
Even though advantages of the invention have in some cases been described with reference to embodiments of one or more specific aspects of the invention, corresponding reasoning applies to embodiments of other aspects of the invention. Further objectives of, features of, and advantages with, the invention will become apparent when studying the following detailed disclosure, the drawings and the appended claims. Those skilled in the art realize that different features of the invention can be combined to create embodiments other than those described in the following.
Brief description of the drawings
The above, as well as additional objects, features and advantages of the invention, will be better understood through the following illustrative and non-limiting detailed description of embodiments of the invention, with reference to the appended drawings, in which:
Fig. 1 illustrates sharing of images captured by a mobile communications device with other mobile communications devices, via a media server, in accordance with embodiments of the invention.
Fig. 2 shows a sequence diagram, illustrating uploading of user-captured images to the media server, in accordance with embodiments of the invention.
Fig. 3 shows a sequence diagram, illustrating retrieval of user-captured images from the media server, in accordance with embodiments of the invention.
Fig. 4 shows a mobile communications device, in accordance with embodiments of the invention.
Fig. 5 shows a media server, in accordance with embodiments of the invention.
Fig. 6 shows a flow chart illustrating a method performed by a mobile communications device, in accordance with embodiments of the invention.
Fig. 7 shows a flow chart illustrating a method performed by a media server, in accordance with embodiments of the invention.
All the figures are schematic, not necessarily to scale, and generally only show parts which are necessary in order to elucidate the invention, wherein other parts may be omitted or merely suggested.
Detailed description
The invention will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In Fig. 1, sharing of images captured by Mobile Communications Devices (MCDs) 110A and 110B with other mobile communications devices 110C and 110D, in accordance with embodiments of the invention, is illustrated.
In the present context, an image, or images, is understood to be data representing digital content in the form of one or more images, in particular one or more still images or a video comprising a series of images. Mobile communications devices 110A-110D (collectively referenced as 110) may in particular be embodied as user devices such as smartphones, mobile phones, tablets, smartwatches, digital cameras with wireless connectivity, camera glasses, life loggers, or the like, which have the ability to capture, i.e., record and store, images for subsequent transfer to a media server 130 via a wireless communications network 120, e.g., a Radio Access Network (RAN), such as a cellular telecommunications network (e.g., GSM, UMTS, LTE, 5G, NR/NX), a Wireless Local Area Network (WLAN)/Wi-Fi network, Bluetooth, or any other kind of radio- or light-based communications technology. The process of transferring one or more images from the mobile communications devices 110 to the media server 130, which may be implemented as a network node or as a virtual instance in a cloud environment of a media sharing or social-network platform, is herein also referred to as transmitting or uploading images.
In addition to the wireless communications network 120, the transfer of images from one or more mobile communications devices 110 to the media server 130, or in the reverse direction when retrieving images from the media server 130, may involve additional communications networks such as the Internet (not shown in Fig. 1).
In the scenario depicted in Fig. 1, two users visiting a certain location capture images of a scene with their respective mobile communications devices 110A and 110B, which may, e.g., be smartphones. The mobile communications devices 110A and 110B have respective fields-of-view 111A and 111B (in Fig. 1 illustrated as acute angles limited by dashed lines 111A/B, and collectively referenced as 111), which are determined by properties of the cameras comprised in the mobile communications devices 110A and 110B. The field-of-view of a camera may be adjustable by modifying the optics of the camera, e.g., by changing its focal length (aka optical zoom) or by cropping the area of the image which is captured by the camera (aka digital zoom). That is, the field-of-view is a characteristic of each captured image and may be determined based on the current configuration of the camera (e.g., if optical zoom is used) or settings of a camera app executed on a smartphone (e.g., if digital zoom is used).
In Fig. 1, two other users carrying mobile communications devices 110C and 110D are depicted as being positioned within the field-of-view 111A of the mobile communications device 110A, and the user of mobile communications device 110D is depicted as being additionally positioned within the field-of-view 111B of the mobile communications device 110B. Accordingly, the user of the mobile communications device 110C is likely to be represented, i.e., visible, in images which are captured by the mobile communications device 110A, and the user of the mobile communications device 110D is likely to be represented in images which are captured by the mobile communications devices 110A and 110B.
The solution provided herein is directed to enabling sharing of images captured by mobile communications devices 110A/B with users of other mobile communications devices 110C/D which are represented in the captured images. In particular, this may be the case if users carry their mobile communications devices 110A/B during a trip, e.g., for capturing images of a sight they are visiting. As the case may be, other people, who are potentially unknown to the user capturing the image, may accidentally be positioned within the field-of-view of the camera while an image is captured, and are accordingly represented in the captured image.
Known solutions for sharing images with others are typically limited to acquaintances of the user who has captured the images, in particular social-network contacts. This may, e.g., be achieved by the user tagging captured images with names of persons who are represented in her images and whom she recognizes, or by (semi-)automatically recognizing faces of people known to the user, e.g., social-network contacts, by conducting image analysis on the captured images.
As is described herein, embodiments of the invention enable sharing of images captured by mobile communications devices 110A/B with users of other mobile communications devices 110C/D which are likely to be represented in the captured images. This is achieved by transferring images captured by mobile communications devices 110A/B to a media server 130, for later retrieval by users of other mobile communications devices 110C/D which were positioned within the respective fields-of-view 111A/B of the cameras during capturing the images. The selection of images which is made available to a user requesting retrieval of images is based on information pertaining to respective times of capturing the images and respective fields-of-view during capturing the images, in combination with time-stamped position information which is provided by the requesting user. More specifically, the images are selected based on determining that the user requesting retrieval of images was positioned within the respective fields-of-view when the images were captured. This is achieved by providing information pertaining to past positions of a mobile communications device 110C/D of the requesting user at respective times, i.e., time-stamped position information, to the media server 130. That is, the selection of images by the media server 130 is based on the understanding that the users requesting retrieval of images from the media server 130 have been carrying their respective mobile communications devices 110C/D when the selected images were captured.
Advantageously, the solution presented herein does not require any prior relation between users capturing images and other users requesting the retrieval of captured images. Since positioning sensors and orientation sensors are prevalent in modern mobile communications devices such as smartphones, the described solution provides an efficient means of sharing images with persons who may be interested in receiving a copy of a captured image, without relying on resource consuming face-recognition algorithms.
In the following, embodiments of the mobile communications device 110 and the media server 130 are described with further reference to Figs. 2 and 3, which show sequence diagrams illustrating transfer of user- captured images to the media server 130 (Fig. 2) and retrieval of user- captured images from the media server 130 (Fig. 3), respectively.
Embodiments of the mobile communications device 110, which are schematically illustrated in Fig. 4, comprise a camera 410, a positioning sensor 420, an orientation sensor 430, a wireless network interface 440, and a processing circuit 450.
The camera 410 is a digital camera, e.g., of CMOS type which is prevalent in today’s smartphones, and is configured to capture images with a field-of-view 111 A/111 B which is determined by the current position and orientation of the camera 410 (and, accordingly, of the mobile
communications device 110 to which the camera 410 is fixated) in space. The field-of-view may be expressed in terms of the angular size of the view cone, as an angle-of-view. For a conventional camera lens, the diagonal field of view FoV can be calculated as
FoV = 2 · tan⁻¹( SensorSize / (2f) ),
where SensorSize is the size of the camera sensor and f its focal length.
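For illustration only, the above formula may be evaluated as follows; the Python sketch and the example sensor/lens values are not part of the disclosure:

```python
import math

def diagonal_angle_of_view(sensor_size_mm: float, focal_length_mm: float) -> float:
    """Diagonal angle-of-view, in degrees, for a conventional (rectilinear) lens:
    FoV = 2 * arctan(SensorSize / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_size_mm / (2 * focal_length_mm)))

# Example (illustrative): a full-frame sensor with a 43.3 mm diagonal and a
# 50 mm lens yields the classic "normal" angle-of-view of roughly 47 degrees.
normal_fov = diagonal_angle_of_view(43.3, 50.0)
```

As expected, a longer focal length (optical zoom) narrows the angle-of-view for a given sensor size.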
The positioning sensor 420 is configured to determine a current position of the mobile communications device 110, and accordingly the camera 410. It may either be based on the Global Positioning System (GPS), the Global Navigation Satellite System (GNSS), China's BeiDou Navigation Satellite System (BDS), GLONASS, or Galileo, or may receive position information via the wireless network interface 440, e.g., from a positioning server. The position information may, e.g., be based on radio triangulation, radio fingerprinting, or crowd-sourced identifiers which are associated with known positions of access points of wireless communications networks (e.g., cell-IDs or WLAN SSIDs). The current position of the mobile communications device 110 may, e.g., be made available via an Application Programming Interface (API) provided by an operating system of the mobile
communications device 110. The current position at the time of capturing an image may be stored as metadata with the image, or in a separate data record.
The orientation sensor 430 is configured to determine a current orientation of the mobile communications device 110, and accordingly the camera 410, relative to a reference frame, e.g., the direction of gravity. It may comprise one or more sensors of different type, such as accelerometers, gyroscopes, and magnetometers, which are common in today’s
smartphones. The current orientation of the mobile communications device 110 may, e.g., be made available via an API provided by the operating system of the mobile communications device 110. The current orientation at the time of capturing an image may be stored as metadata with the image, or in a separate data record.
The wireless network interface 440 is configured to access the wireless communications network 120 and thereby enable the mobile communications device 110 to communicate, i.e., exchange data in either direction (uplink or downlink), with the media server 130 and optionally any other network node which is accessible via the wireless communications network 120, e.g., a positioning server. It may, e.g., comprise one or more of a cellular modem (e.g., GSM, UMTS, LTE, 5G, NR/NX), a WLAN/Wi-Fi modem, a Bluetooth modem, a Visible Light Communication (VLC) modem, and the like.
The processing circuit 450 may comprise one or more
processors 451 , such as Central Processing Units (CPUs), microprocessors, application-specific processors, Graphics Processing Units (GPUs), and Digital Signal Processors (DSP) including image processors, or a
combination thereof, and a memory 452 comprising a computer program 453 comprising instructions. The computer program 453 is configured, when executed by the processor(s) 451 , to cause the mobile communications device 110 to perform in accordance with embodiments of the invention described herein. The computer program 453 may be downloaded to the memory 452 by means of the wireless network interface 440, as a data carrier signal carrying the computer program 453. The processor(s) 451 may further comprise one or more Application-Specific Integrated Circuits
(ASICs), Field-Programmable Gate Arrays (FPGAs), or the like, which in cooperation with, or as an alternative to, the computer program 453 are configured to cause the mobile communications device 110 to perform in accordance with embodiments of the invention described herein.
Embodiments of the media server 130, which are schematically illustrated in Fig. 5, comprise a network interface 510 and a processing circuit 520. The network interface 510 is configured to enable the media server to communicate, i.e., exchange data in either direction, with mobile
communications devices 110, via the wireless communications network 120, and optionally with other network nodes, e.g., an external database 140 for storing images. It may be any type of wired or wireless network interface, e.g., Ethernet, WLAN/Wi-Fi, or the like.
The processing circuit 520 may comprise one or more
processors 521 , such as Central Processing Units (CPUs), microprocessors, application-specific processors, Graphics Processing Units (GPUs), and Digital Signal Processors (DSP) including image processors, or a
combination thereof, and a memory 522 comprising a computer program 523 comprising instructions. The computer program 523 is configured, when executed by the processor(s) 521 , to cause the media server 130 to perform in accordance with embodiments of the invention described herein. The computer program 523 may be downloaded to the memory 522 by means of the network interface 510, as a data carrier signal carrying the computer program 523. The processor(s) 521 may further comprise one or more Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), or the like, which in cooperation with, or as an alternative to, the computer program instructions 523 are configured to cause the media server 130 to perform in accordance with embodiments of the invention described herein.
More specifically, and with reference to Fig. 2, the mobile
communications device 110 is operative to capture 211 one or more images using the camera 410. Capturing 211 the one or more images may be triggered by a user of the mobile communications device 110, e.g., by pressing a camera button which may either be a hardware button provided on a face of the mobile communications device 110, or a virtual button which is displayed on a touchscreen comprised in the mobile communications device 110 as part of the user interface of a camera app, as is known in the art. Alternatively, capturing 211 the one or more images may be effected repeatedly, periodically, or regularly, or if a current position of the mobile communications device 110 has changed by more than a threshold value (which may optionally be configured by the user of the mobile
communications device 110), in an always-on camera, or life-logger, type of fashion.
The mobile communications device 110 is further operative to transmit 215, herein also referred to as transferring or uploading, the one or more captured images, information indicating respective times of
capturing 211 the one or more images, and information pertaining to respective fields-of-view of the camera 410 during capturing 211 the one or more images, to the media server 130. In particular, the one or more images may be transmitted 215 to the media server 130 for later retrieval, from the media server 130, by a user of another mobile communications
device 110C/D which was positioned within the respective fields-of-view 111A/B of the camera 410 during capturing 211 the one or more images, as is described in further detail below.
The one or more captured images may either be transmitted 215 in the same format as they were captured and stored by the camera 410, i.e., in raw data format or in a compressed file format, or as a representation of the captured images. For instance, the mobile communications device 110 may be operative to transmit 215 a compressed version of the captured images, e.g., with reduced resolution or color space, to the media server 130 for sharing, thereby reducing bandwidth which is required for transmitting 215 the one or more images via the wireless communications network 120, and any other interconnected communications network, to the media server 130.
The information indicating respective times of capturing the one or more images, and the information pertaining to respective fields-of-view of the camera 410 during capturing the one or more images, may be
transmitted 215 together with the one or more images, either as metadata or as a separate data record, or in separate message exchanges between the mobile communications device 110 and the media server 130.
The information indicating respective times of capturing the one or more images may, e.g., comprise time stamps which are obtained from a clock comprised in the mobile communications device 110. The current time may, e.g., be obtained via an API provided by the operating system of the mobile communications device 110. The time of capturing an image may be stored as metadata with the image, or in a separate data record.
The mobile communications device 110 may be operative to determine the respective fields-of-view 111A/B of the camera 410 during capturing the one or more images based on information received from the positioning sensor 420 and the orientation sensor 430.
More specifically, mobile communications device 110 may be operative to determine 212 respective positions of the mobile
communications device 110 during capturing 211 the one or more images using the positioning sensor 420, and determine 213 respective directions in which the camera 410 is pointing during capturing 211 the one or more images using the orientation sensor 430. The information may either be received directly from the positioning sensor 420 and the orientation sensor 430, or via an API of the operating system.
Optionally, the mobile communications device 110 may be operative to determine the respective fields-of-view 111A/B of the camera 410 during capturing 211 the one or more images further based on information received from the camera 410. More specifically, the mobile communications device 110 may be operative to determine 214 respective angles-of-view of the camera 410 during capturing 211 the one or more images based on information pertaining to a configuration of the camera 410. The information may either be received directly from the camera 410, via an API of the operating system, as is described hereinbefore, or via an API of a camera app which is executed on the mobile communications device 110 and which is provided for controlling the camera 410 via an (optionally touch-based) user-interface of the mobile communications device 110. The information may, e.g., relate to one or more of a current focal-length setting of the camera 410, a size of the sensor of the camera 410, a current angle-of-view of the camera 410, or the like.
The transmitted 215 information pertaining to the respective fields-of- view of the camera 410 during capturing 211 the one or more images comprises the determined respective positions and the determined
respective directions. It may optionally further comprise the determined respective angles-of-view 111 A/B.
The mobile communications device 110 may further be operative to transmit additional information pertaining to light conditions, weather conditions, or visibility, during capturing the images. This additional information may be transmitted either as metadata or as a separate data record, or in separate message exchanges between the mobile
communications device 110 and the media server 130.
The media server 130 is operative to receive 215 images captured by cameras comprised in a plurality of mobile communications devices 110, information indicating respective times of capturing the images, and information pertaining to respective fields-of-view 111A/B of the cameras during capturing the images, from the mobile communications devices 110. The received information pertaining to respective fields-of-view 111A/B of the cameras during capturing the images may comprise respective positions of the mobile communications devices 110 during capturing the images and respective directions in which the cameras are pointing during capturing the images. Optionally, the received information pertaining to respective fields-of-view 111A/B of the cameras during capturing the images may further comprise respective angles-of-view of the cameras during capturing the images. As an alternative to receiving 215 respective angles-of-view of the cameras during capturing the images with the information pertaining to respective fields-of-view 111A/B of the cameras during capturing the images, the media server 130 may be operative to determine the respective angles-of-view by analyzing the received images, e.g., based on known dimensions of objects captured in the images (e.g., known buildings which are recognized using object-recognition algorithms) and the distance between the camera and the object, or based on analyzing a distortion of known geometric shapes (e.g., lines which are known to be parallel or surfaces which are known to be rectangular).
The media server 130 is further operative to associatively store 221 the received 215 images, the information indicating respective times of capturing the images, and the information pertaining to respective fields-of-view 111A/B of the cameras during capturing the images, in a database. The database may either be comprised in, or co-located with, the media server 130 (such as database 530 shown in Fig. 5), or provided separately from the media server 130 and accessible for the media server 130 via network interface 510 (such as database 140 shown in Fig. 1), e.g., as a cloud-based storage.
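A minimal sketch of such associative storage is given below, using an in-memory SQLite database; the schema, column names, and the use of SQLite are illustrative assumptions, not part of the disclosure:

```python
import sqlite3

# Schema and column names are illustrative only; the description above does not
# prescribe a particular database layout.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE images (
    image_id TEXT PRIMARY KEY,
    captured_at REAL,          -- time of capturing the image
    cam_x REAL, cam_y REAL,    -- position of the camera during capture
    bearing_deg REAL,          -- direction in which the camera was pointing
    angle_of_view_deg REAL)""")

def store_image(image_id, captured_at, cam_pos, bearing_deg, aov_deg):
    """Associatively store an image reference with its capture-time metadata."""
    db.execute("INSERT INTO images VALUES (?, ?, ?, ?, ?, ?)",
               (image_id, captured_at, cam_pos[0], cam_pos[1], bearing_deg, aov_deg))
    db.commit()

store_image("img-001", 1550000000.0, (0.0, 0.0), 90.0, 65.0)
```

Keying each image to its capture time and field-of-view parameters is what later enables selection by time-stamped requester positions.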
With reference to Fig. 3, the mobile communications device 110C/D may further be operative to associatively store 231 information pertaining to one or more positions of the mobile communications device 110C/D at respective times, e.g., in the form of a sequence of position-timestamp pairs, and to transmit 232 a request for retrieval of images to the media server 130. The transmitted 232 request comprises the information pertaining to one or more positions of the mobile communications device 110C/D at respective times. The mobile communications device 110C/D is further operative to receive 313 one or more images from the media server 130, which images were captured using respective fields-of-view 111 A/B encompassing the position of the mobile communications device 110C/D during capturing the respective images.
Correspondingly, the media server 130 is further operative to receive 232 the request for retrieval of images, the request comprising information pertaining to one or more positions of the mobile communications device 110C/D at respective times (e.g., in the form of a sequence of position-timestamp pairs), to select 311 one or more images stored in the database, and to transmit 313 the selected one or more images in response to the received 232 request. The media server 130 is operative to select 311 the one or more images based on determining that the mobile
communications device 110C/D was positioned within the respective fields-of-view 111A/B associatively stored with the one or more images in the database. That is, if the position of the mobile communications
device 110C/D, as defined by the position information transmitted 232 with the request for retrieval of images, was within the field-of-view 111A/B of the camera when an image was captured 211 , that image is selected 311 for transmission 313 to the requesting mobile communications device 110C/D.
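The core selection criterion described above can be sketched as a simplified two-dimensional test of whether the requester's time-stamped position lay inside a camera's view cone; the flat (x, y) coordinate frame, the record layout, and the exact-time matching are simplifying assumptions of this sketch:

```python
import math

def within_field_of_view(cam_pos, cam_bearing_deg, angle_of_view_deg, subject_pos):
    """Return True if subject_pos lies inside the horizontal view cone of the camera.

    Positions are (x, y) in a local metric frame; bearings in degrees, 0 = north (+y).
    A simplified 2-D sketch; a real implementation would also consider distance
    limits and geodetic coordinates.
    """
    dx = subject_pos[0] - cam_pos[0]
    dy = subject_pos[1] - cam_pos[1]
    bearing_to_subject = math.degrees(math.atan2(dx, dy)) % 360
    # Signed angular difference in (-180, 180]:
    diff = (bearing_to_subject - cam_bearing_deg + 180) % 360 - 180
    return abs(diff) <= angle_of_view_deg / 2

def select_images(stored, requester_positions):
    """Select stored images whose field-of-view covered the requester at capture time.

    stored: list of dicts with keys 'time', 'pos', 'bearing', 'aov', 'image_id'
    requester_positions: dict mapping capture time -> (x, y) requester position
    """
    selected = []
    for rec in stored:
        pos = requester_positions.get(rec["time"])
        if pos is not None and within_field_of_view(rec["pos"], rec["bearing"],
                                                    rec["aov"], pos):
            selected.append(rec["image_id"])
    return selected
```

Note that no image content is inspected at this stage: the test operates purely on capture-time metadata and the requester's position history.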
Optionally, the media server 130 may further be operative to select 311 the one or more images stored in the database 140/530 further based on distances between respective positions of the other mobile communications device 110 and respective positions of the mobile
communications device 110 during capturing 211 the images. For instance, this may be achieved by using a threshold distance, which may optionally be specified by the user requesting the retrieval of images, or by prioritizing or sorting the selected images based on distance. For instance, preference may be given to images captured at shorter distance from the requesting user. In this way, the images which are selected 311 and transmitted 313 in response to a request 232 for retrieval of images are more likely to represent the user of the other mobile communications device 110 in a desirable manner, as the user was closer, or sufficiently close, to the camera during capturing the image.
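The optional distance-based refinement could, for instance, be sketched as follows; the tuple layout and the flat coordinate frame are assumptions of this sketch:

```python
import math

def prioritize_by_distance(candidates, max_distance=None):
    """Sort already-selected images by camera-to-subject distance, nearest first.

    candidates: list of (image_id, cam_pos, subject_pos) tuples, with positions
    as (x, y) in a local metric frame.
    max_distance: optional threshold (same units as positions); images captured
    farther away are dropped, mirroring the optional user-specified threshold.
    """
    def dist(c):
        (_, (cx, cy), (sx, sy)) = c
        return math.hypot(sx - cx, sy - cy)

    kept = [c for c in candidates if max_distance is None or dist(c) <= max_distance]
    return [c[0] for c in sorted(kept, key=dist)]
```

Sorting rather than hard filtering preserves all candidate images while still giving preference to those captured at shorter distance from the requesting user.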
Additionally or alternatively, the media server 130 may be operative to select images based on light conditions, weather conditions, or visibility, during capturing the images, as images which were “shot into the sun” or captured during low-visibility conditions typically are less preferred. This may be achieved by selecting images based on additional information received from the mobile communications devices 110, or by retrieving historical information about light conditions, weather conditions, or visibility, from a weather database.
Further with reference to Fig. 3, the mobile communications
device 110C/D may further be operative to transmit 232 an image
representation of a user of the mobile communications device 110C/D with the request for retrieval of images to the media server 130. This may, e.g., be an image of the user’s face or body, or parts thereof, or a set of features which is derived from an image-recognition algorithm, in particular a face-recognition algorithm. The one or more images which are received 313 from the media server 130 represent the user.
Correspondingly, the media server 130 may be operative to
receive 232 an image representation of a user with the request for retrieval of images, and to recognize the user by analyzing one or more images which are selected based on determining that the other mobile communications device 110C/D was positioned within the respective fields-of-view 111A/B associatively stored with the one or more images, by using the received image representation of the user as input to a face-recognition algorithm.
Advantageously, face recognition is thereby only performed on images which have been selected based on determining that the requesting mobile communications device 110C/D was positioned within the respective fields-of-view 111A/B associatively stored with the images in the database. The media server 130 is operative to select 311 the one or more images stored in the database 140/530 further based on determining that the user was successfully recognized.
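This two-stage refinement may be sketched as below; the `recognize` callback is a stand-in for a real face-recognition algorithm, and its signature is an assumption made purely for illustration:

```python
def refine_with_face_recognition(candidate_images, user_representation, recognize):
    """Keep only pre-selected images in which the requesting user is recognized.

    recognize(image, representation) -> bool stands in for a real face-recognition
    algorithm; its signature is an assumption made for this sketch. Crucially, it
    is only invoked on images that already passed the field-of-view selection,
    which keeps the cost of face recognition low.
    """
    return [img for img in candidate_images if recognize(img, user_representation)]
```

The same structure accommodates further refinements, e.g., a `recognize` that additionally requires a smiling expression or a gaze towards the camera.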
Optionally, the media server 130 may be operative to select 311 the one or more images stored in the database 140/530 further based on respective facial expressions of the user in the images in which the user was successfully recognized. This may be achieved by utilizing known face- recognition algorithms which are able to detect facial expressions. For instance, selection of images may be limited to images in which the user is smiling.
Optionally, the media server 130 may be operative to select 311 the one or more images stored in the database 140/530 further based on respective gazes of the user in the images in which the user was successfully
recognized. This may be achieved by utilizing known face-recognition algorithms which are able to detect a gaze direction. For instance, selection of images may be limited to images in which the user is gazing at the camera.
Further with reference to Fig. 3, the mobile communications
device 110C/D may be operative, if a plurality of images is received 313 from the media server 130, to combine 341 the received plurality of images into a slide show or a video. Alternatively, the media server 130 may be operative to select 311 a plurality of images and to combine 312 the selected plurality of images into a slide show or a video. Thereby, based on the position history of the mobile communications device 110C/D, which typically reflects the locations visited by its user during a duration of time, e.g., during a trip which the user has made, images representing the user can be combined into a slide show or video summarizing the user’s trip.
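A trivial sketch of assembling such a trip summary is to order the retrieved images by capture time; the pair layout below is an illustrative assumption:

```python
def build_slide_show(images):
    """Order retrieved images chronologically, e.g., to summarize a user's trip.

    images: list of (capture_time, image_id) pairs; the pair layout is an
    illustrative assumption. Sorting on the pairs orders by capture time first.
    """
    return [image_id for _, image_id in sorted(images)]
```

Encoding the ordered sequence into an actual video is left to standard media tooling and is not sketched here.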
The mobile communications device 110C/D may further be operative, if a plurality of images is received 313 from the media server 130 which were captured at substantially the same time by different mobile communications devices 110A and 110B having respective positions and orientations, to combine 342 the received plurality of images into a 3D image. Alternatively, the mobile communications device 110C/D may be operative to select 341 images, among the received plurality of images, which were captured at substantially the same time by different mobile communications
devices 110A and 110B having respective positions and orientations, and to combine 342 the selected images into a 3D image. As yet a further alternative, the media server 130 may be operative to select 311 a plurality of images which were captured at substantially the same time by different mobile communications devices having respective positions and orientations, and to combine 312 the plurality of images into a 3D image. The resulting 3D image is transmitted 313 as a representation of the selected one or more images in response to the received request. By selecting images which were captured at substantially the same time, i.e., within a defined short time interval during which the captured user, or the captured scene in general, can be considered static, rendition of a 3D image from two or more 2D images may be performed, as is known in the art.
It will be appreciated that the position information which is
received 232 with the request for retrieval of images may contain position information pertaining to current positions of the mobile communications device 110C/D at respective times which does not exactly coincide with capturing times of images stored in the database. In this case, the
received 232 position information may be interpolated to estimate an approximate position of the mobile communications device 110C/D at respective capturing times of images stored in the database. Alternatively, images may be selected which have been captured shortly before, or shortly after, a point in time for which position information has been received 232 from the mobile communications device 110C/D. The selection of images based on respective capturing times which are close in time to time-stamped position information received 232 from the mobile communications
device 110C/D may also be based on a speed of the mobile communications device 110C/D at the relevant time. For instance, if the mobile
communications device 110C/D was substantially stationary during a certain duration of time, selecting 311 images with respective capturing times during that duration of time does not require exact matching of position timestamps with respective capturing times.
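The interpolation described above may, under the simplifying assumption of straight-line motion between consecutive position samples, be sketched as:

```python
def interpolate_position(track, t):
    """Estimate the requester's position at time t by linear interpolation.

    track: list of (timestamp, (x, y)) pairs sorted by timestamp, i.e., the
    time-stamped position information transmitted with the request.
    Returns None if t falls outside the recorded track.
    """
    for (t0, p0), (t1, p1) in zip(track, track[1:]):
        if t0 <= t <= t1:
            if t1 == t0:
                return p0
            w = (t - t0) / (t1 - t0)
            return (p0[0] + w * (p1[0] - p0[0]), p0[1] + w * (p1[1] - p0[1]))
    return None
```

The estimated position at an image's capture time can then be fed into the field-of-view test; when the device was substantially stationary, the interpolated position simply equals the recorded one.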
It will be appreciated that alternative embodiments of the mobile communications device 110 may be envisaged. In particular, such alternative embodiments may not necessarily be operative to capture 211 one or more images using the camera 410, and to transmit 215 the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view of the camera during capturing the one or more images, to the media server 130. Rather, an alternative embodiment of the mobile communications device 110C/D may be operative to associatively store 231 information pertaining to one or more positions of the mobile communications device at respective times, transmit 232 a request for retrieval of images to the media server, the request comprising the information pertaining to one or more positions of the mobile communications device at respective times, and to receive 313 one or more images from the media server, without being operative to perform the steps illustrated in Fig. 2, i.e., without transferring captured images to the media server 130 for sharing with users of other mobile communications devices. Such an alternative embodiment of the mobile communications device 110 may optionally be operative to perform additional steps, in accordance with what is described herein.
It may also be envisaged that a request for retrieval of images may be received by the media server 130 from a computing device other than the mobile communications device 110. For instance, users may utilize a personal computer, a laptop, a tablet, or the like, to request retrieval of images from the media server 130 based on time-stamped position information which has been collected by their respective mobile communications devices, wearables, smartwatches, or the like.
The mobile communications device 110 is operative to exchange information with the media server 130, in particular to transmit 215 the one or more captured images, the information indicating respective times of capturing the one or more images, and the information pertaining to respective fields-of-view of the camera during capturing the one or more images, and to receive 313 one or more images from the media server 130, using any suitable network protocol, combination of network protocols, or protocol stack. For instance, the mobile communications device 110 may be operative to utilize the HyperText Transfer Protocol (HTTP), the Transmission Control Protocol (TCP), the Internet Protocol (IP), the User Datagram
Protocol (UDP), the Constrained Application Protocol (CoAP), or the like. The media server 130 is operative to exchange information with the mobile communications devices 110, and optionally with an external database 140, using one or more corresponding network protocols.
In the following, embodiments of a method 600 performed by a mobile communications device, such as the mobile communications device 110, are described with reference to Fig. 6.
The method 600 comprises capturing 601 one or more images using a camera 410 comprised in the mobile communications device 110, and transmitting 605 the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view 111 of the camera 410 during capturing the one or more images, to a media server 130. More specifically, the one or more images are transmitted 605 to the media server 130 for later retrieval, from the media server 130, by a user of another mobile communications device 110 which was positioned within the respective fields-of-view 111 of the camera 410 during capturing the one or more images. The method 600 may further comprise determining the respective fields-of-view 111 of the camera 410 during capturing the one or more images based on information received from a positioning sensor 420 and an orientation sensor 430 comprised in the mobile communications device 110. Optionally, the respective fields-of-view 111 of the camera 410 during capturing the one or more images are determined further based on information received from the camera 410.
The method 600 may further comprise determining 602 respective positions of the mobile communications device 110 during capturing the one or more images using a positioning sensor 420 comprised in the mobile communications device 110, and determining 603 respective directions in which the camera 410 is pointing during capturing the one or more images using an orientation sensor 430 comprised in the mobile communications device 110. The information pertaining to the respective fields-of-view 111 of the camera 410 during capturing the one or more images comprises the determined respective positions and the determined respective directions. Optionally, the method 600 further comprises determining 604 respective angles-of-view of the camera 410 during capturing the one or more images based on information pertaining to a configuration of the camera 410, and the information pertaining to the respective fields-of-view 111 of the camera 410 during capturing the one or more images further comprises the determined respective angles-of-view.
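As a rough illustration of steps 602 to 604, the determined position, direction, and angle-of-view could be collected into a single field-of-view record, with the angle-of-view derived from the camera configuration via the usual pinhole relation AoV = 2·arctan(d / 2f), where d is the sensor width and f the focal length. The record layout below is an assumption made for the sketch:

```python
import math
from dataclasses import dataclass

@dataclass
class FieldOfView:
    lat: float                 # position of the device during capture (602)
    lon: float
    bearing_deg: float         # direction the camera is pointing (603)
    angle_of_view_deg: float   # horizontal angle-of-view of the camera (604)

def angle_of_view_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    # Pinhole-camera relation: AoV = 2 * arctan(d / 2f).
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Example: a 36 mm-wide sensor behind a 28 mm lens gives roughly 65 degrees.
aov = angle_of_view_deg(36.0, 28.0)
fov = FieldOfView(lat=59.3293, lon=18.0686, bearing_deg=120.0, angle_of_view_deg=aov)
```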
The method 600 may further comprise associatively storing 606 information pertaining to one or more positions of the mobile communications device 110 at respective times, transmitting 607 a request for retrieval of images to the media server 130, and receiving 608 one or more images from the media server 130. The request comprises the information pertaining to one or more positions of the mobile communications device 110 at respective times. More specifically, the one or more images which are received 608 from the media server 130 were captured using respective fields-of-view 111 encompassing the position of the mobile communications device 110 during capturing the respective images.
Optionally, the request for retrieval of images further comprises an image representation of a user of the mobile communications device 110, and the one or more images which are received 608 from the media server 130 represent the user.
Optionally, a plurality of images is received 608 from the media server 130, and the method 600 further comprises combining 609 the received plurality of images into a slide show or a video.
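One minimal reading of the combining step 609 is to order the received images by their capture times, yielding the frame sequence of a slide show (video encoding would then be a further step). The sketch assumes each received image carries a `capture_time` field, which is an illustrative choice rather than anything mandated by the disclosure:

```python
def combine_into_slide_show(images):
    # Order the received plurality of images by capture time; the sorted
    # sequence can be played back as a slide show or encoded into a video.
    return sorted(images, key=lambda img: img["capture_time"])

frames = combine_into_slide_show([
    {"id": "b", "capture_time": 20.0},
    {"id": "a", "capture_time": 10.0},
    {"id": "c", "capture_time": 15.0},
])
```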
Optionally, a plurality of images is received 608 from the media server 130 which were captured at substantially the same time by different mobile communications devices 110 having respective positions and orientations, and the method 600 further comprises combining 611 the received plurality of images into a 3D image.
Optionally, a plurality of images is received 608 from the media server 130, and the method 600 further comprises selecting 610 images, among the plurality of images received from the media server 130, which were captured at substantially the same time by different mobile
communications devices 110 having respective positions and orientations, and combining 611 the selected images into a 3D image.
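"Substantially the same time" is not quantified in the text; one plausible realization of the selecting step 610 is to group time-sorted images within a small tolerance window, keeping at most one image per capturing device in each group. The tolerance value and the record fields below are assumptions for the sketch:

```python
def select_simultaneous(images, tolerance_s=0.5):
    # Greedy pass over time-sorted images: images captured within
    # `tolerance_s` of the first image of the current group, by devices not
    # yet represented in that group, become candidates for a 3D combination.
    groups, current = [], []
    for img in sorted(images, key=lambda i: i["capture_time"]):
        if current and img["capture_time"] - current[0]["capture_time"] > tolerance_s:
            groups.append(current)
            current = []
        if all(img["device"] != other["device"] for other in current):
            current.append(img)
    if current:
        groups.append(current)
    return [g for g in groups if len(g) >= 2]   # a 3D image needs at least 2 views

groups = select_simultaneous([
    {"device": "A", "capture_time": 0.0},
    {"device": "B", "capture_time": 0.2},
    {"device": "A", "capture_time": 5.0},
    {"device": "C", "capture_time": 5.1},
])
```

Each resulting group would then be handed to the combining step 611; the actual 3D reconstruction is outside the scope of this sketch.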
It will be appreciated that the method 600 may comprise additional, alternative, or modified steps in accordance with what is described throughout this disclosure. An embodiment of the method 600 may be implemented as a computer program 453 comprising instructions which, when the computer program is executed by a processor 451 comprised in a mobile communications device 110, cause the mobile communications device 110 to carry out an embodiment of the method 600. The computer program 453 may be stored on a computer-readable storage medium 452, such as a memory stick, a Random-Access Memory (RAM), a Read-Only Memory (ROM), a Flash memory, a CD-ROM, a DVD, or the like. Alternatively, the computer program 453 may be carried by a data carrier signal, e.g., when the computer program is downloaded to a mobile communications device 110 via a wireless network interface 440 comprised in the mobile communications device 110.
In the following, embodiments of a method 700 performed by a media server, such as the media server 130, are described with reference to Fig. 7.
The method 700 comprises receiving 701 images captured by cameras comprised in a plurality of mobile communications devices 110, information indicating respective times of capturing the images, and information pertaining to respective fields-of-view 111 of the cameras during capturing the images, from the mobile communications devices 110. The method 700 further comprises associatively storing 702 the received images, the information indicating respective times of capturing the images, and the information pertaining to respective fields-of-view 111 of the cameras during capturing the images, in a database 140/530. The method 700 further comprises receiving 703 a request for retrieval of images, the request comprising information pertaining to one or more positions of another mobile communications device 110 at respective times. The method 700 further comprises selecting 704 one or more images stored in the database 140/530 based on determining that the other mobile communications device 110 was positioned within the respective fields-of-view 111 associatively stored with the one or more images. The method 700 further comprises transmitting 711 the selected one or more images in response to the received request.
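The selecting step 704 hinges on a geometric test: was the requesting device, at the capture time, inside the sector defined by the capturing camera's position, pointing direction, and angle-of-view? The Python sketch below works in a flat local frame in metres (a real implementation would first convert geodetic positions) and caps the sector at an assumed maximum range, since a field-of-view would otherwise extend indefinitely; both simplifications are assumptions of this sketch.

```python
import math

def within_field_of_view(fov, point, max_range_m=50.0):
    # `fov` holds the capturing camera's position (x, y), bearing, and
    # angle-of-view; `point` is the other device's position in the same
    # local frame. Bearings are measured clockwise from the +y axis.
    dx, dy = point[0] - fov["x"], point[1] - fov["y"]
    distance = math.hypot(dx, dy)
    if distance == 0.0:
        return True                     # co-located with the camera
    if distance > max_range_m:
        return False
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    # Smallest signed angular difference between the point's bearing and
    # the camera's pointing direction, compared against half the angle-of-view.
    offset = abs((bearing - fov["bearing_deg"] + 180.0) % 360.0 - 180.0)
    return offset <= fov["angle_of_view_deg"] / 2.0

fov = {"x": 0.0, "y": 0.0, "bearing_deg": 0.0, "angle_of_view_deg": 60.0}
hit = within_field_of_view(fov, (5.0, 20.0))    # about 14 degrees off-axis
miss = within_field_of_view(fov, (30.0, 5.0))   # about 80 degrees off-axis
```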
The received information pertaining to respective fields-of-view 111 of the cameras during capturing the images may comprise respective positions of the mobile communications devices 110 during capturing the images and respective directions in which the cameras are pointing during capturing the images. Optionally, the received information pertaining to respective fields-of-view 111 of the cameras during capturing the images may further comprise respective angles-of-view of the cameras during capturing the images. The one or more images stored in the database 140/530 may be selected 704 further based on a distance between respective positions of the other mobile communications device 110 and respective positions of the mobile communications device 110 during capturing the images.
The request for retrieval of images may further comprise an image representation of a user requesting the retrieval of images. The method 700 may further comprise recognizing the user by analyzing one or more images selected based on determining that the other mobile communications device 110 was positioned within the respective fields-of-view 111
associatively stored with the one or more images, and the one or more images stored in the database 140/530 may be selected 704 further based on determining that the user was successfully recognized.
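This recognition-based gating can be seen as a filter over the geometrically selected candidates. The recognizer interface below is purely hypothetical, a stand-in for whatever face-recognition component matches candidates against the submitted image representation of the user:

```python
def select_recognized(candidates, recognize_user):
    # Keep only those candidate images in which the requesting user is
    # successfully recognized; `recognize_user(image)` is a hypothetical
    # predicate backed by a face-recognition component.
    return [img for img in candidates if recognize_user(img)]

selected = select_recognized(
    [{"id": 1, "faces": ["alice", "bob"]}, {"id": 2, "faces": ["bob"]}],
    lambda img: "alice" in img["faces"],
)
```

Further gating on facial expression or gaze, as described below, would be additional predicates of the same shape.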
The one or more images stored in the database 140/530 may be selected 704 further based on respective facial expressions of the user in the images in which the user was successfully recognized.
The one or more images stored in the database 140/530 may be selected 704 further based on respective gazes of the user in the images in which the user was successfully recognized.
Optionally, a plurality of images is selected 704, and the method 700 further comprises combining 708 the selected plurality of images into a slide show or a video.
Optionally, a plurality of images is selected 704 which were captured at substantially the same time by different mobile communications
devices 110 having respective positions and orientations, and the
method 700 further comprises combining 710 the plurality of images into a 3D image, and transmitting 711 the 3D image as a representation of the selected one or more images in response to the received request.
It will be appreciated that the method 700 may comprise additional, alternative, or modified steps in accordance with what is described throughout this disclosure. An embodiment of the method 700 may be implemented as a computer program 523 comprising instructions which, when the computer program is executed by a processor 521 comprised in a media server 130, cause the media server 130 to carry out an embodiment of the method 700. The computer program 523 may be stored on a computer-readable storage medium 522, such as a memory stick, a Random-Access Memory (RAM), a Read-Only Memory (ROM), a Flash memory, a CD-ROM, a DVD, or the like. Alternatively, the computer program 523 may be carried by a data carrier signal, e.g., when the computer program is downloaded to a media server 130 via a network interface 510 comprised in the media server 130.
The person skilled in the art realizes that the invention by no means is limited to the embodiments described above. On the contrary, many modifications and variations are possible within the scope of the appended claims.

Claims

1. A mobile communications device (110) comprising:
a camera (410),
a positioning sensor (420),
an orientation sensor (430),
a wireless network interface (440), and
a processing circuit (450) causing the mobile communications device to be operative to:
capture (211 ) one or more images using the camera (410), and transmit (215) the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view (111 ) of the camera (410) during capturing the one or more images, to a media server (130).
2. The mobile communications device (110) according to claim 1 , wherein the one or more images are transmitted to the media server (130) for later retrieval (232, 313), from the media server (130), by a user of another mobile communications device (110) which was positioned within the respective fields-of-view (111 ) of the camera (410) during capturing the one or more images.
3. The mobile communications device (110) according to claim 1 or 2, the mobile communications device being operative to determine the respective fields-of-view (111 ) of the camera (410) during capturing the one or more images based on information received from the positioning
sensor (420) and the orientation sensor (430).
4. The mobile communications device (110) according to claim 3, the mobile communications device being operative to determine the respective fields-of-view (111 ) of the camera (410) during capturing the one or more images further based on information received from the camera (410).
5. The mobile communications device (110) according to claim 1 or 2, the mobile communications device being further operative to:
determine (212) respective positions of the mobile communications device (110) during capturing the one or more images using the positioning sensor (420), and
determine (213) respective directions in which the camera (410) is pointing during capturing the one or more images using the orientation sensor (430),
wherein the information pertaining to the respective fields-of-view (111 ) of the camera (410) during capturing the one or more images comprises the determined respective positions and the determined respective directions.
6. The mobile communications device (110) according to claim 5, the mobile communications device being further operative to:
determine (214) respective angles-of-view of the camera (410) during capturing the one or more images based on information pertaining to a configuration of the camera (410),
wherein the information pertaining to the respective fields-of-view (111 ) of the camera during capturing the one or more images further comprises the determined respective angles-of-view.
7. The mobile communications device (110) according to any one of claims 1 to 6, the mobile communications device being further operative to: associatively store (231 ) information pertaining to one or more positions of the mobile communications device (110) at respective times, transmit (232) a request for retrieval of images to the media
server (130), the request comprising the information pertaining to one or more positions of the mobile communications device (110) at respective times, and
receive (313) one or more images from the media server (130).
8. The mobile communications device (110) according to claim 7, wherein the one or more images which are received from the media server (130) were captured using respective fields-of-view (111 )
encompassing the position of the mobile communications device (110) during capturing the respective images.
9. The mobile communications device (110) according to claim 7 or 8, the request (232) for retrieval of images further comprising an image representation of a user of the mobile communications device (110), wherein the one or more images which are received (313) from the media server represent the user.
10. The mobile communications device (110) according to any one of claims 7 to 9, wherein a plurality of images is received (313) from the media server (130), the mobile communications device (110) being further operative to combine (342) the received plurality of images into a slide show or a video.
11. The mobile communications device (110) according to any one of claims 7 to 9, wherein a plurality of images is received (313) from the media server (130) which were captured at substantially the same time by different mobile communications devices (110) having respective positions and orientations, the mobile communications device being further operative to combine (342) the received plurality of images into a 3D image.
12. The mobile communications device (110) according to any one of claims 7 to 9, wherein a plurality of images is received (313) from the media server (130), the mobile communications device being further operative to: select (341 ) images, among the plurality of images received from the media server, which were captured at substantially the same time by different mobile communications devices (110) having respective positions and orientations, and
combine (342) the selected images into a 3D image.
13. A media server (130) comprising:
a network interface (510), and
a processing circuit (520) causing the media server to be operative to: receive (215) images captured by cameras comprised in a plurality of mobile communications devices (110), information indicating respective times of capturing the images, and information pertaining to respective fields-of-view (111 ) of the cameras during capturing the images, from the mobile communications devices (110), associatively store (221 ) the received images, the information indicating respective times of capturing the images, and the
information pertaining to respective fields-of-view of the cameras during capturing the images, in a database (140; 530),
receive (232) a request for retrieval of images, the request comprising information pertaining to one or more positions of another mobile communications device (110) at respective times,
select (311 ) one or more images stored in the database (140;
530) based on determining that the other mobile communications device (110) was positioned within the respective fields-of-view (111 ) associatively stored with the one or more images, and
transmit (313) the selected one or more images in response to the received request.
14. The media server (130) according to claim 13, wherein the received (232) information pertaining to respective fields-of-view (111 ) of the cameras during capturing the images comprises respective positions of the mobile communications devices (110) during capturing the images and respective directions in which the cameras are pointing during capturing the images.
15. The media server (110) according to claim 14, wherein the received (232) information pertaining to respective fields-of-view (111 ) of the cameras during capturing the images further comprises respective angles-of- view of the cameras during capturing the images.
16. The media server (130) according to any one of claims 13 to 15, the media server being operative to select (311 ) the one or more images stored in the database (140; 530) further based on distances between respective positions of the other mobile communications device (110) and respective positions of the mobile communications device (110) during capturing the images.
17. The media server (110) according to any one of claims 13 to 16, the request (232) for retrieval of images further comprising an image representation of a user requesting the retrieval of images, the media server being operative to:
recognize the user by analyzing one or more images selected based on determining that the other mobile communications device (110) was positioned within the respective fields-of-view (111 ) associatively stored with the one or more images, and
select (311 ) one or more images stored in the database (140; 530) further based on determining that the user was successfully recognized.
18. The media server according to claim 17, the media server (130) being operative to select (311 ) the one or more images stored in the database (140; 530) further based on respective facial expressions of the user in the images in which the user was successfully recognized.
19. The media server according to claim 17, the media server (130) being operative to select the one or more images stored in the
database (140; 530) further based on respective gazes of the user in the images in which the user was successfully recognized.
20. The media server (130) according to any one of claims 13 to 19, wherein a plurality of images is selected (311 ), the media server being further operative to combine (312) the selected plurality of images into a slide show or a video.
21. The media server (130) according to any one of claims 13 to 19, wherein a plurality of images is selected (311 ) which were captured at substantially the same time by different mobile communications
devices (110) having respective positions and orientations, the media server being further operative to:
combine (312) the plurality of images into a 3D image, and
transmit (313) the 3D image as a representation of the selected one or more images in response to the received request.
22. A method (600) performed by a mobile communications
device (110), the method comprising:
capturing (211 ; 601 ) one or more images using a camera (410) comprised in the mobile communications device (110), and transmitting (215; 605) the one or more captured images, information indicating respective times of capturing the one or more images, and information pertaining to respective fields-of-view (111 ) of the camera (410) during capturing the one or more images, to a media server (130).
23. The method (600) according to claim 22, wherein the one or more images are transmitted (215; 605) to the media server (130) for later retrieval (232, 313; 607, 608), from the media server (130), by a user of another mobile communications device (110) which was positioned within the respective fields-of-view (111 ) of the camera (410) during capturing the one or more images.
24. The method (600) according to claim 22 or 23, further comprising determining the respective fields-of-view (111 ) of the camera (410) during capturing the one or more images based on information received from a positioning sensor (420) and an orientation sensor (430) comprised in the mobile communications device (110).
25. The method (600) according to claim 24, wherein the respective fields-of-view (111 ) of the camera (410) during capturing the one or more images are determined further based on information received from the camera (410).
26. The method (600) according to claim 22 or 23, further comprising: determining (212; 602) respective positions of the mobile
communications device (110) during capturing the one or more images using a positioning sensor (420) comprised in the mobile communications device (110), and
determining (213; 603) respective directions in which the
camera (410) is pointing during capturing the one or more images using an orientation sensor (430) comprised in the mobile communications
device (110),
wherein the information pertaining to the respective fields-of-view (111 ) of the camera (410) during capturing the one or more images comprises the determined respective positions and the determined respective directions.
27. The method (600) according to claim 26, further comprising:
determining (214; 604) respective angles-of-view of the camera (410) during capturing the one or more images based on information pertaining to a configuration of the camera (410),
wherein the information pertaining to the respective fields-of-view (111 ) of the camera (410) during capturing the one or more images further comprises the determined respective angles-of-view.
28. The method (600) according to any one of claims 22 to 27, further comprising:
associatively storing (231 ; 606) information pertaining to one or more positions of the mobile communications device (110) at respective times, transmitting (232; 607) a request for retrieval of images to the media server (130), the request comprising the information pertaining to one or more positions of the mobile communications device (110) at respective times, and
receiving (313; 608) one or more images from the media server (130).
29. The method (600) according to claim 28, wherein the one or more images which are received (313; 608) from the media server (130) were captured using respective fields-of-view (111 ) encompassing the position of the mobile communications device (110) during capturing the respective images.
30. The method (600) according to claim 28 or 29, the request (232) for retrieval of images further comprising an image representation of a user of the mobile communications device (110), wherein the one or more images which are received (313; 608) from the media server (130) represent the user.
31. The method (600) according to any one of claims 28 to 30, wherein a plurality of images is received (313; 608) from the media
server (130), the method further comprising combining (342; 609) the received plurality of images into a slide show or a video.
32. The method (600) according to any one of claims 28 to 30, wherein a plurality of images is received (313; 608) from the media
server (130) which were captured at substantially the same time by different mobile communications devices (110) having respective positions and orientations, the method further comprising combining (342; 611 ) the received plurality of images into a 3D image.
33. The method (600) according to any one of claims 28 to 30, wherein a plurality of images is received (313; 608) from the media
server (130), the method further comprising:
selecting (341 ; 610) images, among the plurality of images received from the media server (130), which were captured at substantially the same time by different mobile communications devices (110) having respective positions and orientations, and
combining (342; 611 ) the selected images into a 3D image.
34. A computer program (453) comprising instructions which, when the computer program is executed by a processor (451 ) comprised in a mobile communications device (110), cause the mobile communications device to carry out the method according to any one of claims 22 to 33.
35. A computer-readable storage medium (452) having stored thereon the computer program (453) according to claim 34.
36. A data carrier signal carrying the computer program (453) according to claim 34.
37. A method (700) performed by a media server (130), the method comprising:
receiving (215; 701 ) images captured by cameras comprised in a plurality of mobile communications devices (110), information indicating respective times of capturing the images, and information pertaining to respective fields-of-view (111 ) of the cameras during capturing the images, from the mobile communications devices (110),
associatively storing (221 ; 702) the received images, the information indicating respective times of capturing the images, and the information pertaining to respective fields-of-view (111 ) of the cameras during capturing the images, in a database (140; 530),
receiving (232; 703) a request for retrieval of images, the request comprising information pertaining to one or more positions of another mobile communications device (110) at respective times,
selecting (311 ; 704) one or more images stored in the database (140; 530) based on determining that the other mobile communications
device (110) was positioned within the respective fields-of-view (111 ) associatively stored with the one or more images, and
transmitting (313; 711 ) the selected one or more images in response to the received request.
38. The method (700) according to claim 37, wherein the received information pertaining to respective fields-of-view (111 ) of the cameras during capturing the images comprises respective positions of the mobile
communications devices (110) during capturing the images and respective directions in which the cameras are pointing during capturing the images.
39. The method (700) according to claim 38, wherein the received information pertaining to respective fields-of-view (111 ) of the cameras during capturing the images further comprises respective angles-of-view of the cameras during capturing the images.
40. The method (700) according to any one of claims 37 to 39, wherein the one or more images stored in the database (140; 530) are selected (311 ; 704) further based on a distance between respective positions of the other mobile communications device (110) and respective positions of the mobile communications device (110) during capturing the images.
41. The method (700) according to any one of claims 37 to 40, the request (232) for retrieval of images further comprising an image
representation of a user requesting the retrieval of images, the method further comprising recognizing the user by analyzing one or more images selected based on determining that the other mobile communications device (110) was positioned within the respective fields-of-view (111 ) associatively stored with the one or more images,
wherein the one or more images stored in the database (140; 530) are selected (311 ; 704) further based on determining that the user was successfully recognized.
42. The method (700) according to claim 41 , wherein the one or more images stored in the database (140; 530) are selected (311 ; 704) further based on respective facial expressions of the user in the images in which the user was successfully recognized.
43. The method (700) according to claim 41 , wherein the one or more images stored in the database (140; 530) are selected (311 ; 704) further based on respective gazes of the user in the images in which the user was successfully recognized.
44. The method (700) according to any one of claims 37 to 43, wherein a plurality of images is selected (311 ; 704), the method further comprising combining (312; 708) the selected plurality of images into a slide show or a video.
45. The method (700) according to any one of claims 37 to 43, wherein a plurality of images is selected (311 ; 704) which were captured at substantially the same time by different mobile communications
devices (110) having respective positions and orientations, the method further comprising:
combining (312; 710) the plurality of images into a 3D image, and transmitting (313; 711 ) the 3D image as a representation of the selected one or more images in response to the received request.
46. A computer program (523) comprising instructions which, when the computer program is executed by a processor (521 ) comprised in a media server (130), cause the media server to carry out the method according to any one of claims 37 to 45.
47. A computer-readable storage medium (522) having stored thereon the computer program (523) according to claim 46.
48. A data carrier signal carrying the computer program (523) according to claim 46.
PCT/EP2019/053762, "Mobile communications device and media server", filed 2019-02-15 (priority date 2019-02-15), published 2020-08-20 as WO2020164726A1.


