GB2563087A - Methods, systems and devices for image acquisition - Google Patents


Info

Publication number
GB2563087A
GB2563087A GB1708890.7A GB201708890A
Authority
GB
United Kingdom
Prior art keywords
data
camera
target
image
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1708890.7A
Other versions
GB201708890D0 (en)
Inventor
Lynch Paul
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB1708890.7A priority Critical patent/GB2563087A/en
Priority to GB1709411.1A priority patent/GB2563088A/en
Publication of GB201708890D0 publication Critical patent/GB201708890D0/en
Publication of GB2563087A publication Critical patent/GB2563087A/en
Withdrawn legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A system and methods of acquiring an image of a target (20, fig 5a). The system has a data processing device and one or more remote cameras (2, fig 5a). The data processing device requests data specifying a target from the camera(s) S103; the camera(s) acquire an image of the target S104 and transmit the image to be accessed by a user via the data processing device S107. The system may be used in, for example, a football stadium (50, fig 5a), a sports track stadium (60, fig 5b) or a concert hall (70, fig 5c). The target may be the data processing device, which may be associated with the user; for example, the target device may be associated with a user running a race at a sports track stadium (fig 5b). The data processing device may send location data of the target to the camera(s) S106, where the location may be the location of the data processing device.

Description

The present techniques relate to electronic devices and to systems and methods of acquiring images for access by a user.
Traditionally, people wanting images (e.g. pictures or videos) of themselves may use a “selfie stick”, which can be difficult to use or awkward to carry around. Alternatively, a requestor may ask another person to take images of them on the requestor's camera device. However, the requestee may not be comfortable taking pictures using the requestor's device due to being unfamiliar with its functionality. Furthermore, the requestee may not wish to send the requestor a picture taken or acquired on the requestee's own device, as doing so may reveal confidential information such as a telephone number.
The present techniques seek to provide improvements to existing systems and methods for acquiring images of a target.
According to a first technique there is provided a system comprising: a data processing device and one or more camera devices remote from the data processing device; wherein the data processing device is configured to transmit request data to the one or more camera devices, the request data specifying a target; and wherein the one or more camera devices are configured to acquire an image of the target, and to transmit the acquired image therefrom to be accessed by a user.
According to a further technique there is provided a method of acquiring an image of a target comprising: transmitting, from a first device to a second device, request data defining the target; receiving, at the second device, the request data; acquiring, at the second device, an image of the target; transmitting, from the second device to the first device or to a remote resource, camera data comprising the image for access by a user.
According to a further technique there is provided a method of acquiring an image of a target comprising: receiving, at a first device, request data identifying a target; acquiring, at the first device, an image of the target; transmitting, from the first device, to a further device or a remote resource, camera data comprising the acquired image for access by a user.
According to a further technique there is provided a device operable in a system and having components adapted to perform the steps of the claimed method.
According to a further technique there is provided a resource operable in a system and having components to perform the steps of the claimed method.
According to a further technique there is provided a computer program product comprising computer-program code tangibly stored on a computer-readable medium, the computer program code executable by a computer system for performing the steps of the claimed method.
The present techniques are diagrammatically illustrated, by way of example, in the accompanying drawings, in which:
Figure 1a schematically shows a block diagram of a camera device according to an embodiment;
Figure 1b schematically shows a block diagram of a target device according to an embodiment;
Figure 2 schematically shows a system comprising a target device and a plurality of camera devices;
Figures 3a-3c schematically show example graphical user interfaces on the camera device of Figure la according to an embodiment;
Figure 4a schematically shows an example of a target device and camera devices according to an embodiment;
Figure 4b schematically shows an example of a target device and camera devices according to a further embodiment;
Figure 5a schematically shows an example of a target device and camera devices according to an embodiment;
Figure 5b schematically shows an example of a target device and camera devices according to a further embodiment;
Figure 5c schematically shows an example of target devices and camera devices according to a further embodiment;
Figure 6 schematically shows an example of target devices and camera devices according to a further embodiment; and
Figure 7 is a flow diagram of steps in an example application.
Figure 1a schematically shows a block diagram of a data processing device 2, hereafter “camera device” 2, which may be an electronic device with a camera such as a mobile phone, digital camera, glasses (e.g. Google® Glass or Snapchat® spectacles) etc.
The camera device 2 comprises processing circuitry 4, such as a microprocessor or integrated circuit(s) for processing data (e.g. applications) and for controlling various operations performed by the camera device 2.
The camera device 2 also has communication circuitry 6 for communicating with one or more resources remote therefrom such as data processing devices (e.g. further camera devices, computer terminals), services (e.g. cloud service), gateway devices (not shown) etc.
The communication circuitry 6 may use wireless communication 7, such as cellular (e.g. EDGE, 3G, 4G) or wireless networks such as Wi-Fi, ZigBee, Bluetooth or Bluetooth Low Energy (BLE), using any suitable communications protocol such as, for example, lightweight machine-to-machine (LWM2M), IEEE 802.11, WiMAX, or Bluetooth protocols. The communication circuitry 6 may also comprise short range communication capabilities such as radio frequency identification (RFID) or near field communication (NFC).
The camera device 2 also comprises storage circuitry 8 (e.g. non-volatile/volatile storage), for storing data provisioned on or generated by the camera device 2, hereafter “device data”.
Such device data may include application data, which comprises a software application.
Such device data may also include identifier data comprising one or more device identifiers to identify the camera device 2 and may comprise one or more of: universally unique identifier(s) (UUID), globally unique identifier(s) (GUID) and IPv6 address(es), although any suitable device identifier(s) may be used.
The device data may also include authentication data for establishing trust/cryptographic communications between the camera device 2 and a remote resource. Such authentication data may include certificates (e.g. signed by a root authority), cryptographic keys (e.g. public/private key pairs; symmetric key pairs), tokens etc. The authentication data may be provisioned on the camera device 2 by any authorised party (e.g. by an owner, a manufacturer or an installer).
The camera device 2 also comprises camera circuitry 10 having a camera to acquire images (one or more pictures or videos). In operation, data generated by the camera circuitry, hereafter “camera data” may be stored at the storage circuitry 8 on the camera device 2 or transmitted to a remote resource for storage thereat. In examples, the camera data may comprise an acquired image, and, in some examples, may comprise the acquired image having a cryptographic operation (e.g. signing or encryption operation) performed thereon. In other examples, the camera data may also include device identifiers for the camera device that acquired the image, or may comprise location data of the camera device at the time of acquiring the image.
In embodiments, the camera device 2 comprises location determination circuitry 12, such that the camera device 2 can determine its position. Such location determination circuitry may comprise global positioning system (GPS) circuitry and/or may comprise localization circuitry such that the camera device can use any location determination technique, for example, the multilateration of radio signals between one or more communications towers. In other examples triangulation or trilateration techniques may be used. In other examples the location determination circuitry 12 may comprise an accelerometer that detects motion and generates location data in accordance with the motion.
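As an illustrative sketch of one such localization technique (not the patent's own implementation), the function below trilaterates a 2-D position from distances to three known points, e.g. communication towers, by linearising the circle equations. The planar small-area approximation and all names are assumptions for illustration.

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Estimate a 2-D position from distances d1..d3 to three known
    points p1..p3, by subtracting circle equations to get a linear
    2x2 system. Planar approximation; illustrative only."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1          # non-zero when the points are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical towers at known positions; device actually at (3, 4).
towers = [(0, 0), (10, 0), (0, 10)]
dists = [math.dist((3, 4), t) for t in towers]
x, y = trilaterate(towers[0], dists[0], towers[1], dists[1], towers[2], dists[2])
```

In practice the distances would come from radio signal timing or strength, with noise, so a least-squares fit over more than three towers would be used; this sketch shows only the exact three-point case.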
The camera device 2 also comprises input/output (I/O) circuitry 13. Such I/O circuitry may comprise a display such as an OLED (organic LED) display or an LCD (liquid crystal display). In examples the I/O circuitry 13 may be a touchscreen such that a user can provide inputs via the touchscreen (e.g. using a finger).
The camera device 2 also comprises power circuitry 14 to power the various circuity and components therein. In examples, the power circuitry 14 may comprise a battery. In another example, the power circuitry 14 may include an energy harvester such as a Wi-Fi energy harvester, which may power the camera device 2 and/or charge the battery.
Figure 1b schematically shows a block diagram of a data processing device 20, hereafter “target device” 20, which may be a device such as a mobile phone, a watch, or glasses etc. as described in Figure 1a.
As depicted in Figure 1b, the target device 20 may have substantially similar circuitry to that of the camera device 2, as indicated by identical feature numbers which are used to describe similar features, although, in some embodiments, the target device 20 may not have all the circuitry of the camera device 2. For example, as will become apparent to a skilled person, the target device may be an electronic tag without camera circuitry 10 or I/O circuitry 13, as indicated by the dashed lines in Figure 1b, and may also have reduced storage capacity and processing power in comparison to the camera device.
Figure 2 schematically shows an example system 100 comprising a target device 20, a remote resource 30 and a plurality of camera devices 2a-2d. Although only one target device is depicted in Figure 2, the system may have a plurality of such target devices as will become apparent to a person skilled in the art.
The remote resource 30 may comprise one or more services (e.g. cloud services), platforms, computing infrastructure etc. The remote resource 30 may be located on a different network to the camera devices and target devices (e.g. on the internet).
The target device 20 may communicate with the camera devices 2a-2d directly or indirectly, via the resource 30.
In the present illustrative example, the remote resource 30 comprises an application service, but the remote resource 30 may comprise other services such as management services and registry services, although this list is not exhaustive.
The application service 30 may be used to generate data (e.g. command data, comprising instructions for a device to perform one or more operations), or receive data (e.g. request data, camera data etc.) from, the camera devices and/or target device.
An interested party can access data (e.g. camera data) at the remote resource 30, e.g. via a user interface (UI) on the target device 20, whereby the UI is provided as part of an application running on the target device.
The target device 20 communicates with one or more of the camera devices 2a-2d via remote resource 30, whereby a user transmits request data from the target device 20 to one or more of the camera devices 2a-2d (e.g. via a UI).
As above, the camera devices 2a-2d and/or target devices 20 may be provisioned with authentication data. The authentication data may comprise a public key or certificate for the remote resource 30 or another device or application, whereby the authentication data may be provisioned thereon, for example, as part of a registration process with the remote resource 30.
The communications between the camera devices 2a-2d, the remote resource 30 and/or the target devices 20 may optionally be provided with end-to-end security, such as transport layer security (TLS), datagram transport layer security (DTLS) or secure socket layer (SSL).
A user may want an image to be acquired of a target which may, for example, be a target device, a particular location or an object, although this list is not exhaustive.
The target user generates request data requesting that an image be acquired. Such a request may be made via the target user’s target device 20 (e.g. via a software application running on the target device which a user can interact with via a user interface on the target device or another data processing device).
The target device 20 then transmits the request data to one or more camera devices (e.g. directly or indirectly) via a remote resource 30.
The request data may comprise one or more of: identifier data, cryptographic data, authentication data, location data, parameter data, privacy data, address data and command data although this list is not exhaustive. Furthermore, it will be appreciated that the request data may be transmitted in one or more communications.
In examples the identifier data may comprise a device identifier. Such a device identifier may comprise a GUID or a UUID for the device used to generate the request and/or may comprise a username of the user that generated the request. It will be appreciated that these examples of device identifiers are for illustrative purposes only, and any suitable identifier may be provided.
The cryptographic data may be generated by performing a cryptographic operation on the request data (e.g. a signing and/or encryption operation).
As above, the authentication data may comprise cryptographic keys, certificates, tokens etc. In some examples, the remote resource 30 and/or camera devices receiving the request data can verify the signature and ignore the request data if the signature cannot be verified.
The location data may comprise GPS data, or other suitable location data to enable a receiving device to determine the location of the target.
The parameter data defines the minimum requirements for the desired image (e.g. size, resolution, aperture quality, colour mode (e.g. colour, black and white, sepia etc.), exposure, focal length, frames per second, ISO, red-eye reduction). It will be understood that this list is not exhaustive.
In some examples, the remote resource may maintain a database or registry of the capabilities of all camera devices with which it is in communication, and will not transmit requests to camera devices that cannot meet the minimum requirements as defined in the parameter data; additionally or alternatively, camera devices receiving such requests (e.g. directly from a target device) may ignore them.
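A minimal sketch of such capability filtering at the remote resource, assuming an in-memory registry with illustrative field names (the description does not mandate any particular schema):

```python
def meets_requirements(capabilities, parameter_data):
    """Return True if a camera device's registered capabilities
    satisfy every minimum requirement in the parameter data.
    Numeric fields are treated as minima; others as exact matches."""
    for key, required in parameter_data.items():
        offered = capabilities.get(key)
        if offered is None:
            return False
        if isinstance(required, (int, float)):
            if offered < required:      # e.g. megapixels, frames per second
                return False
        elif offered != required:       # e.g. colour mode
            return False
    return True

# Hypothetical registry of camera devices known to the remote resource.
registry = {
    "camera-2a": {"megapixels": 12, "fps": 60, "colour": "colour"},
    "camera-2b": {"megapixels": 5, "fps": 30, "colour": "colour"},
}
params = {"megapixels": 8, "fps": 30}
eligible = [dev for dev, caps in registry.items()
            if meets_requirements(caps, params)]
# Request data would then be transmitted only to the eligible devices.
```

The same predicate could equally run on a camera device that receives a request directly and decides whether to ignore it.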
The privacy data defines whether the target user wants to remain anonymous to the camera devices to which the request data is transmitted. As will become apparent below, the target user may define the privacy data such that a camera device will automatically acquire an image of a target without alerting a camera user that the picture was acquired, such that the camera user will not know what the target is. Such a setting may be considered to be “private”. In other examples, the target user may define the privacy data such that a camera device indicates when the target is within a display such that the camera user may be capable of identifying the target. Such a setting may be considered to be “public”.
The examples set out for the privacy data are illustrative only, and the user may also define other privacy aspects. For example, the privacy data may prohibit the camera device establishing direct communications with the target device (e.g. such that all acquired images are transmitted to a remote resource). In a further example, the privacy data may instruct the camera devices to delete any acquired images from storage thereon after transmission.
The address data may comprise an identifier for the device(s) or location to which the receiving camera device is to transmit the camera data (e.g. the target device 20) or a service (e.g. the remote resource 30). Such address data may comprise any suitable identifier such as, for example, an IPv4 address, an IPv6 address, a cellular number or an instant messaging identifier, although this list is not exhaustive.
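The request data fields described above can be sketched as a simple payload; every field name and value here is an illustrative assumption rather than a format defined by the application:

```python
import json
import uuid

def build_request_data(target_location, address, parameters=None,
                       privacy="private", identifier=None):
    """Assemble a request-data payload carrying the fields described
    above. All field names are hypothetical; the request data may in
    practice be transmitted in one or more communications."""
    return {
        "identifier": identifier or str(uuid.uuid4()),  # UUID device identifier
        "location": target_location,                    # e.g. GPS fix of the target
        "parameters": parameters or {},                 # minimum image requirements
        "privacy": privacy,                             # "private" or "public"
        "address": address,                             # where to send the camera data
    }

request = build_request_data(
    target_location={"lat": 51.5074, "lon": -0.1278},   # illustrative coordinates
    address="ipv6:2001:db8::1",                         # illustrative address data
    parameters={"megapixels": 8, "colour": "colour"},
)
payload = json.dumps(request)   # serialised for transmission
```

Cryptographic data (e.g. a signature over this payload) and authentication data would be attached alongside, as described above.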
In an illustrative example, the camera device will automatically acquire an image of the target on receiving the request data. In another illustrative example, the camera device 2 can determine the location of the target from the request data, and notify the camera user of the request. Such a notification may take the form of a sensory output (e.g. sound, vibration, light).
The camera user can then acquire the image of the target user using the camera device 2.
The camera device may establish a communication channel with the target device using address data provided in the received request data such that the camera device can transmit camera data comprising the acquired image to the target device via the established channel. Additionally, or alternatively, the camera device may transmit the camera data via a short range communications channel, which may be initiated by a user (e.g. via NFC by bumping devices, or via RFID by bringing devices into range).
Additionally, or alternatively, the camera device 2 may transmit the camera data to the remote resource 30 for storage thereat, whereby an authorised user can access the acquired image (e.g. via an application such as an image viewer application).
In embodiments, the camera device may provide an image identifier associated with the target in the camera data. Such an image identifier may comprise a device identifier for the target device or an identifier for an authorised user. Additionally, or alternatively, the image identifier may comprise a signature provided on the camera data by the camera device (e.g. by signing the camera data with a cryptographic key received in the request data, such as a public key of the target device or a target user). Such an image identifier may be used to confirm that the camera data relates to the request data (e.g. by comparing the image identifier with a device identifier provided in the request data).
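The signature-based image identifier can be sketched as follows. This sketch substitutes an HMAC over a shared key for the public-key signature the description contemplates, purely to keep the example self-contained; the key name and functions are assumptions.

```python
import hashlib
import hmac

def make_image_identifier(camera_data: bytes, key: bytes) -> str:
    """Tag camera data with a signature derived from a key carried in
    the request data, so the image can be matched to the request.
    HMAC-SHA256 stands in for the public-key signing described."""
    return hmac.new(key, camera_data, hashlib.sha256).hexdigest()

def matches_request(camera_data: bytes, key: bytes, image_id: str) -> bool:
    """Confirm that the camera data relates to the original request."""
    expected = make_image_identifier(camera_data, key)
    return hmac.compare_digest(expected, image_id)

key = b"key-from-request-data"   # assumed to arrive in the request data
image = b"...jpeg bytes..."      # placeholder for the acquired image
tag = make_image_identifier(image, key)
```

On receipt, the target device or remote resource recomputes the identifier and discards camera data whose tag does not verify.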
Figures 3a-3c illustratively show example graphical user interfaces (GUIs) on the camera device 2 with which a user can interact.
In the present illustrative example, the camera device may notify the user, by way of a sensory output, of a received request to acquire an image of a target, which in the present illustrative example is a target device 20.
The GUI can then provide instructions to the camera user to enable the camera user to move the camera device 2 until the target device 20 is correctly captured by the camera. As above, the parameters defining when the target device 20 is correctly captured (e.g. resolution, light exposure etc.) may be set by the target user via the application on the target device 20 and provided as parameter data in the request data, or such parameters may be defined by the remote resource and included in the request data transmitted to the camera devices.
The GUI on the camera device 2 may provide one or more indications 40 to inform the user when the target device is captured by the camera, at which point the user can acquire the image, e.g. by pressing a button 42, or the camera device 2 may automatically acquire the image when the target device 20 is correctly captured.
As illustratively depicted in Figure 3a, the target device 20 is not correctly captured by the camera in that it is outside of the camera’s view, as indicated by the frame indicator 40b, which may be a first colour (e.g. red) when the target device 20 is outside the frame indicator 40b. The text indicator 40a informs the camera user to move, whilst the arrow indicator 40c instructs the camera user to move the camera device 2 such that the target device 20 is within the camera’s view.
As illustratively depicted in Figure 3b, the target device 20 is within the frame but not correctly captured by the camera in that it is too small, as indicated by the frame indicator (40b), which may be a second colour (e.g. orange) when the target device 20 is within the camera’s view, but still not correctly captured by the camera. In such a scenario, the indicators 40a and 40c instruct the camera user to zoom in.
As illustratively depicted in Figure 3c, the target device 20 (a number of which are depicted in Figure 3c) is correctly captured by the camera as shown by the frame indicator (40b), which may be a third colour (e.g. green) when correctly captured. The GUI may inform the camera user when to acquire the image, e.g. by generating an input icon 42 (depicted as a button in Figure 3c) on the GUI for the camera user to press when the target device is correctly captured by the camera, or the camera device 2 may automatically acquire the image when the target device 20 is correctly captured by the camera.
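The red/orange/green behaviour of the frame indicator 40b across Figures 3a-3c can be sketched as a simple classifier over bounding boxes. The box convention and the 20% fill threshold are illustrative assumptions, not values taken from the description.

```python
def frame_state(target_box, frame_box, min_fill=0.2):
    """Classify how the detected target sits in the camera frame,
    mirroring indicator 40b. Boxes are (x, y, width, height); the
    min_fill threshold deciding 'too small' is an assumption."""
    tx, ty, tw, th = target_box
    fx, fy, fw, fh = frame_box
    inside = (tx >= fx and ty >= fy and
              tx + tw <= fx + fw and ty + th <= fy + fh)
    if not inside:
        return "red"       # outside the view: instruct the user to move (Figure 3a)
    if (tw * th) / (fw * fh) < min_fill:
        return "orange"    # in view but too small: instruct the user to zoom (Figure 3b)
    return "green"         # correctly captured: acquire the image (Figure 3c)
```

A camera application would re-run this check per preview frame and either enable the acquire button 42 or trigger automatic acquisition on "green".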
It will be appreciated that the camera user may not know, and does not need to know, who the target user is because there may be multiple potential targets in the frame. In another example when the camera device automatically acquires the image, the camera user may not know when the image is actually captured as in such a case there is no need to inform the camera user that the image was actually acquired. Such functionality means that the target can remain unknown or anonymous to the camera user.
When the image is acquired, the camera device 2 may transmit the camera data to the remote resource (not shown) where it can be stored (e.g. as indicated in the request data). The target user may then access the camera data at the remote resource (e.g. via an application).
Additionally, or alternatively, the camera device 2 transmits the camera data comprising the acquired image directly to the target device 20 or another device (e.g. as indicated in the request data) or via a communications channel initiated by the target user/camera user (e.g. by bumping devices using NFC).
As above the camera device may include an image identifier in the camera data.
As will be appreciated, there is no particular requirement for the camera device 2 to store the acquired image thereon after transmission, and, therefore, such acquired images may be deleted once transmitted therefrom so as not to reduce the storage capacity on the camera devices. The target user or remote resource may also specify (e.g. as part of a command instruction in the request data) that the camera devices delete the acquired images after transmission.
Furthermore, the target user may pay for access to the acquired image. Payment may be made e.g. via an image viewer application. Therefore, when the target user accesses or downloads the camera data, one or more parties (e.g. the application owner, the camera user etc.) may be compensated, financially or otherwise.
In some examples, the camera device 2 may perform a cryptographic operation on the camera data (e.g. signing or encrypting the camera data using a cryptographic key of the authentication data) before transmitting the camera data, and a user may receive the corresponding cryptographic key to decrypt the camera data when payment is made. Such cryptographic keys may be managed by an application service at the remote resource.
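The key management the application service would perform can be sketched as a minimal escrow: the camera device deposits the symmetric key used to encrypt the camera data, and the service releases it to the target user only once payment is recorded. Class, method and identifier names here are all hypothetical.

```python
import secrets

class KeyEscrow:
    """Minimal sketch of an application service managing decryption
    keys for encrypted camera data; release is gated on payment."""

    def __init__(self):
        self._keys = {}    # image identifier -> symmetric key
        self._paid = set()

    def deposit(self, image_id: str) -> bytes:
        """Generate and hold the key the camera device would use to
        encrypt the camera data for this image."""
        key = secrets.token_bytes(32)
        self._keys[image_id] = key
        return key

    def record_payment(self, image_id: str) -> None:
        self._paid.add(image_id)

    def release(self, image_id: str) -> bytes:
        """Hand the key to the target user, but only after payment."""
        if image_id not in self._paid:
            raise PermissionError("payment required before key release")
        return self._keys[image_id]

escrow = KeyEscrow()
k = escrow.deposit("img-001")     # camera device encrypts with k, then uploads
escrow.record_payment("img-001")  # e.g. via the image viewer application
```

The actual encryption of the camera data (and any compensation split between application owner and camera user) is outside this sketch.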
Figures 4a & 4b schematically show an example of a target device 20 and camera devices 2a-2f.
In the present illustrative examples, the target user is a road race runner, and generates a request for one or more images of the target device 20 to be acquired during the race (e.g. for a defined period of time or, for example, until the target user generates a request for no further images to be acquired, or when a threshold number of images have been acquired responsive to the request data e.g. 200).
As the target user runs the race, the target device 20 may periodically send its location data to the remote resource 30 (e.g. every ‘n’ milliseconds, seconds or minutes etc., where ‘n’ is an integer). The remote resource 30 can then transmit the updated location data to other camera devices, which can then determine the location of the target. As above, such location data may comprise GPS data, or other suitable location data.
The remote resource 30 can send request data to all camera devices 2a-2f that come within a distance threshold of the target device (e.g. 1-300 m) as the target user progresses along the route, so as to acquire an image of the target device 20. It will be appreciated that the remote resource 30 may also receive location data from the respective camera devices 2a-2f, which may be registered therewith, so that the remote resource can calculate when the target device 20 comes within a distance threshold of a particular camera device and send the request data thereto.
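The distance-threshold dispatch above can be sketched with a standard haversine great-circle distance over the registered GPS fixes; the registry layout and the 300 m default are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def cameras_in_range(target, cameras, threshold_m=300.0):
    """Select registered camera devices within the distance threshold
    of the target's latest fix, as the remote resource 30 would before
    sending request data. (Registry format is assumed.)"""
    return [dev for dev, (lat, lon) in cameras.items()
            if haversine_m(target[0], target[1], lat, lon) <= threshold_m]

# Hypothetical registered camera fixes and a target fix along the route.
cameras = {"2a": (51.5000, -0.1200), "2b": (51.6000, -0.1200)}
target = (51.5001, -0.1200)   # roughly 11 m from 2a, 11 km from 2b
```

Re-running this selection on each periodic location update yields the set of camera devices to which fresh request data should be sent.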
In Figure 4b, rather than receive request data from the remote resource 30, one or more of the camera devices 2a-2f may receive the request data directly from the target device 20 as the runner progresses along the race route.
For example, the target device 20 may broadcast the request data (e.g. via Bluetooth) as the runner progresses, whereby a camera device receiving the broadcast will be capable of locating the target device from location data provided in request data, and acquire an image thereof.
In examples the camera device may establish two-way communications with the target device using information provided in the request data, such that the target device can provide more accurate location data via the established communications channel. Any acquired image may also be sent to the target device via the established channel and/or may be transmitted to the remote resource.
Figures 5a-5c schematically show examples of a target and camera devices 2 according to further embodiments.
In Figure 5a, the target comprises target device 20 which is associated with a target user in the stand at a football stadium 50.
In the present illustrative examples, the target user generates a request for one or more images of the target device 20 to be acquired (e.g. for a defined period of time or, for example, until the target user generates a request for no further images to be acquired). As above, such a request may be made via the target user’s target device 20 (e.g. via an application running on the target device) or another device, with request data (e.g. location data, command data, address data and/or parameter data) being transmitted to a remote resource 30.
As above, such location data may comprise GPS data, or other suitable location data.
In examples, the remote resource 30 can transmit the request data to all camera devices within a particular distance threshold of the target device (e.g. 1-300 m) or may determine which camera devices are within the stadium 50 and send the request data to those.
The target device 20 may periodically send its location data to the remote resource 30 (e.g. every ‘n’ milliseconds, seconds or minutes etc., where ‘n’ is an integer). The remote resource 30 can then transmit the updated location data to other camera devices, which can then determine the location of the target.
As illustratively shown in Figure 5a, the camera users may include a player or match official 2h, or a manager 2i, with the camera device 2 being a mobile phone, digital camera or the like; and/or the camera device may be a fixed camera 2g on the goalpost or a fixed camera 2j on the stadium structure itself, such as a structural beam or advertising hoarding. Such fixed cameras may be provided on a rotor and may automatically locate the target device 20 and acquire the image(s). In other examples the camera device may be a mobile camera 2k, such as a computer-controlled cable-suspended camera 2k, or may be a computer-controlled drone having a camera thereon (not shown), which may automatically locate and acquire images in response to the request data, or do so under the instructions of a user.
As above, the acquired images from the different cameras may be transmitted to the remote resource (not shown), and stored thereat to be accessed by the target user.
It will be appreciated that, in the illustrative example of Figure 5a, the target user is likely to be in a large group of people in the acquired image(s), and the target user may or may not be readily identifiable in the crowd e.g. when an image is acquired using a camera device 2 at a different side of the stadium from the target device 20. However, the target user may enjoy trying to spot himself/herself in the crowd.
It will also be appreciated that the target user may pay a premium to access images acquired by a match official, a player, a manager or a fixed/mobile camera in the stadium 50.
In Figure 5b, the target device 20 is associated with a target user running a race at a sports track stadium 60.
In the present illustrative example, the target user generates a request for one or more images of the target device 20 to be acquired. As above, such a request may be made via the target user’s target device 20 or another device, with request data being transmitted to a remote resource 30.
In examples, the remote resource 30 can send request data to all camera devices within a particular distance threshold of the target device (e.g. 1-300 m) or may determine which camera devices 2 are within the stadium 60 and send the request data to those.
As above, the target device 20 may periodically send location data to the remote resource 30. The remote resource 30 can then transmit the updated location data to other camera devices in the stadium 60, which can then determine the location of the target as it moves.
As illustratively shown in Figure 5b, the camera users may include spectators, or may be part of the runner’s management team using a camera device. In other examples the camera device 2 may be a fixed camera as part of the structure of the stadium 60 (not shown in Figure 5b), or a mobile camera (not shown in Figure 5b).
As above, the acquired images from the different camera devices 2 may be transmitted to the remote resource (not shown), and stored thereat to be accessed by the target user.
In Figure 5c, a plurality of target devices 20 are associated with respective target users at a concert hall 70.
In the present illustrative examples, the target users generate respective requests for one or more images of the target devices 20 to be acquired. As above, such requests may be made via the target users’ target devices 20 or another device, with request data being transmitted to a remote resource 30.
In examples, the remote resource 30 can send request data to all camera devices 2 within a particular distance threshold of the target device (e.g. 1-300m), or may determine which camera devices 2 are within the concert hall 70 and send the request data only to those.
As above, the target devices 20 may periodically send their location data to the remote resource 30. The remote resource 30 can then transmit the updated location data to other camera devices in the concert hall 70, which can then determine the location of the target.
As illustratively shown in Figure 5c, the camera users may include other concert goers 2 or may be a member of the cast/band 2m. In other examples the camera device may be a fixed camera 21 forming part of the structure of the concert hall 70, or a mobile camera (not shown in Figure 5c).
As above, the acquired images from the different cameras may be transmitted to the remote resource (not shown), and stored thereat to be accessed by the target user.
It will also be appreciated that the target user may pay a premium to access images acquired by a member of the cast/band etc. or a fixed camera in the concert hall 70.
It will be appreciated that, in the illustrative example of Figure 5c, the target user is likely to be in a large group of people in the acquired image, and the target user may or may not be readily identifiable in the crowd. However, the user may enjoy trying to spot himself/herself in the crowd.
In Figure 6, a user generates, via data processing device 82, a request for one or more images to be acquired of a target, which, in the illustrative example of Figure 6, is a target location 80 (depicted by the dashed outline). The user may select the target location on a map via an application running on data processing device 82, such that request data, comprising the location data for the target location 80, is transmitted to the remote resource 30.
The remote resource 30 can send request data to all camera devices within a particular distance threshold of the target 80 (e.g. 1-500m). As above, the camera devices may periodically transmit their respective locations to the remote resource.
Such functionality may be used by any interested party, which may include, for example, the emergency services (police, fire, ambulance), government agencies, school authorities and media agencies.
For example, an alarm at the target location 80 may alert the police to a potential incident at the bank. An interested party (e.g. at a police control room, or a bank manager) may then, via a data processing device 82, send a request to all camera users 2 in the vicinity of the target 80, whereby the camera users acquire images of the target 80 whilst the request is valid.
The camera users can be instructed (e.g. with command instructions) to acquire images of the target 80 itself, and/or of the area around the target (e.g. by being instructed to acquire a panoramic image). Such functionality means that the cameras may acquire images of, for example, a getaway car or an accomplice outside the bank.
As above, the acquired images from the different cameras may be transmitted to the remote resource 30, and stored thereat to be accessed by the target user. Therefore, in the present illustrative example, the police, or another interested party, can then access the images acquired by the cameras.
A camera user may be financially compensated for the one or more acquired images (e.g. if the image leads to a conviction or if used in the media as part of a request for information).
Figure 7 illustratively shows example steps in an example method 100.
At step S101 the method starts.
At step S102, a target user generates, via a data processing device, request data defining a target. The target may, for example, be a target device or a target location, and the request data may comprise one or more of: identifier data, cryptographic data, authentication data, location data, parameter data, privacy data, address data and command data, although this list is not exhaustive.
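The kinds of request data listed above could be gathered into a single structure. The following sketch is illustrative only: the field names, types and defaults are assumptions, since the specification lists categories of data rather than any concrete format.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RequestData:
    """Hypothetical container for the request data generated at step S102."""
    identifier: str                      # identifier data, e.g. for the target or requesting user
    location: tuple                      # location data: (latitude, longitude) of the target
    auth_token: Optional[str] = None     # authentication data
    public_key: Optional[bytes] = None   # cryptographic data
    parameters: dict = field(default_factory=dict)  # parameter data, e.g. minimum resolution
    privacy: dict = field(default_factory=dict)     # privacy data
    address: Optional[str] = None        # address data: destination for acquired images
    command: Optional[str] = None        # command data, e.g. "panoramic"
```

A request might then be built as `RequestData(identifier="target-20", location=(51.5, -0.12), command="panoramic")` and serialised for transmission to the remote resource.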
At step S103 the data processing device transmits the request data to one or more camera devices (e.g. directly, or indirectly via a remote resource), whereby the request data enables the receiving camera devices to identify the location of the target and acquire images thereof. As above, the data processing device may send the request data to camera devices having capabilities to meet the minimum requirements of a user (as may be defined in parameter data).
At step S104 the one or more camera devices process the request data and, in one example, automatically acquire one or more images of the target; in an additional, or alternative, example the camera devices provide, via a display, instructions for the camera user to correctly acquire the one or more images.
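The branching at this step (ignore the request, acquire automatically, or instruct the camera user) might be sketched as below. The capability fields and returned action strings are hypothetical, introduced only for illustration.

```python
def handle_request(camera, request):
    """Illustrative sketch of a camera device acting on received request data.

    Returns a string naming the action taken."""
    # A camera may ignore a request it cannot satisfy, per the parameter data.
    min_res = request.get("parameters", {}).get("min_megapixels", 0)
    if camera["megapixels"] < min_res:
        return "ignored"
    if camera["supports_auto_acquire"]:
        # Automatically acquire one or more images of the target.
        return "auto-acquired"
    # Otherwise, display instructions so the camera user can frame the target.
    return "instructed user via display"
```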
At step S105 the camera devices transmit the acquired image(s) to a remote resource for storage thereat to be accessed by the target user. The images may be accessed via an application, e.g. an image viewer application, running on a data processing device (e.g. the target device or a computer terminal). The target user may purchase selected images of the one or more acquired images, such that one or more parties (e.g. the application owner, the camera user etc.) may be compensated, financially or otherwise.
Whilst the one or more acquired images may be stored at the camera device on which they are acquired, in some embodiments the acquired image(s) are deleted once transmitted to the remote resource. The destination to which the acquired images are sent may be defined in the request data (e.g. as address data). In other examples, the camera devices may be provisioned with the address data (e.g. on registration with the remote resource).
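The transmit-then-delete behaviour described above can be illustrated with local file operations standing in for the network transfer; the function name and directory layout are assumptions, not part of the specification.

```python
import os
import shutil

def upload_and_delete(image_path, outbox_dir):
    """Illustrative sketch: 'transmit' an acquired image to the destination
    given in the request's address data, then delete the local copy, as in
    embodiments where images are not retained on the camera device."""
    os.makedirs(outbox_dir, exist_ok=True)
    dest = os.path.join(outbox_dir, os.path.basename(image_path))
    shutil.copy(image_path, dest)   # stands in for the network transfer
    os.remove(image_path)           # local copy deleted once transmitted
    return dest
```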
At step S106, the data processing device may send updated location information (e.g. periodically and/or when the target moves), such that the camera devices can determine the updated location of the target and acquire images thereof.
At step S107 the target user, or another authorised user, accesses the acquired images. Such access may be provided by the remote resource via an application running on a data processing device.
At step S108, the method ends.
Whilst the present example describes transmitting the acquired images to a remote resource, the acquired images can, additionally, or alternatively, be transmitted to a data processing device (e.g. the target device), for access thereat.
A person skilled in the art will, on reading the specification, appreciate that the present techniques may be used in many applications.
For example, rather than taking a “selfie”, a target user can identify camera users in the vicinity (e.g. at a beach, a concert or a beauty spot) on a map on the target user’s device (e.g. via a software application), and send a request to specific camera users to acquire images of the target user. The user can then access the pictures via a remote resource. Additionally, or alternatively, a communications channel can be established between the devices and the acquired images transferred directly to the target user’s device (e.g. via BLE, NFC etc.).
In a further illustrative example, the target may be coordinates for an area where a missing person was last seen, and camera users at the location may acquire images at the location on receiving request data defining the location as a target. In such an example, the distance threshold from the target may be expanded (e.g. to 2km) to increase the number of camera users acquiring images, to increase the likelihood of capturing the missing person.
In other examples, the target may comprise an electronic tag without a display, and may be provided on a person (e.g. on clothing) or on an animal (e.g. a saddle on a horse, or a dog collar) so one or more camera users can acquire images thereof.
In an illustrative example, a parent could place a target tag in a child’s clothing, whereby the parent can request that camera users acquire images of the target electronic tag when detected by their cameras.
It will be appreciated that the acquired image may comprise one or more pictures or one or more videos.
In the examples above, a duration for which the camera devices should acquire images may be defined in the request data. In other examples the camera devices may only acquire a threshold number of images (e.g. 10 pictures and/or 10 videos) for any particular target. In other examples, the camera devices may continue to acquire images until the target device can no longer be captured (e.g. it moves out of range) or until a request to cease acquiring images is received. Such a request to cease may be generated by the target user and/or the remote resource.
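The stop conditions above (request duration, image-count threshold, target out of range, and an explicit cease request) could be combined as in this illustrative sketch; the state fields and default limits are assumptions introduced for the example.

```python
def should_stop(state, max_images=10, max_duration_s=60):
    """Return True when a camera device should cease acquiring images.

    Combines the stop conditions described in the examples; field names and
    default limits are illustrative only."""
    return (state["images_taken"] >= max_images       # threshold number of images reached
            or state["elapsed_s"] >= max_duration_s   # request duration expired
            or not state["target_in_range"]           # target can no longer be captured
            or state["cease_requested"])              # cease request from user/remote resource
```

A camera device's acquisition loop might evaluate such a predicate after each image, stopping as soon as any condition holds.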
As will be appreciated by one skilled in the art, aspects of the present technology may be embodied as a system, method or computer program product. Accordingly, aspects of the present technology may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
Embodiments of the present techniques further provide a non-transitory data carrier carrying code which, when implemented on a processor, causes the processor to carry out the methods described herein.
The techniques further provide processor control code to implement the above-described methods, for example on a general purpose computer system or on a digital signal processor (DSP). The techniques also provide a carrier carrying processor control code to, when running, implement any of the above methods, in particular on a non-transitory data carrier or on a non-transitory computer-readable medium such as a disk, microprocessor, CD- or DVD-ROM, programmed memory such as read-only memory (firmware), or on a data carrier such as an optical or electrical signal carrier. The code may be provided on a (non-transitory) carrier such as a disk, a microprocessor, CD- or DVD-ROM, programmed memory such as non-volatile memory (e.g. Flash) or read-only memory (firmware). Code (and/or data) to implement embodiments of the techniques may comprise source, object or executable code in a conventional programming language (interpreted or compiled) such as C, or assembly code, code for setting up or controlling an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array), or code for a hardware description language such as Verilog™ or VHDL (Very high speed integrated circuit Hardware Description Language). As the skilled person will appreciate, such code and/or data may be distributed between a plurality of coupled components in communication with one another. The techniques may comprise a controller which includes a microprocessor, working memory and program memory coupled to one or more of the components of the system.
Computer program code for carrying out operations for the above-described techniques may be written in any combination of one or more programming languages, including object oriented programming languages and conventional procedural programming languages. Code components may be embodied as procedures, methods or the like, and may comprise subcomponents which may take the form of instructions or sequences of instructions at any of the levels of abstraction, from the direct machine instructions of a native instruction set to high-level compiled or interpreted language constructs.
It will also be clear to one of skill in the art that all or part of a logical method according to the preferred embodiments of the present techniques may suitably be embodied in a logic apparatus comprising logic elements to perform the steps of the above-described methods, and that such logic elements may comprise components such as logic gates in, for example, a programmable logic array or application-specific integrated circuit. Such a logic arrangement may further be embodied in enabling elements for temporarily or permanently establishing logic structures in such an array or circuit using, for example, a virtual hardware descriptor language, which may be stored and transmitted using fixed or transmittable carrier media.
In an embodiment, the present techniques may be realised in the form of a data carrier having functional data thereon, said functional data comprising functional computer data structures to, when loaded into a computer system or network and operated upon thereby, enable said computer system to perform all the steps of the above-described method.
As used in this specification and claims, the terms “for example,” “for instance,” “such as,” and “like,” and the verbs “comprising,” “having,” “including,” and their other verb forms, when used in conjunction with a listing of one or more components or other items, are each to be construed as open-ended, meaning that the listing is not to be considered as excluding other, additional components or items. Other terms are to be construed using their broadest reasonable meaning unless they are used in a context that requires a different interpretation.
In the preceding description, various embodiments of claimed subject matter have been described. For purposes of explanation, specifics, such as amounts, systems and/or configurations, as examples, were set forth. In other instances, well-known features were omitted and/or simplified so as not to obscure claimed subject matter. While certain features have been illustrated and/or described herein, many modifications, substitutions, changes and/or equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all modifications and/or changes as fall within claimed subject matter.

Claims (25)

  1. A system comprising:
    a data processing device; and one or more camera devices remote from the data processing device;
    wherein the data processing device is configured to transmit request data to the one or more camera devices, the request data specifying a target; and wherein the one or more camera devices are configured to acquire an image of the target, and to transmit the acquired image therefrom to be accessed by a user via the data processing device.
  2. The system according to claim 1, the system comprising a remote resource.
  3. The system according to claim 1 or claim 2, wherein request data is transmitted directly to the one or more camera devices.
  4. The system according to claim 1 or claim 2, wherein the request data is transmitted indirectly to the one or more camera devices.
  5. The system according to any of claims 1 to 4, wherein the target comprises one or more of:
    a data processing device, an electronic tag and a specified location.
  6. The system according to any of claims 2 to 5, wherein the acquired images are transmitted to the remote resource.
  7. The system according to any preceding claim, wherein the camera device automatically acquires the image.
  8. The system according to any preceding claim, wherein the camera device provides an instruction to a user to acquire the image.
  9. The system according to any preceding claim, wherein the one or more camera devices comprise one or more of: a data processing device, a fixed camera and a mobile camera.
  10. The system according to any preceding claim, wherein the data processing device transmits location data to one or more of: the remote resource and the one or more camera devices.
  11. The system according to any preceding claim, wherein the image comprises a picture or video.
  12. The system according to any preceding claim wherein the request data comprises one or more of: identifier data, cryptographic data, authentication data, location data, parameter data, privacy data, address data and command data.
  13. A method of acquiring an image of a target comprising:
    transmitting, from a first device to a second device, request data defining the target;
    receiving, at the second device, the request data;
    acquiring, at the second device, an image of the target;
    transmitting, from the second device to the first device or to a remote resource, camera data comprising the image for access by a user.
  14. The method according to claim 13, wherein the request data comprises parameter data defining requirements for any acquired images.
  15. The method according to claim 13 or 14, further comprising: providing, on a display at the second device, instructions for a user to correctly capture the target.
  16. The method according to any of claims 13 to 15, wherein acquiring the image comprises:
    automatically acquiring the image.
  17. The method according to any of claims 13 to 16, further comprising:
    transmitting location data from the first device to the second device, directly or indirectly.
  18. The method according to any of claims 13 to 16, further comprising:
    performing, at the second device, a cryptographic operation on the acquired image.
  19. The method according to any of claims 13 to 18, further comprising:
    providing an image identifier on the camera data.
  20. The method according to any of claims 13 to 19, further comprising:
    storing, at the remote resource, the camera data for access by a user.
  21. The method according to any of claims 14 to 20, further comprising:
    ignoring, at the camera device, the request data in response to the parameter data.
  22. A method of acquiring an image of a target comprising:
    receiving, at a second device, request data identifying a target;
    acquiring, at the second device, an image of the target;
    transmitting, from the second device, to a further device or a remote resource, camera data comprising the acquired image for access by a user.
  23. A device operable in a system and having components adapted to perform the steps of the method of any of claims 13 to 22.
  24. A resource operable in a system and having components to perform the steps of the method of any of claims 13 to 21.
  25. A computer program product comprising computer-program code tangibly stored on a computer-readable medium, the computer program code executable by a computer system for performing the steps of the method of any of claims 13 to 22.
GB1708890.7A 2017-06-04 2017-06-04 Methods, systems and devices for image acquisition Withdrawn GB2563087A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1708890.7A GB2563087A (en) 2017-06-04 2017-06-04 Methods, systems and devices for image acquisition
GB1709411.1A GB2563088A (en) 2017-06-04 2017-06-13 Methods, systems and devices for accessing images

Publications (2)

Publication Number Publication Date
GB201708890D0 GB201708890D0 (en) 2017-07-19
GB2563087A true GB2563087A (en) 2018-12-05

Family

ID=59349882

Family Applications (2)

Application Number Title Priority Date Filing Date
GB1708890.7A Withdrawn GB2563087A (en) 2017-06-04 2017-06-04 Methods, systems and devices for image acquisition
GB1709411.1A Withdrawn GB2563088A (en) 2017-06-04 2017-06-13 Methods, systems and devices for accessing images

Country Status (1)

Country Link
GB (2) GB2563087A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7777783B1 (en) * 2007-03-23 2010-08-17 Proximex Corporation Multi-video navigation
WO2011136419A1 (en) * 2010-04-30 2011-11-03 (주)아이티엑스시큐리티 Dvr and method for reproducing image thereby
EP2495970A1 (en) * 2009-10-27 2012-09-05 Panasonic Corporation Display image switching device and display method
GB2493580A (en) * 2011-08-08 2013-02-13 Vision Semantics Ltd Method of searching for a target within video data
WO2014119991A1 (en) * 2013-01-30 2014-08-07 Mimos Berhad Directing steerable camera with user bias tracking

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1542186A1 (en) * 2003-12-12 2005-06-15 Chao-Hung Chang Active searching and identifying system and method
WO2011136418A1 (en) * 2010-04-30 2011-11-03 (주)아이티엑스시큐리티 Dvr and method for monitoring image thereof
TWI439134B (en) * 2010-10-25 2014-05-21 Hon Hai Prec Ind Co Ltd 3d digital image monitor system and method
US9111143B2 (en) * 2013-09-27 2015-08-18 At&T Mobility Ii Llc Method and apparatus for image collection and analysis

Also Published As

Publication number Publication date
GB201708890D0 (en) 2017-07-19
GB2563088A (en) 2018-12-05
GB201709411D0 (en) 2017-07-26

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)