WO2016019489A1 - Method, apparatus, computer program and system - Google Patents


Info

Publication number
WO2016019489A1
WO2016019489A1 (application PCT/CN2014/083617)
Authority
WO
WIPO (PCT)
Prior art keywords
image
image capture
user
viewpoint
processor
Prior art date
Application number
PCT/CN2014/083617
Other languages
French (fr)
Inventor
Xiaoping Li
Original Assignee
Nokia Technologies Oy
Navteq (Shanghai) Trading Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy, Navteq (Shanghai) Trading Co., Ltd. filed Critical Nokia Technologies Oy
Priority to PCT/CN2014/083617 priority Critical patent/WO2016019489A1/en
Publication of WO2016019489A1 publication Critical patent/WO2016019489A1/en

Classifications

    • H — ELECTRICITY › H04 — ELECTRIC COMMUNICATION TECHNIQUE › H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/183 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a single remote source
    • H04N 21/2187 — Live feed (selective content distribution; source of audio or video content on the server side)
    • H04N 21/47202 — End-user interface for requesting content on demand, e.g. video on demand
    • H04N 21/6587 — Control parameters, e.g. trick play commands, viewpoint selection (transmission by the client directed to the server)
    • H04N 23/63 — Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/66 — Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N 23/695 — Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/90 — Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • TECHNOLOGICAL FIELD
  • Examples of the present disclosure relate to a method, apparatus, computer program and system. In particular, though without prejudice to the foregoing, certain examples relate to a method, apparatus, computer program and system for requesting a change in an image capture viewpoint.
  • BACKGROUND
  • Image capture devices are well known. However, conventional image capture devices and systems are not always optimal: typically, a user wishing to capture an image with an image capture device must be physically present in order to manually control the device and capture an image of a desired scene. A user is thus limited to capturing images of his or her own surrounding area. Certain examples of the present disclosure seek to provide an improved method, apparatus and computer program for remotely capturing an image.
  • the listing or discussion of any prior-published document or any background in this specification should not necessarily be taken as an acknowledgement that the document or background is part of the state of the art or is common general knowledge.
  • One or more aspects/examples of the present disclosure may or may not address one or more of the background issues.
  • a method comprising causing, at least in part, actions that result in: sending, from a second device to a first device, an instruction to change an image capture viewpoint of the first device;
  • the instruction is configured to cause the first device to generate a user identifiable indication for the requested change of the image capture viewpoint.
  • an apparatus comprising means configured to enable the apparatus at least to perform one or more of the first and second methods mentioned above. According to at least some but not necessarily all examples of the disclosure there is provided an apparatus comprising:
  • At least one memory including computer program code
  • the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform one or more of the first and second methods mentioned above.
  • the above apparatuses may be comprised in a chipset, module or device.
  • an apparatus comprising means configured to enable the apparatus at least to perform the first method mentioned above;
  • an apparatus comprising means configured to enable the apparatus at least to perform the second method mentioned above.
  • non-transitory computer readable medium encoded with instructions that, when performed by at least one processor, causes at least one or more of the first and second methods mentioned above to be performed.
  • Figure 1 schematically illustrates a method for a first apparatus
  • Figure 2 schematically illustrates a method for a second apparatus
  • Figure 3 schematically illustrates a flowchart of a system comprising a first apparatus and a second apparatus
  • FIGS 4a and 4b schematically illustrate use of examples of the present disclosure
  • Figure 5 schematically illustrates an apparatus
  • Figure 6 schematically illustrates a device.
  • Figure 1 schematically illustrates, according to one aspect of the present disclosure, a method 100 for performance on a first device 401, the method comprising causing, at least in part, actions that result in:
  • receiving 101, at the first device 401 from a second device 402, a request to change an image capture viewpoint of the first device 401; and
  • generating 102 at the first device 401, responsive to the received request, a user identifiable indication 404 for indicating to a user 405 of the first device 401 the requested change in the image capture viewpoint.
  • the figures also schematically illustrate, according to another aspect of the present disclosure, a method 200 for performance on a second device 402, the method comprising causing, at least in part, actions that result in:
  • the term "first image capture device" is generic and encompasses, for example, any device having image capturing functionality such as an image sensor or camera, as well as camera-enabled portable handheld electronic devices such as mobile phones and tablets.
  • the image capture viewpoint may, for example, correspond to the viewpoint at which an image is captured by the first image capture device. This may correspond to at least one of: position, orientation, direction, tilt or angle of the image capture device. Furthermore the image capture viewpoint may correspond to at least one of: field of view, angle of view, perspective and zoom level of the image captured by the image capture device. In certain examples, the image capture viewpoint may correspond to a viewfinder view of the image capture device.
  • the user identifiable indication for indicating to a user of the first image capture device the requested change in the image capture viewpoint may correspond to visual, aural or haptic indicators configured to indicate a requested change in the image capturing viewpoint.
  • the indicators may correspond to a directional arrow displayed on a display of the first image capture device, indicating a request for the device to be tilted/moved in a particular direction or for a zoom level to be adjusted so as to alter the image capture device's image capture viewpoint.
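As an illustrative, non-limiting sketch of how a requested viewpoint change might be mapped to such an indicator, consider the following Python fragment; the function name, request fields and sign conventions are hypothetical, not taken from the disclosure:

```python
def indicator_for_request(request):
    """Return a simple textual arrow for a requested viewpoint change.

    `request` is a hypothetical dict such as {"pan": 15.0, "tilt": -5.0},
    where positive pan means "turn right" and positive tilt means "tilt up".
    """
    arrows = []
    if request.get("pan", 0) > 0:
        arrows.append("→")
    elif request.get("pan", 0) < 0:
        arrows.append("←")
    if request.get("tilt", 0) > 0:
        arrows.append("↑")
    elif request.get("tilt", 0) < 0:
        arrows.append("↓")
    zoom = request.get("zoom", 0)
    if zoom > 0:
        arrows.append("zoom in")
    elif zoom < 0:
        arrows.append("zoom out")
    # An empty request simply tells the first user to keep the framing.
    return " ".join(arrows) if arrows else "hold steady"
```

In practice the indicator would be drawn on the first device's display (or rendered as audio or haptics), but the mapping step is the same.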
  • Examples of the present disclosure enable a second user of a second device, which is remote from a first image capture device and a first user of the same, to indirectly and remotely control the first image capture device.
  • Examples enable a request for adjustment of an image capture viewpoint of the first image capture device to be sent from the second device to the first image capture device which causes user identifiable indications to be generated and presented to the first user for indicating to the first user the requested change in the image capture viewpoint.
  • examples provide an improved method, apparatus and system for remote image capture that enables a remote second user to request a change in the image capture viewpoint.
  • the first user of the first device is made aware of the request via the generated and presented user identifiable indication and may thus make the requested change to the image capture viewpoint, for example by the first user manually moving the image capture device in a manner indicated.
  • the second user of the second device can send a control signal to the first image capture device to capture an image having the desired image capture viewpoint. Once this has been done, the captured image may be sent to the second device.
  • such further examples enable the second user of the second device to remotely control a remote image capture device by remotely (indirectly) controlling the image capture viewpoint of the image capture device, with the assistance of the first user via the user identifiable indications, and also remotely controlling the image capture device to capture an image having the desired image capture viewpoint and to send the same to the second device.
  • Figure 1 schematically illustrates a method 100 according to an example of the present disclosure.
  • Figure 1 shows a method 100 which may be performed by an apparatus 500 (as shown in Figure 5) which may be comprised in a first image capture device 401 (as shown in Figure 4).
  • a request to change an image capture viewpoint of a first image capture device is received from a second device 402.
  • the request to change an image capture viewpoint may comprise requesting a change in one or more of the: position, orientation, direction, tilt, angle and zoom level of the image capture device, as well as the field/angle of view or perspective of the image captured by the image capture device.
  • it will be appreciated that the image capture could comprise video capture and that the image captured by the image capture device may be one or more images of a video sequence or a video stream, for example corresponding to the image capture viewpoint of a viewfinder of the image capture device.
  • the request may be received via a receiver or antenna of the first device.
  • a user identifiable indication 404 is generated for indicating to a first user 405 of the first device 401 a requested change in the image capture viewpoint 403.
  • the user identifiable indication may be a visual, aural or haptic user identifiable indication that indicates a requested change of viewpoint. This may in certain examples correspond to the generation of one or more visual indications, such as one or more directional arrows, displayed on a display of the first device. Further, this may in certain examples correspond additionally to the generation of one or more audio indications giving one or more directional prompts.
  • Figure 2 schematically illustrates a method 200 which may be performed by an apparatus 500 which may be comprised in a second device 402 according to an example of the present disclosure.
  • an instruction/request to change an image capture viewpoint of a first device is sent from the second device to the first device.
  • the instruction/request is configured such that, upon receipt at the first device, the instruction/request triggers/causes the first device to generate a user identifiable indication for indicating to a user of the first device the requested change of the image capture viewpoint.
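The instruction/request of this kind could, for example, be carried as a small structured message from the second device to the first. A minimal sketch, assuming a hypothetical JSON payload whose field names are illustrative only:

```python
import json

def encode_viewpoint_request(pan_deg, tilt_deg, zoom_step):
    # Serialise the requested change as a small JSON message that the
    # first device can decode and turn into an on-screen indication.
    return json.dumps({
        "type": "viewpoint_change_request",
        "pan_deg": pan_deg,
        "tilt_deg": tilt_deg,
        "zoom_step": zoom_step,
    })

def decode_viewpoint_request(payload):
    # On the first device: parse the message and check its type before
    # generating the user identifiable indication.
    msg = json.loads(payload)
    if msg.get("type") != "viewpoint_change_request":
        raise ValueError("unexpected message type")
    return msg
```

The actual wire format in a deployed system (binary, protobuf, proprietary) is not specified by the disclosure; JSON is used here purely for readability.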
  • the methods of Figures 1 and 2 enable a second user of a second device, remote from a first image capture device, to remotely control the image capture viewpoint of the first image capture device (albeit indirectly, via the first user).
  • this allows the remote user to frame a desired shot remotely without needing specialist robotic hardware/actuators to remotely control movement of the first image capture device.
  • the second device 402 acts as a 'master' device to the 'slave' first image capture device 401.
  • Figure 3 schematically illustrates a flowchart of a system according to an example of the present disclosure.
  • Method blocks performed by the first image capture device are set out on the right hand side of the figure, whilst method blocks for the second device (the “master” device) are set out on the left hand side.
  • an image or a video for example an image file and/or a video file, having a particular image/video capture viewpoint, is captured by a first image/video capture device.
  • the captured image/video is sent in block 302 by the first device and is received, in block 303, by the second device.
  • the transmission of the captured image/video may be via any suitable communication network(s), for example a cellular communication network or other wireless communication network.
  • the received image/video captured by the first imaging capturing device is displayed on a display of the second device.
  • upon viewing the captured image/video (having a particular imaging viewpoint) on the display of the second device, a second user of the second device can then decide whether or not the image/video is appropriately framed, or whether the second user wishes to change the image/video viewpoint, i.e. to adjust the aiming or zoom level of the first image/video capture device.
  • the user input to request a change of the image/video capture viewpoint is received at the second device.
  • the user input for requesting a change of the image/video capture viewpoint of the first image/video capture device may comprise detecting movement of the second device.
  • the second user of the second device may change the orientation or direction of the second device by manually manipulating the second device himself or herself.
  • a change in the movement may be detected, for example detecting a change in the orientation, yaw, pitch, roll, tilt or angle of direction of the device via appropriate sensors of the second device. Additionally, sensors may be able to detect the direction and magnitude of such movements.
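A sketch of how detected movement of the second device might be turned into a viewpoint-change request; the (yaw, pitch) orientation samples in degrees and the jitter threshold are illustrative assumptions, not part of the disclosure:

```python
def movement_to_request(before, after, threshold_deg=2.0):
    """Derive a viewpoint-change request from two orientation samples.

    `before` and `after` are hypothetical (yaw, pitch) tuples in degrees
    read from the second device's orientation sensors; small jitters
    below the threshold are ignored so that only deliberate movements
    produce a request.
    """
    d_yaw = after[0] - before[0]
    d_pitch = after[1] - before[1]
    request = {}
    if abs(d_yaw) >= threshold_deg:
        request["pan"] = d_yaw      # sign carries the direction,
    if abs(d_pitch) >= threshold_deg:
        request["tilt"] = d_pitch   # magnitude carries "how far"
    return request
```

This captures the point made above that the sensors can report both the direction and the magnitude of the second user's movement.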
  • user input for requesting a change of the image/video capture viewpoint of the first image/video capture device may comprise the second user of the second device actuating a user interface element of the second device, for example a button or key of a user input device, or effecting a gesture on a touch screen of the second device.
  • a user may provide an indication as to a desire to change the image/video capture viewpoint to the left by performing a swiping gesture on the image/video displayed on the second device to the left which is detected.
  • the second device may itself have zoom controls, actuation of which by the second user may be used to provide the user input of block 305 to request a change in the zoom level.
  • Information concerning the requested change of image/video capture viewpoint is sent in the request of block 201.
  • Other forms of user input for requesting a change in the image/video capture viewpoint may also be provided, such as receiving audio input from the second user which is used in forming the request.
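The various touch inputs described above could be normalised into the same request structure. A hedged sketch, with a hypothetical gesture vocabulary and an invented pixel-to-degree scale:

```python
def gesture_to_request(gesture, distance_px, px_per_degree=20.0):
    """Map a detected touch gesture on the displayed image to a request.

    Per the example above, a rightward swipe on the image asks the first
    device to be moved to the right; the swipe length sets the requested
    magnitude. The gesture names and the 20 px/degree scale are
    assumptions for illustration.
    """
    degrees = distance_px / px_per_degree
    mapping = {
        "swipe_right": {"pan": degrees},
        "swipe_left": {"pan": -degrees},
        "swipe_up": {"tilt": degrees},
        "swipe_down": {"tilt": -degrees},
        "pinch_out": {"zoom": degrees},   # zoom controls, block 305
        "pinch_in": {"zoom": -degrees},
    }
    return mapping.get(gesture, {})       # unrecognised input: no request
```
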
  • the request is received at the first device and in block 102, a user identifiable indication is generated at the first device, responsive to the received request.
  • the user identifiable indication is configured so as to indicate to a user of the first device the requested change in the image/video capture viewpoint.
  • the user identifiable indication may comprise visual, aural or haptic cues. For example, displaying a visual indication representative of the requested change in the image/video capture viewpoint on a display of the first device.
  • one or more arrows pointing in particular direction(s) may be displayed indicative of a requested direction of movement of the first image capture device.
  • an indication of a requested magnitude of the movement of the first image capture device may also be provided, for example related to an attribute of the displayed arrow such as its size, colour or brightness.
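One illustrative way to scale such an arrow's attributes with the requested magnitude; the pixel sizes, brightness range and 45-degree cap are invented for the example:

```python
def arrow_attributes(magnitude_deg, max_deg=45.0):
    # Scale the displayed arrow's length and brightness with the
    # requested magnitude of movement, clamping at an assumed maximum.
    frac = min(abs(magnitude_deg) / max_deg, 1.0)
    size = int(16 + frac * 48)                 # arrow length in pixels
    brightness = round(0.4 + 0.6 * frac, 2)    # 0.4 (dim) .. 1.0 (full)
    return {"size_px": size, "brightness": brightness}
```

A large requested movement thus produces a long, bright arrow, while a small correction produces a short, dim one.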
  • Other means for providing user identifiable indication for indicating a requested change of the image capture viewpoint may also be considered, such as generating text or audio to indicate a desired change in the image capture viewpoint.
  • the first user of the image/video capture device is informed of a desired requested change in the image/video capture viewpoint.
  • the first user may thus manually manipulate the first image/video capture device himself or herself, for example aiming, panning or tilting the image/video capture device in the manner directed.
  • a further image/video may be captured having the revised image/video capture viewpoint which is sent to the second device for display thereon.
  • the method blocks 301-305, 201, 101, 102 and 306 may be repeated until the second user is content with the image/video capture viewpoint of the displayed image/video.
  • the flowchart proceeds via arrow 308 to block 309, in which an input from the second user of the second device is received for controlling the first device to capture an image/video. Responsive to this, a control signal is generated, and in block 310 the control signal is sent to the first device for controlling the first device to capture a further image/video. In block 311 the control signal is received and in block 312, responsive to receipt of the control signal, the first device is caused to be controlled so as to capture a further image/video.
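The control-signal handling on the first device (blocks 311-312) might be sketched as follows; `StubCamera` and the message schema are hypothetical stand-ins, not part of the disclosure:

```python
class StubCamera:
    # Minimal stand-in for the first device's camera hardware,
    # used only to make the sketch self-contained.
    def capture(self, high_resolution=False):
        return {"resolution": "full" if high_resolution else "preview"}

def handle_control_signal(signal, camera):
    # Block 311/312: a received "capture" control signal triggers a
    # full-quality capture on the first device; the returned image is
    # what block 313 sends back to the second device.
    if signal.get("type") == "capture":
        return camera.capture(high_resolution=True)
    return None   # other signal types are ignored in this sketch
```
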
  • the image/video capture of block 312 may differ from the image/video capture in block 301. Firstly the image/video capture in block 301 may correspond to a real-time feed of streaming video footage captured by the first image/video capturing device.
  • Such video footage, or sample images/videos may be compressed or down sampled prior to transmission to the second device to reduce bandwidth requirements of the sending in block 302.
  • the image/video captured in block 312 may be at a higher resolution than that of 301 and may further involve optimisation of the image/video capturing process, e.g. via focus, ISO, flash, shutter speed adjustments.
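The bandwidth saving of down-sampling the viewfinder stream (block 302) relative to the full-resolution capture (block 312) can be illustrated with a small calculation; the down-sampling factor and 3-bytes-per-pixel format are assumptions:

```python
def frame_bytes(width, height, bytes_per_pixel=3):
    # Uncompressed size of one frame, assuming e.g. 24-bit RGB.
    return width * height * bytes_per_pixel

def preview_size(full_w, full_h, factor=4):
    # Down-sample the live viewfinder stream by a fixed factor before
    # sending it to the second device; the final capture is taken at
    # full resolution on the first device.
    return full_w // factor, full_h // factor
```

With an assumed 4x linear down-sampling, each preview frame carries 1/16th of the data of a full-resolution frame, before any additional compression.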
  • control signals may also be sent to automatically control functionality of the first image/video capture device, for example controlling: zoom, focus, ISO, flash and shutter speed levels.
  • the captured further image/video is sent from the first device to the second device.
  • the sent captured further image/video is received at the second device.
  • the received captured image/video is stored on the second device in block 315.
  • the storage at the first device of the captured further image/video may be prevented such that only the second device has a copy of the captured image/video.
  • the flowchart of Figure 3 represents one possible scenario among others.
  • the order of the blocks shown is not absolutely required, so in principle, the various blocks can be performed out of order. Not all the blocks are essential. In certain examples one or more blocks may be performed in a different order or overlapping in time, in series or in parallel one or more blocks may be omitted or added or changed in some combination of ways.
  • communication such as voice or text communication
  • the second user may provide the user identifiable indications in the form of voice or text instructions/guidance as to the images/videos the second user is desirous of capturing, e.g. what the image/video is to be of, how it is to be framed/composed, and so on.
  • the first user can take photographs as instructed, which are automatically sent to the second device upon capture (and optionally which are prevented from being stored locally on the first image/video capture device itself).
  • Figures 4a and 4b schematically illustrate an example of the present disclosure in use.
  • a second user 406 of second device 402 wishes to take some pictures of the Great Wall of China (figuratively illustrated as 400). However the second user may well not be at the Great Wall and may not even be in China.
  • the second user downloads and installs on his device 402 a remote image capture application and registers as a member of a remote image capture service.
  • the application may enable the user to search for other registered members of the service, for example by searching for one or more members based on:
  • One or more servers may keep track of the status and location of members of the service and store member details such as details of a member's image capture device, such as make and model number of the portable electronic wireless communications device comprising image capturing functionality.
  • the server may monitor the members': current location; and whether or not they are currently available for taking part in a remote camera control operation.
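The server-side member lookup described above might, purely as a sketch, look like the following; the member schema, flat-earth distance approximation and search radius are all illustrative assumptions:

```python
def find_available_members(members, location, radius_km=1.0):
    """Filter a member registry for users near a landmark who are
    currently willing to lend their camera (hypothetical schema)."""
    def close_enough(a, b):
        # Crude flat-earth distance from (lat, lon) pairs in degrees;
        # adequate at the small radii used here (~111 km per degree).
        dx = (a[0] - b[0]) * 111.0
        dy = (a[1] - b[1]) * 111.0
        return (dx * dx + dy * dy) ** 0.5 <= radius_km
    return [m for m in members
            if m["available"] and close_enough(m["location"], location)]
```

A real service would also match on device capabilities (e.g. the make and model details the server stores), which is omitted here for brevity.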
  • the second user 406 may use the application to identify a first user 405 located at the Great Wall 400 and moreover determine that the first user has a particular image capture device, such as a Nokia Lumia 925, and that the first user's status indicates his availability for "lending" his camera remotely.
  • the second user sends a request to the first user asking if he can remotely "borrow" the camera on the Nokia Lumia 925 to take pictures at the present time.
  • the image capture/camera functionality of the first device may be turned on, and an image may be displayed on a display of the first device showing the image currently being captured, namely an image 407 corresponding to the current image capture viewpoint 403, i.e. the viewfinder image.
  • the viewfinder image 407 is sent 302 to the second device 402.
  • the viewfinder image 407 is received at the second device 402 and displayed thereon.
  • the second user can see the view of the camera of the remote first device, i.e. the second user can see exactly the same view as that captured by the remote first image capture device.
  • this may be performed substantially in real time.
  • the image 407 of the viewfinder which is transmitted from the first device to the second device may be compressed or provided at a low resolution, since the viewfinder image 407 is merely for establishing the desired image capture viewpoint and a high-resolution image is not required at this stage.
  • the first user of the first image capture device is holding and aiming the image capture device so that it has a first image capture viewpoint in which the captured image 407 has the Great Wall on a right hand side of the image; moreover, it only partially encompasses the Great Wall, with some of the Great Wall being cut off from the captured image.
  • the second user upon seeing such a viewpoint image 407 on his device 402 may provide a user input for requesting a change in the image capture viewpoint.
  • Such an input may correspond to the second user swiping on the displayed image to the right (as indicated by arrow 409) or tilting/aiming the second device 402 towards the right (as indicated by arrow 410).
  • Such user input(s) are detected and a magnitude and direction of a desired change of image capture viewpoint are determined therefrom.
  • Such information is sent 201 in a request to change the image capture viewpoint.
  • a user identifiable indication 404 is generated on the first device 401, in this case an arrow pointing towards the right, indicating to the first user 405 that the second user 406 wishes the image capture device to be moved towards the right.
  • Figure 4b shows the scenario wherein the first user has himself or herself moved the first image capture device (turned it to the right) so as to change its image capture viewpoint to a new image capture viewpoint 411, i.e. change the angle and direction 411' at which the first image capture device is aiming.
  • a further viewfinder image 412 of the revised image capture viewpoint is sent 302 to the second device.
  • if the user of the second device is content with the revised image capture viewpoint, he may send a control signal 310 (for example by selecting a take photo, shoot, or shutter release user interface element, such as a capture photo icon) which causes the first image capture device to take a photograph and capture an image at the desired image capture viewpoint.
  • This captured image may be sent 313 to the second device for storage thereon.
  • both devices may store the captured image such that the captured image is shared between the devices.
  • Examples of the invention may take the form of a method, an apparatus, a computer program or a system. Accordingly, examples may be implemented in hardware, software or a combination of hardware and software.
  • the blocks support: combinations of means for performing the specified functions; combinations of steps for performing the specified functions; and computer program instructions/algorithm/user interface for performing the specified functions. It will also be understood that each block, and combinations of blocks, can be implemented by special purpose hardware-based systems which perform the specified functions or steps, or combinations of special purpose hardware and computer program instructions.
  • Figure 5 schematically illustrates an example of an apparatus 500 comprising means configured to enable the apparatus to at least perform the above described methods, not least for example the method for the first device and/or the method for the second device.
  • Figure 5 focuses on the functional components necessary for describing the operation of the apparatus.
  • the apparatus 500 comprises a controller 501.
  • the controller 501 can be in hardware alone (e.g. processing circuitry comprising one or more processors and memory circuitry comprising one or more memory elements), have certain aspects in software including firmware alone or can be a combination of hardware and software (including firmware).
  • the controller may be implemented using instructions that enable hardware functionality, for example, by using executable computer program instructions in a general-purpose or special-purpose processor that may be stored on a computer readable storage medium (disk, memory etc.) or carried by a signal carrier to be performed by such a processor.
  • the apparatus 500 comprises a controller 501 which is provided by a processor 502 and memory 503.
  • although a single processor and a single memory are illustrated, in other implementations there may be multiple processors and/or multiple memories, some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.
  • the memory 503 stores a computer program 504 comprising computer program instructions 505 that control the operation of the apparatus when loaded into the processor 502.
  • the computer program instructions provide the logic and routines that enable the apparatus to perform the methods described above.
  • the at least one memory 503 and the computer program instructions 505 are configured to, with the at least one processor 502, cause the apparatus 500 at least to perform the method described, for example with respect to Figures 1, 2 and 3.
  • the processor 502 is configured to read from and write to the memory 503.
  • the processor 502 may also comprise an input interface 506 via which data and/or commands are input to the processor 502, and an output interface 507 via which data and/or commands are output by the processor 502.
  • the memory 503 stores a computer program 504 comprising computer program instructions 505.
  • the instructions control the operation of the apparatus 500 when loaded into the processor 502.
  • the processor 502 by reading the memory 503 is able to load and execute the computer program 504.
  • the computer program instructions 505 provide the logic and routines that enable the apparatus 500 to perform the methods described above and illustrated in Figures 1, 2 and 3.
  • the computer program 504 may arrive at the apparatus 500 via any suitable delivery mechanism 511.
  • the delivery mechanism 511 may be, for example, a non-transitory computer-readable storage medium, a computer program product, a memory device, a record medium such as a compact disc read-only memory or digital versatile disc, or an article of manufacture that tangibly embodies the computer program 504.
  • the delivery mechanism may be a signal configured to reliably transfer the computer program 504.
  • any such computer program instructions may be loaded onto a computer or other programmable apparatus (i.e., hardware) to produce a machine, such that the instructions when performed on the programmable apparatus create means for implementing the functions specified in the blocks.
  • These computer program instructions may also be stored in a computer-readable medium that can direct a programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the blocks.
  • the computer program instructions may also be loaded onto a programmable apparatus to cause a series of operational steps to be performed on the programmable apparatus to produce a computer-implemented process such that the instructions which are performed on the programmable apparatus provide steps for implementing the functions specified in the blocks.
  • the apparatus 500 may receive, propagate or transmit the computer program 504 as a computer data signal.
  • the apparatus may be provided in a module.
  • module refers to a unit or apparatus that excludes certain parts/components that would be added by an end manufacturer or a user.
  • the apparatus may be provided in an electronic device, for example a mobile terminal, according to an exemplary embodiment of the present invention. It should be understood, however, that a mobile terminal is merely illustrative of an electronic device that would benefit from examples of implementations of the present disclosure and, therefore, should not be taken to limit the scope of the present disclosure to the same. While in certain implementation examples the apparatus may be provided in a mobile terminal, other types of electronic devices, such as, but not limited to, hand portable electronic devices, portable digital assistants (PDAs), pagers, mobile computers, desktop computers, televisions, gaming devices, laptop computers, tablets, cameras, video recorders, GPS devices and other types of electronic systems, may readily employ examples of the present disclosure. Furthermore, devices may readily employ examples of the present disclosure regardless of their intent to provide mobility.
  • the apparatus 500 may, for example, be an electronic device, a client device, mobile cellular telephone, a wireless communications device, a hand-portable electronic device etc. or a module or chipset for use in any of the foregoing.
  • Figure 6 schematically illustrates a device 600 comprising the apparatus 500 as well as additional physical elements/components/means configured to perform the above described methods.
  • each of the components described below may be one or more of any device, means or circuitry embodied in hardware, software or a combination of hardware and software that is configured to perform the corresponding functions of the respective components as described in greater detail below.
  • the device comprises at least one processor 502 and at least one memory 503 including computer program code 504, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the above described methods, such as those described with regard to Figures 1, 2 and 3.
  • the apparatus comprises one or more various input means in communication with the controller/processor such as:
  • communication device or means, for example wired or wireless communication means 601, for communicating in one or more networks for receiving information/data/images and control signals;
  • movement sensors or sensing means 602 for sensing changes in movement of the device 600.
  • Such means may include, not least for example, one or more gyroscopes configured to determine a change in angle of the device with respect to one or more orthogonal axes so as to detect changes in yaw, pitch, roll or orientation of the device, and GPS or other position sensing means to determine a position of the device;
  • image capturing means 603 such as an image sensor or camera;
  • user interface means 604 for receiving user input, for example keys, buttons or a touch sensitive input device; and
  • audio input means 605 such as a microphone to receive a user's audio input.
  • the controller/processor may be in communication with one or more various output means, not least for example:
  • a display 606 which may be a touch sensitive display
  • a communications device or means 607, e.g. for wirelessly transmitting information and data such as, not least for example, captured images, requests and control signals; and user interface output means 608, which could correspond to audio, visual or haptic output devices for providing the user identifiable indication.
  • such means may correspond to an audio output means such as a speaker for providing an audio output from the device.
  • instead of capturing and transmitting an image 407, other media may instead be captured such as one or more of: a sequence of images, audio or video.
  • a video may instead be captured and sent.
  • a video may instead be remotely captured and sent to the second device.
  • the wording 'send', 'receive' and 'communication' and their derivatives mean operationally sending/receiving/in communication. It should be appreciated that any number or combination of intervening components can exist (including no intervening components).
  • communication between the first and second devices may be via one or more base stations in a mobile cellular telecommunication network, or routers of a WLAN.
  • references to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device, whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
  • the term 'circuitry' refers to all of the following:
  • circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • circuitry would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.
  • circuitry would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.
  • the apparatus 600 is embodied on a hand held portable electronic device, such as a mobile telephone, tablet or personal digital assistant, that may additionally provide one or more audio/text/video communication functions (e.g. tele-communication, video-communication, and/or text transmission (Short Message Service (SMS)/Multimedia Message Service (MMS)/emailing) functions), interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. Moving Picture Experts Group-1 Audio Layer 3 (MP3) or other format and/or (frequency modulation/amplitude modulation) radio broadcast recording/playing), downloading/sending of data functions, and image capture functions (e.g. using a digital camera).
  • Examples of the present invention provide both a method and corresponding apparatus consisting of various modules or means that provide the functionality for performing the steps of the method.
  • the modules or means may be implemented as hardware, or may be implemented as software or firmware to be performed by a computer processor.
  • examples of the invention can be provided as a computer program product including a computer readable storage structure embodying computer program instructions (i.e. the software or firmware) thereon for performing by the computer processor.
  • 'example' or 'for example' or 'may' in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some or all other examples.
  • 'example', 'for example' or 'may' refers to a particular instance in a class of examples.
  • a property of the instance can be a property of only that instance or a property of the class or a property of a sub-class of the class that includes some but not all of the instances in the class.
  • the apparatus described may alternatively or in addition comprise a distributed system of apparatus, for example, a slave/master or client/server apparatus system.
  • each apparatus forming a component and/or part of the system provides (or implements) one or more features which collectively implement an embodiment of the invention.
  • an apparatus is re-configured by an entity other than its initial manufacturer to implement an embodiment of the invention by being provided with additional software, for example by a user downloading such software, which when executed causes the apparatus to implement an example of an embodiment of the invention (such implementation being either entirely by the apparatus or as part of a system of apparatus as mentioned hereinabove).


Abstract

A method, apparatus, computer program and system for requesting a change in an image capture viewpoint is disclosed. The method comprises receiving, at a first image capture device from a second device, a request to change an image capture viewpoint of the first device (101); and generating at the first device, responsive to the received request, a user identifiable indication for indicating the requested change in the image capture viewpoint (102).

Description

METHOD, APPARATUS, COMPUTER PROGRAM AND SYSTEM
TECHNOLOGICAL FIELD

Examples of the present disclosure relate to a method, apparatus, computer program and system. In particular, though without prejudice to the foregoing, certain examples relate to a method, apparatus, computer program and system for requesting a change in an image capture viewpoint.

BACKGROUND
Image capture devices are well known. However, conventional image capture devices and systems are not always optimal as, typically, a user wishing to capture an image with an image capture device must physically be present him/herself in order to manually control the image capture device to capture an image of a desired scene. Thus a user is limited to capturing images of his/her own surrounding area. Certain examples of the present disclosure seek to provide an improved method, apparatus and computer program for remotely capturing an image.

The listing or discussion of any prior-published document or any background in this specification should not necessarily be taken as an acknowledgement that the document or background is part of the state of the art or is common general knowledge. One or more aspects/examples of the present disclosure may or may not address one or more of the background issues.
BRIEF SUMMARY
According to at least some but not necessarily all examples of the disclosure there is provided a method comprising causing, at least in part, actions that result in:
receiving, at a first image capture device from a second device, a request to change an image capture viewpoint of the first device; and
generating at the first device, responsive to the received request, a user identifiable indication for indicating the requested change in the image capture viewpoint.

According to at least some but not necessarily all examples of the disclosure there is provided a method comprising causing, at least in part, actions that result in: sending, from a second device to a first device, an instruction to change an image capture viewpoint of the first device;
wherein the instruction is configured to cause the first device to generate a user identifiable indication for the requested change of the image capture viewpoint.
According to at least some but not necessarily all examples of the disclosure there is provided an apparatus comprising means configured to enable the apparatus at least to perform one or more of the first and second methods mentioned above.

According to at least some but not necessarily all examples of the disclosure there is provided an apparatus comprising:
at least one processor; and
at least one memory including computer program code;
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform one or more of the first and second methods mentioned above.
According to at least some but not necessarily all examples of the disclosure the above apparatuses may be comprised in a chipset, module or device.
According to at least some but not necessarily all examples of the disclosure there is provided:
an apparatus comprising means configured to enable the apparatus at least to perform the first method mentioned above; and
an apparatus comprising means configured to enable the apparatus at least to perform the second method mentioned above.
According to at least some but not necessarily all examples of the disclosure there is provided a computer program that, when performed by at least one processor, causes at least one or more of the first and second methods mentioned above to be performed.
According to at least some but not necessarily all examples of the disclosure there is provided a non-transitory computer readable medium encoded with instructions that, when performed by at least one processor, cause at least one or more of the first and second methods mentioned above to be performed.

The above examples and the accompanying claims may be suitably combined in any manner apparent to one of ordinary skill in the art.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of various examples that are useful for understanding the detailed description, reference will now be made, by way of example only, to the accompanying drawings in which:
Figure 1 schematically illustrates a method for a first apparatus;
Figure 2 schematically illustrates a method for a second apparatus;
Figure 3 schematically illustrates a flowchart of a system comprising a first apparatus and a second apparatus;
Figures 4a and 4b schematically illustrate use of examples of the present disclosure; Figure 5 schematically illustrates an apparatus; and
Figure 6 schematically illustrates a device.
DETAILED DESCRIPTION
The figures schematically illustrate, according to one aspect of the present disclosure, a method 100 for performance on a first device 401, the method comprising causing, at least in part, actions that result in:
receiving 101, at a first image capture device 401 from a second device 402, a request to change an image capture viewpoint 403 of the first device; and
generating 102 at the first device 401, responsive to the received request, a user identifiable indication 404 for indicating to a user 405 of the first device 401 the requested change in the image capture viewpoint.
The figures also schematically illustrate, according to another aspect of the present disclosure, a method 200 for performance on a second device 402, the method comprising causing, at least in part, actions that result in:
sending 201, from the second device to the first image capture device 401, a request to change the image capture viewpoint 403 of the first image capture device; wherein the request is configured such that, upon receipt at the first image capture device 401, the request causes the first image capture device to generate a user identifiable indication 404 for indicating to a user 405 of the first image capture device the requested change of the image capture viewpoint.

The first image capture device is a generic term that encompasses, for example, a device having image capturing functionality such as an image sensor or camera, or a camera enabled portable handheld electronic device such as a mobile phone, tablet or any other device having image capturing functionality.
The image capture viewpoint may, for example, correspond to the viewpoint at which an image is captured by the first image capture device. This may correspond to at least one of: position, orientation, direction, tilt or angle of the image capture device. Furthermore, the image capture viewpoint may correspond to at least one of: field of view, angle of view, perspective and zoom level of the image captured by the image capture device. In certain examples, the image capture viewpoint may correspond to a viewfinder view of the image capture device.

The user identifiable indication for indicating to a user of the first image capture device the requested change in the image capture viewpoint may correspond to visual, aural or haptic indicators configured to indicate a requested change in the image capturing viewpoint. For example, the indicators may correspond to a directional arrow displayed on a display of the image capture device indicating a request for the first device to be tilted/moved in a particular direction or to adjust a zoom level so as to alter the image capture device's image capture viewpoint.
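The attributes an image capture viewpoint may comprise, as listed above, could be represented along the following lines. This is a minimal illustrative sketch only; the class and field names are the author's assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass

# Hypothetical representation of an image capture viewpoint, grouping the
# attributes named in the description: position, direction, tilt and zoom.
@dataclass
class Viewpoint:
    position: tuple       # e.g. (latitude, longitude) of the capture device
    direction_deg: float  # compass direction the camera is aimed, degrees
    tilt_deg: float       # tilt from horizontal, degrees
    zoom_level: float     # zoom level of the captured image

# Example: a viewpoint aimed east with a 2x zoom.
vp = Viewpoint(position=(40.4319, 116.5704), direction_deg=90.0,
               tilt_deg=10.0, zoom_level=2.0)
```

A request to change the viewpoint could then be expressed as a delta against one or more of these fields.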
Examples of the present disclosure provide the advantage of enabling a second user of a second device, which is remote from a first image capture device and a first user of the same, to indirectly and remotely control the first image capture device. Examples enable a request for adjustment of an image capture viewpoint of the first image capture device to be sent from the second device to the first image capture device, which causes user identifiable indications to be generated and presented to the first user for indicating the requested change in the image capture viewpoint. Advantageously, examples provide an improved method, apparatus and system for remote image capture that enables a remote second user to request a change in the image capture viewpoint. The first user of the first device is made aware of the request via the generated and presented user identifiable indication and may thus make the requested change to the image capture viewpoint, for example by manually moving the image capture device in the manner indicated. In further examples of the present disclosure, once the second user is satisfied with the image capture viewpoint, the second user of the second device can send a control signal to the first image capture device to capture an image having the desired image capture viewpoint. Once this has been done, the captured image may be sent to the second device. Advantageously, such further examples enable the second user of the second device to remotely control a remote image capture device by remotely (indirectly) controlling the image capture viewpoint of the image capture device, with the assistance of the first user via the user identifiable indications, and also remotely controlling the image capture device to capture an image having the desired image capture viewpoint and sending the same to the second device.
An example of a method for requesting a change in an image capture viewpoint will now be described with reference to the figures. Similar reference numerals are used in the figures to designate similar features. For clarity, all reference numerals are not necessarily displayed in all figures.
Figure 1 schematically illustrates a method 100 according to an example of the present disclosure. Figure 1 shows a method 100 which may be performed by an apparatus 500 (as shown in Figure 5) which may be comprised in a first image capture device 401 (as shown in Figure 4). In block 101 a request to change an image capture viewpoint of a first image capture device is received from a second device 402. The request to change an image capture viewpoint may comprise requesting a change in one or more of the: position, orientation, direction, tilt, angle and zoom level of the image capture device, as well as the field/angle of view or perspective of the image captured by the image capture device. It is to be appreciated that the image capture could comprise video capture and that the image captured by the image capture device may be one or more images of a video sequence or a video stream, such as corresponding to the image capture viewpoint of a viewfinder of the image capture device.
The request may be received via a receiver or antenna of the first device. In block 102, responsive to the received request, a user identifiable indication 404 is generated for indicating to a first user 405 of the first device 401 a requested change in the image capture viewpoint 403. The user identifiable indication may be a visual, aural or haptic user identifiable indication that indicates a requested change of viewpoint. This may in certain examples correspond to the generation of one or more visual indications, such as one or more directional arrows, displayed on a display of the first device. Further, this may in certain examples correspond to the additional generation of one or more audio indications giving one or more directional prompts.
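The mapping performed in block 102, from a received request to a user identifiable indication, could be sketched as follows. The request message format (a dictionary with `direction` and `zoom` keys) is a hypothetical illustration, not a format defined by the disclosure.

```python
# Sketch of block 102: turning a received viewpoint-change request into a
# user identifiable indication, here a directional arrow or zoom symbol
# suitable for display on the first device.
def indication_for(request: dict) -> str:
    """Return a glyph indicating the requested change of viewpoint."""
    arrows = {"left": "←", "right": "→", "up": "↑", "down": "↓"}
    direction = request.get("direction")
    if direction in arrows:
        return arrows[direction]
    if request.get("zoom") == "in":
        return "+"
    if request.get("zoom") == "out":
        return "-"
    return "?"  # unrecognised request

ind = indication_for({"direction": "right"})
```

In practice the indication could equally be an audio prompt or a haptic cue, as the description notes; the arrow is just the example the text itself uses.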
Figure 2 schematically illustrates a method 200 which may be performed by an apparatus 500 which may be comprised in a second device 402 according to an example of the present disclosure. In block 201, an instruction/request to change an image capture viewpoint of a first device is sent from the second device to the first device. The instruction/request is configured such that, upon receipt at the first device, the instruction/request triggers/causes the first device to generate a user identifiable indication for indicating to a user of the first device the requested change of the image capture viewpoint.
The methods of Figures 1 and 2 enable a second user of a second device, remote from an image capture device, to remotely control the image capture viewpoint of a first image capture device (albeit indirectly remotely controlling the viewpoint via the first user). Advantageously, this allows the remote user to frame a desired shot remotely without needing specialist robotic hardware/actuators to remotely control movement of the first image capture device.
In the methods set out above and discussed further below, the second device 402 acts as a 'master' device to the 'slave' first image capture device 401.
Figure 3 schematically illustrates a flowchart of a system according to an example of the present disclosure. Method blocks performed by the first image capture device (the "slave" device) are set out on the right hand side of the figure, whilst method blocks for the second device (the "master" device) are set out on the left hand side.
In block 301 of method 300, an image or a video, for example an image file and/or a video file, having a particular image/video capture viewpoint, is captured by a first image/video capture device. The captured image/video is sent in block 302 by the first device and is received, in block 303, by the second device. The transmission of the captured image/video may be via any suitable communication network(s), for example a cellular communication network or other wireless communication network. In block 304 the received image/video captured by the first image capturing device is displayed on a display of the second device. A second user of the second device, upon viewing the captured image/video (having a particular imaging viewpoint) on the display of the second device can then decide whether or not the image/video is appropriately framed or if the second user would wish to change the image/video viewpoint, i.e. seek to adjust the aiming or zoom level of the first image/video capture device.
Should the second user wish to change the image/video capture viewpoint, the user may provide a user input in this regard. In block 305, the user input to request a change of the image/video capture viewpoint is received at the second device. The user input for requesting a change of the image/video capture viewpoint of the first image/video capture device may comprise detecting movement of the second device. For example, the second user of the second device may change the orientation or direction of the second device by manually manipulating the second device himself or herself. A change in the movement may be detected, for example detecting a change in the orientation, yaw, pitch, roll, tilt or angle of direction of the device via appropriate sensors of the second device. Additionally, sensors may be able to detect the direction and magnitude of such movements. Information regarding the same may be comprised in the instruction/request which is sent to the first device in block 201. In addition or alternatively, user input for requesting a change of the image/video capture viewpoint of the first image/video capture device may comprise the second user of the second device actuating a user interface element of the second device, for example a button or key of a user input device, or effecting a gesture on a touch screen of the second device. For example, a user may provide an indication of a desire to change the image/video capture viewpoint to the left by performing a leftward swiping gesture on the image/video displayed on the second device, which is detected. In certain other particular examples, the second device may itself have zoom controls, actuation of which by the second user may be used to provide the user input of block 305 to request a change in the zoom level. Information concerning the requested change of image/video capture viewpoint is sent in the request of block 201.
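Deriving a direction and magnitude from a detected gesture, as in block 305, could look roughly like the following. The swipe-vector convention and the request field names are assumptions made for illustration only.

```python
# Sketch of block 305: mapping a detected swipe vector on the second
# device's touch screen to a viewpoint-change request (block 201).
# dx/dy are the swipe displacement in pixels; positive dx is rightward,
# positive dy is downward (a common touch-screen convention).
def request_from_swipe(dx: float, dy: float) -> dict:
    """Map a swipe vector to a requested direction and magnitude."""
    if abs(dx) >= abs(dy):
        direction = "right" if dx > 0 else "left"
        magnitude = abs(dx)
    else:
        direction = "down" if dy > 0 else "up"
        magnitude = abs(dy)
    return {"direction": direction, "magnitude": magnitude}

# A mostly-leftward swipe yields a request to move the viewpoint left.
req = request_from_swipe(dx=-120.0, dy=30.0)
```

A tilt of the second device detected by its movement sensors could be mapped to a request in the same way, with the gyroscope's angle change standing in for the swipe vector.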
Other forms of user input for requesting a change in the image/video capture viewpoint may also be provided, such as receiving audio input from the second user which is used in forming the request.
In block 101 , the request is received at the first device and in block 102, a user identifiable indication is generated at the first device, responsive to the received request. The user identifiable indication is configured so as to indicate to a user of the first device the requested change in the image/video capture viewpoint. The user identifiable indication may comprise visual, aural or haptic cues. For example, displaying a visual indication representative of the requested change in the image/video capture viewpoint on a display of the first device.
In one particular example one or more arrows pointing in particular direction(s) (e.g. up, down, left, right or diagonal) may be displayed indicative of a requested direction of movement of the second device. Moreover, an indication of a requested magnitude of the movement of the second device may also be provided, for example not least related to an attribute of the displayed arrow such as its size or colour or brightness. Other means for providing user identifiable indication for indicating a requested change of the image capture viewpoint may also be considered, such as generating text or audio to indicate a desired change in the image capture viewpoint.
In block 306, upon generation of the user identifiable indication, the first user of the image/video capture device is informed of the requested change in the image/video capture viewpoint. The first user may thus himself or herself manually manipulate the first image/video capture device, for example aim, pan or tilt the image/video capture device in the manner directed. As indicated by the feedback loop arrow 307, a further image/video may be captured having the revised image/video capture viewpoint, which is sent to the second device for display thereon. In the feedback loop, the method blocks 301-305, 201, 101, 102 and 306 may be repeated until the second user is content with the image/video capture viewpoint of the displayed image/video. Once this is the case, the flowchart proceeds via arrow 308 to block 309 in which an input from the second user of the second device is received for controlling the first device to capture an image/video. Responsive to this, a control signal is generated and in block 310 the control signal is sent to the first device for controlling the first device to capture a further image/video. In block 311 the control signal is received and in block 312, responsive to receipt of the control signal, the first device is caused to be controlled so as to capture a further image/video. The image/video capture of block 312 may differ from the image/video capture in block 301. Firstly, the image/video capture in block 301 may correspond to a real-time feed of streaming video footage captured by the first image/video capturing device. Such video footage, or sample images/videos, may be compressed or downsampled prior to transmission to the second device to reduce bandwidth requirements of the sending in block 302. The image/video captured in block 312 may be at a higher resolution than that of block 301 and may further involve optimisation of the image/video capturing process, e.g. via focus, ISO, flash or shutter speed adjustments.
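The slave-side handling of incoming messages in blocks 101/102 and 311-313 could be sketched as a single dispatch routine. The message types, the `camera` interface and the `FakeCamera` stand-in are all hypothetical names introduced for this sketch, not part of the disclosure.

```python
# Non-authoritative sketch of the slave (first device) message handling:
# a viewpoint-change request yields a user identifiable indication
# (blocks 101-102); a capture control signal triggers a higher-resolution
# capture whose result is returned to the master (blocks 311-313).
def slave_handle(message: dict, camera) -> dict:
    if message["type"] == "capture":
        image = camera.capture(resolution="high")   # block 312
        return {"type": "image", "payload": image}  # block 313: send to master
    if message["type"] == "change_viewpoint":
        # block 102: surface the request to the first user as an indication
        return {"type": "indication", "direction": message["direction"]}
    return {"type": "error"}

# Stand-in camera used purely so the sketch is self-contained.
class FakeCamera:
    def capture(self, resolution):
        return f"{resolution}-res image bytes"

reply = slave_handle({"type": "capture"}, FakeCamera())
```

The master side would loop: display the received viewfinder feed, send `change_viewpoint` messages until the framing is right, then send a single `capture` message (arrow 308 to block 309).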
In addition to the second device sending an image/video capture control signal, other control signals (not shown) may also be sent to automatically control functionality of the first image/video capture device, for example controlling: zoom, focus, ISO, flash and shutter speed levels.
In block 313, the captured further image/video is sent from the first device to the second device. In block 314, the sent captured further image/video is received at the second device. The received captured image/video is stored on the second device in block 315.
Optionally, in block 316, the storage at the first device of the captured further image/video may be prevented such that only the second device has a copy of the captured image/video.
The flowchart of Figure 3 represents one possible scenario among others. The order of the blocks shown is not absolutely required, so in principle, the various blocks can be performed out of order. Not all the blocks are essential. In certain examples, one or more blocks may be performed in a different order or overlapping in time, in series or in parallel; one or more blocks may be omitted, added or changed in some combination of ways.
In a further example, communication, such as voice or text communication, may also occur during the process of Figure 3. During such communication the second user may provide the user identifiable indications in the form of voice or text instructions/guidance as to the images/videos the second user is desirous of capturing, e.g. what the image/video is to be of, how it is to be framed/composed and so on. Instead of the second user sending a control signal to capture each image/video, the first user can take photographs as instructed which are automatically sent to the second device upon capture (and optionally which are prevented from being stored locally on the first image/video capture device itself).
Figures 4a and 4b schematically illustrate an example of the present disclosure in use. In this example, a second user 406 of second device 402 wishes to take some pictures of the Great Wall of China (figuratively illustrated as 400). However the second user may well not be at the Great Wall and may not even be in China. The user downloads and installs on his device 402 a remote image capture application and registers as a member of a remote image capture service. The application may enable the user to search for other registered members of the service, for example by searching for one or more members based on:
location (e.g. based on their current particular location, or based on their proximity to the second user),
particular member/username, e.g. a friend of the second user whom the second user knows is in China visiting the Great Wall,
status of member, e.g. availability to remotely 'share' camera, and/or
based on image capturing device of the member.
One or more servers may keep track of the status and location of members of the service and store member details, such as details of a member's image capture device, for example the make and model number of the portable electronic wireless communications device comprising image capturing functionality. The server may monitor the members' current location and whether or not they are currently available for taking part in a remote camera control operation.
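The server-side member lookup described above could be sketched as a simple filter over member records. The data model (dictionaries with `username`, `location`, `available` and `device` fields) is an assumption for illustration; a real service would use a proper database.

```python
# Sketch of the service's member search: find registered members at a
# given location whose status marks them as available for remote camera
# "lending", as described in the text.
def find_available_members(members, location):
    """Return usernames of available members at the given location."""
    return [m["username"] for m in members
            if m["location"] == location and m["available"]]

members = [
    {"username": "first_user", "location": "Great Wall", "available": True,
     "device": "Nokia Lumia 925"},
    {"username": "other", "location": "Shanghai", "available": True,
     "device": "Nokia Lumia 920"},
]
found = find_available_members(members, "Great Wall")
```

Searches by member name, device type or proximity to the second user, also mentioned in the text, would be further predicates of the same form.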
The second user 406 may use the application to identify a first user 405 located at the Great Wall 400 and moreover determine that the first user has a particular image capturing device, such as a Nokia Lumia 925, and that the first user's status indicates his availability for "lending" his camera remotely. The second user sends a request to the first user asking if he can remotely "borrow" the camera on the Nokia Lumia 925 to take pictures at the present time. Once the first user accepts the request, the image capture/camera functionality of the first device may be turned on and the first device may display the image currently being captured, namely an image 407 corresponding to the current image capture viewpoint 403, i.e. the angle and direction 403' at which the first image capture device is aiming. The viewfinder image 407 is sent 302 to the second device 402, where it is received and displayed. The second user can thus see the view of the camera of the remote first device, i.e. exactly the same view as that captured by the remote first image capture device. Preferably, where wireless communication bandwidth permits, this is performed substantially in real time. In order to reduce bandwidth requirements, the viewfinder image 407 transmitted from the first device to the second device may be compressed or provided at a low resolution. Since the viewfinder image 407 is merely for establishing the desired image viewpoint direction, i.e. for framing a shot, it need not be of as high a quality as the image that may subsequently be remotely captured, as discussed below.
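One simple way to provide the low-resolution viewfinder image is to subsample the sensor frame before transmission. The sketch below, which models the frame as a 2D list of pixel values, is a hypothetical illustration of that bandwidth-saving step; the disclosure does not prescribe any particular subsampling or compression scheme.

```python
def downscale_viewfinder(frame, factor):
    """Subsample a viewfinder frame (a 2D list of pixel values) by keeping
    every `factor`-th sample in each dimension. The preview sent to the
    second device shrinks by roughly factor**2, while the photo itself
    can still later be captured at full sensor resolution."""
    if factor < 1:
        raise ValueError("factor must be >= 1")
    return [row[::factor] for row in frame[::factor]]
```

A factor of 4, for example, cuts the transmitted pixel count by roughly a factor of 16, which matters when the viewfinder view is streamed substantially in real time over a wireless link.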
As shown in Figure 4a, the first user of the first image capture device is holding and aiming the image capture device so that it has a first image capture viewpoint in which the captured image 407 has the Great Wall on the right hand side of the image; moreover, the viewpoint only partially encompasses the Great Wall, with some of it being cut off from the captured image. The second user, upon seeing such a viewpoint image 407 on his device 402, may provide a user input for requesting a change in the image capture viewpoint. Such an input may correspond to the second user swiping on the displayed image towards the right (as indicated by arrow 409) or tilting/aiming the second device 402 towards the right (as indicated by arrow 410). Such user input(s) are detected and a magnitude and direction of a desired change of image capture viewpoint are determined therefrom. This information is sent 201 in a request to change the image capture viewpoint. Upon receipt of the request, a user identifiable indication 404 is generated on the first device 401, in this case an arrow pointing towards the right indicating to the first user 405 that the second user 406 wishes the image capturing device to be moved towards the right.
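The derivation of a direction and magnitude from a detected swipe might look like the following sketch. The `degrees_per_pixel` scale factor and the returned field names are assumptions introduced for illustration, not values taken from the disclosure.

```python
import math


def swipe_to_viewpoint_request(dx, dy, degrees_per_pixel=0.1):
    """Map a swipe vector (in display pixels) on the second device to a
    requested change of the first device's image capture viewpoint.
    The dominant swipe axis gives the direction to indicate to the
    first user; the swipe length gives the requested magnitude."""
    magnitude_px = math.hypot(dx, dy)
    if abs(dx) >= abs(dy):
        direction = "right" if dx > 0 else "left"
    else:
        direction = "down" if dy > 0 else "up"
    return {"direction": direction,
            "magnitude_deg": round(magnitude_px * degrees_per_pixel, 1)}
```

A tilt of the second device could feed the same function, with `dx`/`dy` derived from gyroscope yaw/pitch deltas instead of touch coordinates.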
Figure 4b shows the scenario wherein the first user has moved the first image capture device (turned it to the right) so as to change its image capture viewpoint to a new image capture viewpoint 411, i.e. change the angle and direction 411' at which the first image capture device is aiming. A further viewfinder image 412 of the revised image capture viewpoint is sent 302 to the second device. Once the user of the second device is content with the revised image capture viewpoint, he may send a control signal 310 (for example by selecting a 'take photo', 'shoot' or shutter release user interface element, such as a capture photo icon) which causes the first image capture device to take a photograph, capturing an image at the desired image capture viewpoint. This captured image may be sent 313 to the second device for storage thereon. Furthermore, the captured image may be prevented from being stored on the first device so that only the second user has a copy of the photograph (and not the first device/first user). However, in an alternative mode of operation, both devices may store the captured image such that it is shared between the devices. Examples of the invention may take the form of a method, an apparatus, a computer program or a system. Accordingly, examples may be implemented in hardware, software or a combination of hardware and software.
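The capture-and-share step of Figure 4b, including the optional prevention of local storage, can be sketched as below. Here `camera` and `send_to_second_device` stand in for the real camera driver and transmission path; both names are hypothetical.

```python
def handle_capture_request(camera, send_to_second_device, store_locally=False):
    """Handle a 'take photo' control signal 310 from the second device:
    capture a full-quality image at the current viewpoint, send it to
    the requesting second device, and keep a local copy only when the
    sharing mode permits it."""
    image = camera()              # capture at the desired viewpoint
    send_to_second_device(image)  # the second user always receives a copy
    return image if store_locally else None
```

With `store_locally=False` the handler discards the first device's copy, matching the mode in which only the second user retains the photograph; with `store_locally=True` both devices keep it.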
Examples of the invention have been variously described using flowchart illustrations and schematic block diagrams. It will be understood that each block (of the flowchart illustrations and block diagrams), and combinations of blocks, can be implemented by computer program instructions of a computer program. These program instructions may be provided to one or more processor(s), processing circuitry or controller(s) such that the instructions which execute on the same create means for implementing the functions specified in the block or blocks. The computer program instructions may be executed by the processor(s) to cause a series of operational steps to be performed by the processor(s) to produce a computer implemented process such that the instructions which execute on the processor(s) provide steps for implementing the functions specified in the block or blocks described above with regard to Figures 1 to 3.
Accordingly, the blocks support: combinations of means for performing the specified functions; combinations of steps for performing the specified functions; and computer program instructions/algorithm/user interface for performing the specified functions. It will also be understood that each block, and combinations of blocks, can be implemented by special purpose hardware-based systems which perform the specified functions or steps, or combinations of special purpose hardware and computer program instructions.
Figure 5 schematically illustrates an example of an apparatus 500 comprising means configured to enable the apparatus to at least perform the above described methods, not least for example the method for the first device and/or the method for the second device.
Figure 5 focuses on the functional components necessary for describing the operation of the apparatus.
The apparatus 500 comprises a controller 501. Implementation of the controller 501 can be in hardware alone (e.g. processing circuitry comprising one or more processors and memory circuitry comprising one or more memory elements), can have certain aspects in software (including firmware) alone, or can be a combination of hardware and software (including firmware). The controller may be implemented using instructions that enable hardware functionality, for example, by using executable computer program instructions in a general-purpose or special-purpose processor that may be stored on a computer readable storage medium (disk, memory etc.) or carried by a signal carrier to be performed by such a processor.
In the illustrated example, the apparatus 500 comprises a controller 501 which is provided by a processor 502 and memory 503. Although a single processor and a single memory are illustrated, in other implementations there may be multiple processors and/or multiple memories, some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage. The memory 503 stores a computer program 504 comprising computer program instructions 505 that control the operation of the apparatus when loaded into the processor 502. The computer program instructions provide the logic and routines that enable the apparatus to perform the methods described above. The at least one memory 503 and the computer program instructions 505 are configured to, with the at least one processor 502, cause the apparatus 500 at least to perform the method described, for example with respect to Figures 1, 2 and 3.
The processor 502 is configured to read from and write to the memory 503. The processor 502 may also comprise an input interface 506 via which data and/or commands are input to the processor 502, and an output interface 507 via which data and/or commands are output by the processor 502.
The memory 503 stores a computer program 504 comprising computer program instructions 505. The instructions control the operation of the apparatus 500 when loaded into the processor 502. The processor 502, by reading the memory 503, is able to load and execute the computer program 504. The computer program instructions 505 provide the logic and routines that enable the apparatus 500 to perform the methods described above and illustrated in Figures 1, 2 and 3. The computer program 504 may arrive at the apparatus 500 via any suitable delivery mechanism 511. The delivery mechanism 511 may be, for example, a non-transitory computer-readable storage medium, a computer program product, a memory device, a record medium such as a compact disc read-only memory or digital versatile disc, or an article of manufacture that tangibly embodies the computer program 504. The delivery mechanism may be a signal configured to reliably transfer the computer program 504.
As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (i.e., hardware) to produce a machine, such that the instructions when performed on the programmable apparatus create means for implementing the functions specified in the blocks. These computer program instructions may also be stored in a computer-readable medium that can direct a programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the blocks. The computer program instructions may also be loaded onto a programmable apparatus to cause a series of operational steps to be performed on the programmable apparatus to produce a computer-implemented process such that the instructions which are performed on the programmable apparatus provide steps for implementing the functions specified in the blocks.
The apparatus 500 may receive, propagate or transmit the computer program 504 as a computer data signal.
The apparatus may be provided in a module. As used here 'module' refers to a unit or apparatus that excludes certain parts/components that would be added by an end manufacturer or a user.
The apparatus may be provided in an electronic device, for example a mobile terminal, according to an exemplary embodiment of the present invention. It should be understood, however, that a mobile terminal is merely illustrative of an electronic device that would benefit from examples of implementations of the present disclosure and, therefore, should not be taken to limit the scope of the present disclosure to the same. While in certain implementation examples the apparatus may be provided in a mobile terminal, other types of electronic devices, such as, but not limited to, hand portable electronic devices, portable digital assistants (PDAs), pagers, mobile computers, desktop computers, televisions, gaming devices, laptop computers, tablets, cameras, video recorders, GPS devices and other types of electronic systems, may readily employ examples of the present disclosure. Furthermore, devices may readily employ examples of the present disclosure regardless of their intent to provide mobility. The apparatus 500 may, for example, be an electronic device, a client device, a mobile cellular telephone, a wireless communications device, a hand-portable electronic device etc., or a module or chipset for use in any of the foregoing.
Figure 6 schematically illustrates a device 600 comprising the apparatus 500 as well as additional physical elements/components/means configured to perform the above described methods.
Although the device will be described below in terms of comprising various components, it should be understood that the components may be embodied as or otherwise controlled by a corresponding processing element or processor of the apparatus. In this regard, each of the components described below may be one or more of any device, means or circuitry embodied in hardware, software or a combination of hardware and software that is configured to perform the corresponding functions of the respective components as described in greater detail below.
The device comprises at least one processor 502 and at least one memory 503 including computer program code 504, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the above described methods, such as those described with regard to Figures 1, 2 and 3.
The apparatus comprises one or more various input means in communication with the controller/processor, such as:
communication device or means, for example wired or wireless communication means 601, for communicating in one or more networks and for receiving information/data/images and control signals;
movement sensors or sensing means 602 for sensing changes in movement of the device 600, not least for example one or more gyroscopes configured to determine a change in angle of the device with respect to one or more orthogonal axes so as to detect changes in yaw, pitch, roll or orientation of the device, and GPS or other position sensing means to determine a position of the device;
image capturing means 603, such as an image sensor or camera;
user interface means 604 for receiving user input, for example keys, buttons or a touch sensitive input device; and
audio input means 605, such as a microphone to receive a user's audio input.
Likewise, the controller/processor may be in communication with one or more various output means, not least for example:
a display 606, which may be a touch sensitive display;
a communications device or means 607, e.g. for wirelessly transmitting information and data such as, not least for example, captured images, requests and control signals; and
user interface output means 608, which could correspond to audio, visual or haptic output devices for providing the user identifiable indication. In certain examples such means may correspond to audio output means, such as a speaker, for providing an audio output from the device.
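On the first device, a received viewpoint-change request can be turned into the user identifiable indication 404 via output means such as those above. The mapping below, from request direction to an on-screen arrow plus a spoken prompt for the audio output means, is a hypothetical sketch: the glyphs and the prompt wording are not specified by the disclosure.

```python
# Visual indications for each requested direction, e.g. the rightward
# arrow 404 shown on the first device's display in Figure 4a.
ARROWS = {"left": "\u2190", "right": "\u2192", "up": "\u2191", "down": "\u2193"}


def user_identifiable_indication(request):
    """Return a (visual indication, audio prompt) pair for a received
    viewpoint-change request, for the display and speaker respectively."""
    direction = request["direction"]
    return ARROWS[direction], f"Please aim the camera further {direction}"
```

Either element of the pair alone would satisfy the indication: a device without a speaker could show only the arrow, while an audio-only prompt could serve a first user who is not looking at the screen.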
In examples of the disclosure given above, instead of capturing and transmitting an image 407, other media may instead be captured, such as one or more of: a sequence of images, audio or video. For example, instead of capturing and sending a still viewfinder image in blocks 301 and 302, a video may instead be captured and sent. Likewise, instead of the capturing and transmission of a still image in blocks 312 and 313, a video may instead be remotely captured and sent to the second device.
In the following description, the wording 'send', 'receive' and 'communication' and their derivatives mean operationally sending/receiving/in communication. It should be appreciated that any number or combination of intervening components can exist (including no intervening components). For example, communication between the first and second devices may be via one or more base stations in a mobile cellular telecommunication network, or routers of a WLAN. References to 'computer-readable storage medium', 'computer program product', 'tangibly embodied computer program' etc. or a 'controller', 'computer', 'processor' etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other devices. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device, whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc. As used in this application, the term 'circuitry' refers to all of the following:
(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and
(b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and
(c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
This definition of 'circuitry' applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term "circuitry" would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. The term "circuitry" would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.
In one example, the apparatus 600 is embodied on a hand held portable electronic device, such as a mobile telephone, tablet or personal digital assistant, that may additionally provide one or more audio/text/video communication functions (e.g. tele-communication, video- communication, and/or text transmission (Short Message Service (SMS)/ Multimedia Message Service (MMS)/emailing) functions), interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. Moving Picture Experts Group-1 Audio Layer 3 (MP3) or other format and/or (frequency modulation/amplitude modulation) radio broadcast recording/playing), downloading/sending of data functions, image capture function (e.g. using a (e.g. in-built) digital camera), and gaming functions. Examples of the present invention provide both a method and corresponding apparatus consisting of various modules or means that provide the functionality for performing the steps of the method. The modules or means may be implemented as hardware, or may be implemented as software or firmware to be performed by a computer processor. In particular, in the case of firmware or software, examples of the invention can be provided as a computer program product including a computer readable storage structure embodying computer program instructions (i.e. the software or firmware) thereon for performing by the computer processor.
Features described in the preceding description may be used in combinations other than the combinations explicitly described.
Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not. Although features have been described with reference to certain examples, those features may also be present in other examples whether described or not. Although examples of the present invention have been described in the preceding paragraphs, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as claimed.
The term 'comprise' is used in this document with an inclusive not an exclusive meaning. That is any reference to X comprising Y indicates that X may comprise only one Y or may comprise more than one Y. If it is intended to use 'comprise' with an exclusive meaning then it will be made clear in the context by referring to "comprising only one ..." or by using "consisting". In this description, reference has been made to various examples. The description of features or functions in relation to an example indicates that those features or functions are present in that example. The use of the term 'example' or 'for example' or 'may' in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some or all other examples. Thus 'example', 'for example' or 'may' refers to a particular instance in a class of examples. A property of the instance can be a property of only that instance or a property of the class or a property of a sub-class of the class that includes some but not all of the instances in the class. In the above description, the apparatus described may alternatively or in addition comprise apparatus which in some other embodiments comprises a distributed system of apparatus, for example, a slave/master or client/server apparatus system. In examples of embodiments where an apparatus provided forms (or a method is implemented as) a distributed system, each apparatus forming a component and/or part of the system provides (or implements) one or more features which collectively implement an embodiment of the invention.
In some examples of embodiments, an apparatus is re-configured by an entity other than its initial manufacturer to implement an embodiment of the invention by being provided with additional software, for example by a user downloading such software, which when executed causes the apparatus to implement an example of an embodiment of the invention (such implementation being either entirely by the apparatus or as part of a system of apparatus as mentioned hereinabove).
The above description describes some examples of embodiments of an invention however those of ordinary skill in the art will be aware of possible alternative structures and method features which offer equivalent functionality to the specific examples of such structures and features described herein above and which for the sake of brevity and clarity have been omitted from the above description. Nonetheless, the above description should be read as implicitly including reference to such alternative structures and method features which provide equivalent functionality unless such alternative structures or method features are explicitly excluded in the above description of the embodiments of the invention.
Whilst endeavouring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance it should be understood that the applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not particular emphasis has been placed thereon.

Claims

WHAT IS CLAIMED IS:

1. A method comprising causing, at least in part, actions that result in:
receiving, at a first image capture device from a second device, a request to change an image capture viewpoint of the first device; and
generating at the first device, responsive to the received request, a user identifiable indication for indicating the requested change in the image capture viewpoint.

2. The method of claim 1, further comprising at least one or more of:
capturing an image at an image capture viewpoint by the first device;
sending a captured image to a second device;
receiving a control signal from the second device for controlling the first device to capture a further image;
responsive to receipt of a control signal from the second device, capturing a further image at the first device;
sending a captured further image to the second device; and
preventing storage at the first device of a captured further image.

3. A method comprising causing, at least in part, actions that result in:
sending, from a second device to a first device, an instruction to change an image capture viewpoint of the first device;
wherein the instruction is configured to cause the first device to generate a user identifiable indication for the requested change of the image capture viewpoint.

4. The method of claim 3, further comprising receiving, at the second device, a user input to request a change of the image capture viewpoint of the first device.

5. The method of claim 4, wherein receiving the user input to request a change of the image capture viewpoint comprises one or more of:
detecting movement of the second device; and
detecting user actuation of a user interface of the second device.

6. The method of claim 3 or 4, wherein the method further comprises at least one or more of:
displaying, at the second device, an image or video captured at a first image capture viewpoint of the first device;
displaying, at the second device, a further image captured at a changed image capture viewpoint of the first device;
detecting a user input in the second device for controlling the first device to capture a second image or video;
sending a control signal to the first device for instructing the first device to capture a second image;
receiving a captured second image from the first device; and
storing at the second device a captured second image received from the first device.

7. The method of any one or more of the previous claims, wherein generating said user identifiable indication for indicating a requested change of the image capture viewpoint comprises:
displaying a visual indication representative of the requested change in the image capture viewpoint on a display of the first device.

8. The method of any one or more of the previous claims, wherein the request comprises at least one or more of:
information indicative of a requested direction of movement of the second device; and
information indicative of a requested magnitude of movement of the second device.

9. An apparatus comprising means configured to enable the apparatus at least to perform:
the method as claimed in one or more of claims 1 to 8.

10. An apparatus comprising:
at least one processor; and
at least one memory including computer program code;
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform:
the method as claimed in claim 1.

11. An apparatus comprising:
at least one processor; and
at least one memory including computer program code;
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform:
the method as claimed in claim 3.

12. A chipset comprising the apparatus of any one or more of previous claims 9 to 11.

13. A module comprising the apparatus of any one or more of previous claims 9 to 11 or the chipset of claim 12.

14. A device comprising the apparatus of any one or more of previous claims 9 to 11, the chipset of claim 12 or the module of claim 13.

15. The device of claim 14, wherein the device is configured for at least one of:
wireless communication, mobile telephony, portable handheld use.

16. A system comprising:
an apparatus comprising means configured to enable the apparatus at least to perform the method as claimed in one or more of claims 1, 2, 7 and 8; and
an apparatus comprising means configured to enable the apparatus at least to perform the method as claimed in one or more of claims 3 to 8.

17. A computer program that, when performed by at least one processor, causes at least the method as claimed in any one or more of claims 1 to 8 to be performed.

18. A carrier signal carrying the computer program as claimed in claim 17.

19. A computer readable storage medium encoded with instructions that, when performed by a processor, performs the method of any one or more of claims 1 to 8.

20. A user interface configured to enable the performance of the method of any one or more of claims 1 to 8.

21. A non-transitory computer readable medium encoded with instructions that, when performed by at least one processor, causes at least the following to be performed: the method as claimed in claim 1.

22. A non-transitory computer readable medium encoded with instructions that, when performed by at least one processor, causes at least the following to be performed: the method as claimed in claim 3.
Priority application PCT/CN2014/083617, filed 2014-08-04: Method, apparatus, computer program and system. Published as WO2016019489A1 on 2016-02-11.

