WO2013076720A1 - A system and methods for personalizing an image - Google Patents

A system and methods for personalizing an image

Info

Publication number
WO2013076720A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
file
scene
individual
receiving
Application number
PCT/IL2012/050457
Other languages
French (fr)
Inventor
Itzik Klein
Shai ROSEN
Original Assignee
Spoton-Video Ltd.
Application filed by Spoton-Video Ltd. filed Critical Spoton-Video Ltd.
Publication of WO2013076720A1 publication Critical patent/WO2013076720A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • H04N9/8211Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal the additional signal being a sound signal
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234345Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27Server based end-user applications
    • H04N21/274Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2743Video hosting of uploaded data from client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal

Definitions

  • the present invention relates to video and still images, more particularly, but not exclusively to personalizing video clips and still images.
  • Video clips are also used in social network services such as Facebook, wherein personal videos related to activities of an individual are uploaded to his personal profile. Video clips may also be used for training purposes. In such cases the clip may be used for analyzing an activity of an individual; for example for analyzing sport activities. Such sport activities may be, for example, skiing, biking, running or playing a game. Other clips may be used for security purposes.
  • ski resorts place cameras on the site. These cameras produce video clips of the visitors of the resort which may be used as a souvenir.
  • the video clips contain background details such as other people or objects in the area.
  • Moverati provides a system that captures video films on the site.
  • the automated video camera technology captures people moving in their favorite places, and then delivers the videos to a server for editing.
  • the personalization of the film is done by attaching a tag to each individual who wants a personalized video and then identifying the individual in the image by the attached tag.
  • US 2008002031 discloses a method and a system for personalizing video using a tag which is attached to an individual.
  • the tag may include GPS.
  • the method comprises activating the camera according to the movement of the individual.
  • WO 9810358 discloses tags for personalizing an image.
  • the tag may include an RFID.
  • WO 2010075430 discloses systems, methods and software for capturing and processing media items, such as digital images or video, associated with a sporting event such as a marathon or bicycle race.
  • the items may be recognized according to facial recognition.
  • the present invention discloses a method for personalizing a calibrated image of an individual in a scene; the method may comprise the steps of;
  • the method may further comprise receiving a plurality of calibrated images and coupling between the at least one position and an at least one image of the plurality of images; thereby providing a personalized frame.
  • the method may further comprise receiving a plurality of calibrated images and coupling between the at least one time stamp and an at least one image of the plurality of images; thereby providing a personalized frame.
  • the position of the individual may be recorded and transmitted by a smart phone application.
  • the invention discloses a method for personalizing a calibrated image of an individual in a scene; wherein the image is captured by a camera positioned at the scene; the method may comprise the steps of
  • the image may be a video, a video clip, a three dimensional image or a combination thereof.
  • the cropping may comprise cropping a large area for a nearby position and cropping a small area for a far position.
  • the cropping may also comprise predicting at least one new position; wherein the predicting the at least one new position comprises determining the at least one new position according to two consecutive positions recorded in the file.
  • the predicting can be based on movement data extracted from the location file or by dedicated detectors or by a combination thereof.
  • the movement may include speed, distance, maximal speed and average speed data, and it may be presented on a screen on top of the personalized image, as an overlay, beside, before, or after the personalized image or using any combination thereof.
  • the transmission of the file may be after a termination of the recording.
  • the method may further comprise capturing a second image by the portable computerized device during the scene and synchronizing the second image with the personalized image.
  • the synchronization may also be a fusion of the images.
  • the method may further comprise resizing each frame of the calibrated image to the same size; thereby providing a homogenous image.
  • the image may be a video, a video clip, a three dimensional image or a combination thereof.
  • the cropping may comprise cropping a large area for a nearby position and cropping a small area for a far position.
  • the position may be received by one member of a group consisting of GPS, WiFi transceiver, Cellular transceiver and a combination thereof.
  • the cropping may comprise predicting at least one new position; wherein the predicting the at least one new position comprises determining the at least one new position according to two consecutive positions recorded in the file.
  • the method may further comprise receiving audio data captured by the individual during the scene and synchronizing the audio data with the clipped image.
  • the present invention discloses a computer program placed on a magnetic readable media for personalizing a calibrated image of an individual in a scene, the computer program may comprise:
  • a first program instruction for receiving a file wherein the file comprises at least one position of the individual in the scene and an at least one time stamp;
  • the present invention discloses an apparatus for personalizing a calibrated image of an individual in a scene, the apparatus may comprise:
  • a receiving unit configured for receiving a file; wherein the file comprises at least one position of the individual in the scene and at least one timestamp; wherein the receiving unit is further configured for receiving the calibrated image;
  • a processor configured for clipping the calibrated image according to the file; thereby providing a personalized image.
  • Fig. 1 is a simplified schematic diagram illustrating a system for personalizing an image, according to embodiments of the present invention
  • Fig. 2 is a simplified schematic flowchart of a method of personalizing an image, according to embodiments of the present invention
  • Fig. 3 is a simplified schematic diagram illustrating a portable computerized device configured for personalizing an image, according to embodiments of the present invention
  • Fig. 4 is a simplified schematic diagram illustrating a server configured for personalizing an image, according to embodiments of the present invention
  • Fig. 5 is a simplified schematic flowchart of an exemplary scenario of predicting positions, according to embodiments of the present invention.
  • Fig. 6 is a simplified schematic flowchart of an exemplary scenario for personalizing an image, according to embodiments of the present invention.
  • Fig. 8 is a simplified schematic diagram illustrating two cameras recording a ski track.
  • Fig. 9 is a simplified schematic diagram illustrating a screen of an application displaying a personalized image.
  • the term personalizing an image refers hereinafter to rendering an image of a scene.
  • the image is captured by cameras located in a site related to the scene.
  • the rendered image focuses on a specific individual in the scene.
  • the personalization may include appending more data related to the individual or to the behavior of the individual at the scene.
  • the scene may start at one place and may end at another place. For example the scene may start at a beginning of a ski course and may end at the end of the ski course.
  • image refers hereinafter to a video film, video clip or a still image.
  • the image is a two dimensional or a three dimensional image.
  • cropping refers hereinafter to selecting a rectangle within the frame.
  • the cropping includes resizing the rectangle to the targeted frame size. The cropping is done on a frame-by-frame basis, so a rectangle of a different size and position can be selected for each frame of the image.
  • a system and methods for personalizing an image may be used for providing souvenir image at a ski resort or a cruise or any other tourist attraction or a sport event.
  • the system and methods may also be used for training purposes.
  • the system and methods may also be used for supervising purposes, for example for supervising a behavior of a child in a kindergarten.
  • the system may include one or more cameras for providing images from a scene.
  • the cameras may be high resolution cameras. Using a stationary camera with high resolution enables extracting many different personalized images from a single frame.
  • the personalized images are generated without incorporating complicated mechanical tracking systems.
  • the system also includes one or more portable computerized devices.
  • the portable computerized unit can be carried by any individual who wishes to personalize an image of a scene.
  • the portable computerized devices include a location detecting unit.
  • the location detecting unit is any unit capable of detecting location and time stamp.
  • Examples of a location detecting unit are a Global Positioning System (GPS) receiver, a detecting unit which detects position according to the radio signaling of the cellular network, a detecting unit which detects position according to the radio signaling of the WiFi network, or any combination of the above.
  • the portable computerized unit includes a processor and storage for processing and for storing location data and any other relevant data related to the scene and to the activity of the individual carrying the portable computerized device at the scene. The data may be stored in a file.
  • the portable computerized device may include a transmitting unit for transmitting the stored data to a server.
  • the portable computerized unit may be a cellular telephone.
  • the server may receive data files from a plurality of portable computerized devices.
  • the data files may include location data, inertial parameters, recorded sound and images taken from the portable computerized device during the scene.
  • the server may also receive a plurality of images from a plurality of cameras located in one or more sites.
  • the server may personalize one or more images according to the received data.
  • the server may transmit the personalized images to the individuals. For example the server may transmit a link of the personalized image to the portable computerized device of the individual. In another embodiment the link may be transmitted to an email address of the individual.
  • One technical problem dealt with by the embodiments of the disclosure is to personalize an image according to location data.
  • One other technical problem dealt with by the embodiments of the disclosure is to personalize an image for a plurality of individuals.
  • Yet one other technical problem is to enable an individual to personalize a video clip about his activity.
  • One technical solution is storing, at the portable computerized unit, location data of the position of the individual during the scene; transmitting the location data and at least one image of the scene to a server; calibrating the image and cropping the image according to the calibration and according to the location data. If the image is a video clip, the cropping may be done for each frame of the video. The cropping process may crop a large area for a nearby item and a small area for a far item.
  • the location data is GPS data.
  • the calibration is performed by a technician when the camera is first positioned on site. The calibration can be performed, for example, by designating a known position for selected pixels in a frame and then extrapolating the positions of the remaining pixels accordingly.
  • One other technical problem dealt with by the embodiments of the disclosure is to make the personalization available at any time, regardless of the weather conditions and of the communication infrastructure. Yet one other technical problem is to provide an image personalization method and system which reduces energy consumption.
  • One other technical solution is to save the location data at the mobile computerized device in a file; to send the data to the server and to perform the personalization at the server according to the data recorded in the file.
  • the file containing the location data and other relevant data that is saved at the computerized device does not need to be sent in real time; it can be sent when the user chooses or when conditions allow. This approach does not require real-time connectivity, which is not available everywhere, may cost money (in the case of cellular connectivity) and consumes a lot of energy, causing the battery of the portable computerized device to run out quickly.
  • the application does not rely on data streaming, which may suffer from data losses. Saving the data in a file and sending it as a file ensures data integrity.
  • One other technical problem is to overcome scenarios in which the recorded data does not provide enough information. For example there might be 25 frames per second in a video film while the GPS may provide location data every second.
  • One other technical solution is to perform a prediction of the movement of the individual and to perform the cropping according to the prediction.
  • One other technical problem is to incorporate additional information to the image.
  • Such information includes speed information, sound information recorded on the scene, map of the site including information about the location of the individual on the map, and of his route, and the like.
  • One other technical solution is to calculate the speed and position of the individual on the map from the location information and to incorporate it within the image. Yet one other technical solution is to extract the sound information from the file, extract the time-stamp information and match it to the relevant video frame.
  • One other technical problem is to provide post-production of an unlimited number of different customized images.
  • the customization includes changing the resolution, changing the zoom ratio, and selecting whether to incorporate speed information, sound information or an audio track, a map with the individual's route on it, and the like, all from the same source image.
  • One technical solution is receiving an image including a plurality of individuals, personalizing the image for each individual by cropping the image according to that individual's location file, and customizing the image or plurality of images according to the individual's requirements. In some cases, the individual requirements are stored in the location file.
  • System 100 includes one or more cameras 101 configured for providing images from a scene, one or more portable computerized devices 103 configured for recording data from the scene, and one or more servers 102 configured for personalizing an image.
  • the one or more cameras 101 may be high resolution cameras.
  • the one or more cameras 101 may be located at one or more positions at the scene for capturing frames from the scene. For example if the scene is from a ski resort, the one or more cameras 101 may capture images from a route of the scene and transmit the images to a server 102. The frames captured by the one or more cameras 101 may partly overlap to ensure a total coverage of the area that is to be recorded.
  • the portable computerized device 103 may include a location detecting unit (not shown), a processor (not shown), a storage (not shown) and a transmitting unit (not shown).
  • the portable computerized device 103 may record data related to the scene, in particular positioning data, and may transmit the recorded data to the server 102.
  • the portable computerized device 103 may be carried by any individual who wishes to personalize an image of a scene.
  • the portable computerized device 103 may be a smart phone.
  • the portable computerized device is described in greater detail in Fig. 3.
  • the server may receive data files from a plurality of portable computerized units.
  • the data files may include location data, inertial parameters, recorded data and images taken from the portable computerized unit during the scene.
  • the server may also receive a plurality of images from a plurality of cameras located in one or more sites.
  • the server may personalize one or more images according to the received data.
  • the server may also get specific customization requests through a web interface.
  • the server may transmit the personalized images to the individuals. For example the server may transmit a link of the personalized image to the portable computerized unit of the individual. In another embodiment the link may be transmitted to an email address of the individual.
  • Fig. 2 is a simplified schematic flowchart of a method of personalizing an image, according to embodiments of the present invention.
  • location data is detected by an application that is installed on a remote computerized device.
  • the remote computerized device may be carried by any of the individuals of the scene.
  • the location data comprises a position of an individual in a scene.
  • the location data comprises GPS data.
  • the location is determined by methods of triangulation, using the WiFi network, or the cellular network, or any combination thereof.
  • the location data comprises a time stamp.
  • the location data is recorded in a file.
  • additional data such as sound, an image taken by the portable computerized device during the scene, the time and date of the recording, or any other data related to the scene and to the individual participating in the scene is recorded in a file.
  • each image is calibrated by a technician when the camera is first positioned on site.
  • the calibration is performed by designating a known position for selected pixels in a frame and then extrapolating the positions of the remaining pixels accordingly. Calibration can also be done by anchoring an object in the frame and extrapolating the location value of the pixels according to the anchor.
  • each image that is transmitted is identified by start and end time.
  • each image can also be identified by site identification and by location at the site. For example the image can be identified by the name of a ski resort and by a location at a ski resort. The image can also be identified by date and by start time and end time of the capturing.
  • the file is transmitted to a server.
  • the server matches the file's geo position with the location within the image.
  • This matching is composed of two parts. The first is matching the time of the location to the time of the relevant frame and the second is finding the location within the specific frame. The matching is done for each frame that is related to the position and the time. The location matching is based on the pre-calibration that was done by the technician.
  • the frame is cropped by the server.
  • the cropping is performed according to the positions recorded in the file that is coupled with the image.
  • the cropping is done for all the frames that are related to the location data.
  • the system crops an area surrounding the pixel in the image that is related to the position.
  • the area can be larger for nearby position and smaller for a far position.
  • whether a position is nearby or far is determined by the calibration process.
  • the cropping also comprises predicting positions of the individual according to the recorded positions. The predicting process is explained in greater detail in Fig. 5.
  • Fig. 3 is a simplified schematic diagram illustrating a portable computerized device configured for personalizing an image, according to embodiments of the present invention.
  • the portable computerized device 103 may be a cellular telephone or a smart telephone or any other device that can be carried by a person and can record position data and transmit the data to a server.
  • the portable computerized device 103 is configured for recording data related to the activity of the user of the portable computerized device in the scene.
  • the portable computerized device 103 is provided to the user at the site.
  • the portable computerized device 103 is a smart phone of the user.
  • the portable computerized device 103 includes a transmitting unit 1031, a location detecting unit 1033, a processor 1034, a camera 1035, a clock 1036, a microphone 1037, a storage 1038 and an inertial sensor 1039.
  • the location detecting unit 1033 may comprise a GPS. In some embodiments, the location detecting unit 1033 detects the location of the user of the portable computerized device. In some embodiments, the location is detected at predefined intervals. For example, the location may be detected every second. In some embodiments the location detecting unit 1033 records time stamps. The detected location is recorded in the storage 1038.
  • the storage 1038 may be flash memory.
  • the camera 1035 is used for capturing a second image.
  • the second image is kept in the storage 1038.
  • the captured image is sent to the server; for providing additional view of the scene.
  • the second image provides a view of the scene from the user point of view.
  • the microphone 1037 may be used for recording sound during the scene.
  • the sound may comprise speech of the user and/or background noise.
  • the recorded data may be kept in the storage 1038.
  • the microphone 1037 may also be used for informing the user of the portable computerized device 103 about the beginning and the end of the recording.
  • the inertial sensor 1039 is used for recording inertial data.
  • the inertial data may be stored in the storage 1038.
  • the inertial sensor 1039 is also used for predicting positions as explained in fig. 5.
  • the inertial data can be provided by sensors integral to the mobile device and can also be derived from consecutive location information provided by the GPS for example.
  • the inertial data can be information representing speed, distance, acceleration, inclination and direction.
  • the transmitting unit 1031 is used for transmitting the data that is recorded in the storage 1038 to a remote computerized device.
  • the transmitting unit 1031 comprises a WiFi transceiver, a WiMax transceiver, a cellular transceiver or any other wired or wireless communication unit.
  • the transmitting unit 1031 transmits the stored data after the completion of the recording automatically or upon user request.
  • Fig. 4 is a simplified schematic diagram illustrating a server configured for personalizing an image, according to embodiments of the present invention.
  • the server 102 receives a plurality of images from an at least one site.
  • the at least one site may be, for example, a ski resort, a running track, a cruise ship, a playground or any other site in which images are being captured and sent to the remote computerized device.
  • the images are captured during a predefined timeframe; for example during a day.
  • the server 102 includes a receiving unit 1021, a transmitting unit 1022 and a processor 1023.
  • the receiving unit 1021 is configured for receiving a file.
  • the file comprises at least one position of the individual in the scene.
  • the receiving unit is also configured for receiving calibrated images captured by cameras in an at least one site.
  • the processor 1023 is configured for personalizing the calibrated images.
  • the personalization is performed by cropping an at least one frame of the image according to the location data in the file.
  • the personalizing is done by selecting the right camera feed according to the location data.
  • the personalization is done both by selecting the right camera feed and by cropping the image (a sketch of this feed selection appears after this list).
  • Fig. 5 is a simplified schematic flowchart of an exemplary scenario of predicting positions, according to embodiments of the present invention.
  • when the frequency of the recording is lower than the frame rate (number of frames per second), a prediction process is used.
  • the frame rate may be 25 frames/s for the PAL (Phase Alternating Line) and SECAM (Sequential Color with Memory) standards, while the GPS may update the location only every second.
  • the prediction process predicts one or more positions according to two known consecutive positions. The prediction provides a smooth video.
  • Fig. 5 shows an example of known positions (501, 504 and 507, respectively) and predicted positions (502, 503, 505 and 506, respectively).
  • the prediction is also performed by using inertial data. The prediction is important for providing continuous flow of the video as the individual is being tracked.
  • Fig. 6 is a simplified schematic flowchart of an exemplary scenario for personalizing an image, according to embodiments of the present invention.
  • steps 610, 620 are performed by a user of an application for personalizing the image.
  • steps 630, 640, 650, 660, 670, 680, 690 and 691 are performed by an application installed on a portable computerized device.
  • the portable computerized device is carried by the user during the scene.
  • the application records data related to the scene, and in particular to the activity of the individual during the scene.
  • the application is activated automatically.
  • the application is activated manually.
  • the data may be recorded in a file and may be transmitted to a server.
  • steps 694, 695, 696 and 697 are performed by a server which may be installed at a server center.
  • the server is located on the site where the cameras are located and performs the personalization related to this site.
  • the server may receive files from the portable computerized devices.
  • the portable computerized devices may be dispersed in a plurality of sites.
  • the server may receive images from at least one camera in an at least one site.
  • the server may personalize images for a plurality of users. Each image may be personalized according to a recorded file that is sent from a portable computerized device.
  • steps 692 and 693 are performed by a system that is located in a site.
  • the system includes a camera for capturing images at the site.
  • the at least one image is a high resolution video film.
  • the captured image is sent to the server.
  • a video file is captured throughout the day and is sent once a day to the service center. Alternatively the video is transmitted in real time.
  • the capturing of the image starts.
  • the image is a video image.
  • the site may have a plurality of cameras. Each camera may capture an image from a different location. In some embodiments there may be an overlap between the images that are captured by a plurality of cameras at the site to ensure continuous capturing of the objects during the track.
  • the user checks if the application is already downloaded.
  • the application is downloaded to the portable computerized device.
  • the portable computerized device is a smart phone.
  • a check is performed to verify that the user is registered to the application.
  • the user is registered to the service with his user name and his email address.
  • the user activates the application by pressing on the start button.
  • the application is activated automatically. Automatic activation may be done, for example, according to GPS position or position calculation that was performed by other means.
  • the user's position is periodically recorded into a file.
  • the user terminates the application by, for example, pressing on a stop key.
  • the termination is performed automatically after a timeout or according to position detection and region of interest that are downloaded to the portable computerized device.
  • the recorded file is transmitted to a server which may be located at a service center or may be located on site.
  • captured videos from all the cameras at the site are transmitted to a local aggregation center.
  • the local aggregation center transmits the images to a service center or to a server located on site.
  • a personalized video may be generated.
  • the user is notified about the generating of the personalized video.
  • the user receives a URL which enables him to access the personalized file.
  • the access to the personalized file is secured by a user name and a password.
  • the user downloads the personalized file.
  • the user shares the file using a social network.
  • the user may also issue request to perform additional operations such as adding additional data or text.
  • Fig. 7 is a simplified schematic diagram of an exemplary application for personalizing an image used in a portable computerized device, according to embodiments of the present invention.
  • the application may be downloaded to a portable computerized device.
  • the application is automatically activated and terminated. Automatic activation of the application may be done by determining a start position by a GPS or other means of the portable computerized device. Automatic termination may be done by determining a final position by a GPS or other means of the portable computerized device.
  • the application may start and terminate manually by the user of the portable computerized device. The user may be notified about the recording by a sign at the beginning of the track or by a voice trigger.
  • a welcome message is displayed.
  • the welcome message is displayed automatically, for example as a result of entering a zone in which the user wants to record a video clip.
  • the welcome message is displayed when the user activates the application.
  • the user is authenticated.
  • the user is informed about the activating of the application.
  • the application records positions in the scene.
  • the application records GPS positions.
  • the user may stop the recording by pressing on the cancel button.
  • the application records other data related to the scene. Such data may be a video captured by the camera of the portable computerized device, sound recording, inertial parameters and the like.
  • the application terminates the recording. Terminating may be done automatically or manually (by pressing on the cancel button as explained at 740).
  • the user may send the recording data to a server by pressing on the send button.
  • the data is sent to the server.
  • the user is notified about the process of sending the data.
  • the user may cancel the sending by pressing on the cancel button.
  • the user is notified about the completion of the sending.
  • the server can start the cropping process.
  • Fig. 8 is a simplified schematic diagram illustrating two cameras (801 and 802) recording a ski track.
  • the drawing shows predefined regions of interest 804 and 805 from which video is to be cropped.
  • Fig. 9 is a simplified schematic diagram illustrating a screen of an application displaying a personalized image.
  • the screen 900 displays a personalized video clip 901.
  • the display of the personalized video clip 901 is synchronized with a display of a second image that has been captured by a camera of the user 902.
  • the display may also include advertisements 903, editing options 904 and option for connecting to social networks 905 and 906.
  • the display includes statistical information, a position on the map or satellite view.
  • the statistical data includes speed, time, distance and the like or a combination thereof.
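As referenced in the item on selecting the right camera feed, the sketch below picks, for a given recorded position, the camera whose predefined region of interest (compare regions 804 and 805 of Fig. 8) contains that position. This is a minimal illustration only: the coverage rectangles, coordinate frame and camera names are invented, and only the idea of choosing a feed by location comes from the text.

```python
from typing import Optional

# Coverage of each camera on the ski track, in local metres (east, north):
# (min_e, min_n, max_e, max_n). Values and names are illustrative only.
CAMERA_COVERAGE = {
    "camera_801": (0.0, 0.0, 120.0, 60.0),
    "camera_802": (100.0, 40.0, 260.0, 140.0),   # slight overlap with camera_801
}

def select_feed(east_m: float, north_m: float) -> Optional[str]:
    """Return a camera whose predefined region of interest contains the position."""
    for name, (e0, n0, e1, n1) in CAMERA_COVERAGE.items():
        if e0 <= east_m <= e1 and n0 <= north_m <= n1:
            return name
    return None   # the position falls outside every region of interest

print(select_feed(30.0, 20.0))    # camera_801
print(select_feed(150.0, 90.0))   # camera_802
```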

Abstract

The invention discloses a system and method for personalizing an image of an individual in a scene by receiving a file containing the position of the individual in the scene and a time stamp, receiving a calibrated image, and coupling the position of the individual to the calibrated image; the image can then be cropped and sent to the user as a personalized image.

Description

A SYSTEM AND METHODS FOR PERSONALIZING AN IMAGE
FIELD AND BACKGROUND OF THE INVENTION
The present invention relates to video and still images, more particularly, but not exclusively to personalizing video clips and still images.
With the spread of the Internet, video clips have become very popular online.
By mid 2006 there were tens of millions of video clips available online. Some websites focus entirely on offering free video clips to users and many others add video clip content to their websites. Video clips are also used in social network services such as Facebook, wherein personal videos related to activities of an individual are uploaded to his personal profile. Video clips may also be used for training purposes. In such cases the clip may be used for analyzing an activity of an individual; for example for analyzing sport activities. Such sport activities may be, for example, skiing, biking, running or playing a game. Other clips may be used for security purposes.
Some places such as ski resorts place cameras on the site. These cameras produce video clips of the visitors of the resort which may be used as a souvenir.
In many cases the video clips contain background details such as other people or objects in the area. There are methods known in the art for personalizing an image.
Moverati provides a system that captures video films on the site. The automated video camera technology captures people moving in their favorite places, and then delivers the videos to a server for editing. The personalization of the film is done by attaching a tag to each individual who wants a personalized video and then identifying the individual in the image by the attached tag.
In other methods known in the art the personalizing is performed by analyzing the video and by detecting an individual in a video according to biometric data. US 2008002031 discloses a method and a system for personalizing video using a tag which is attached to an individual. The tag may include GPS. The method comprises activating the camera according to the movement of the individual.
WO 9810358 discloses tags for personalizing an image. The tag may include an RFID.
WO 2010075430 discloses systems, methods and software for capturing and processing media items, such as digital images or video, associated with a sporting event such as a marathon or bicycle race.
The items may be recognized according to facial recognition.
SUMMARY OF THE INVENTION
The present invention discloses a method for personalizing a calibrated image of an individual in a scene; the method may comprise the steps of:
a. receiving a file comprising at least one position of the individual in the scene and an at least one time stamp;
b. receiving the calibrated image; and
c. coupling between the at least one position and an at least one frame of the calibrated image; thereby providing a personalized frame.
In a preferred embodiment, the method may further comprise receiving a plurality of calibrated images and coupling between the at least one position and an at least one image of the plurality of images; thereby providing a personalized frame. In a preferred embodiment, the method may further comprise receiving a plurality of calibrated images and coupling between the at least one time stamp and an at least one image of the plurality of images; thereby providing a personalized frame.
In a preferred embodiment, the position of the individual may be recorded and transmitted by a smart phone application. The invention discloses a method for personalizing a calibrated image of an individual in a scene; wherein the image is captured by a camera positioned at the scene; the method may comprise the steps of:
a. receiving the calibrated image by a server;
b. recording data in a file by a portable computerized device; the data comprises a position of the individual in the scene and a time stamp related to the position;
c. transmitting the file from the portable computerized device to the server;
d. cropping, by the server, the calibrated image according to the data in the file; thereby providing a personalized image.
In a preferred embodiment, the image may be a video, a video clip, a three dimensional image or a combination thereof.
In a preferred embodiment, the cropping may comprise cropping a large area for a nearby position and cropping a small area for a far position. The cropping may also comprise predicting at least one new position; wherein the predicting the at least one new position comprises determining the at least one new position according to two consecutive positions recorded in the file. The predicting can be based on movement data extracted from the location file or by dedicated detectors or by a combination thereof. The movement may include speed, distance, maximal speed and average speed data, and it may be presented on a screen on top of the personalized image, as an overlay, beside, before, or after the personalized image or using any combination thereof.
In a preferred embodiment, the transmission of the file may be after a termination of the recording.
In a preferred embodiment, the method may further comprise capturing a second image by the portable computerized device during the scene and synchronizing the second image with the personalized image. The synchronization may also be a fusion of the images. The present invention discloses a method for personalizing a calibrated image of an individual in a scene; the method may comprise the steps of;
a. receiving a file; wherein the file comprises at least one position of the individual in the scene and an at least one time stamp;
b. receiving the calibrated image; and
c. cropping the calibrated image according to the file; thereby providing a personalized frame. In a preferred embodiment, the method may further comprise resizing each frame of the calibrated image to the same size; thereby providing a homogenous image.
In a preferred embodiment, the image may be a video, a video clip, a three dimensional image or a combination thereof.
In a preferred embodiment, the cropping may comprise cropping a large area for a nearby position and cropping a small area for a far position. In a preferred embodiment, the position may be received by one member of a group consisting of GPS, WiFi transceiver, Cellular transceiver and a combination thereof.
In a preferred embodiment, the cropping may comprise predicting at least one new position; wherein the predicting the at least one new position comprises determining the at least one new position according to two consecutive positions recorded in the file.
In a preferred embodiment, the method may further comprise receiving audio data captured by the individual during the scene and synchronizing the audio data with the clipped image. The present invention discloses a computer program placed on a magnetic readable media for personalizing a calibrated image of an individual in a scene, the computer program may comprise:
a first program instruction for receiving a file; wherein the file comprises at least one position of the individual in the scene and an at least one time stamp;
a second program instruction for receiving the calibrated image; and a third program instruction for cropping the calibrated image according to the file; thereby providing a personalized frame; wherein the first, second and third program instructions are stored on a computer readable medium.
The present invention discloses an apparatus for personalizing a calibrated image of an individual in a scene, the apparatus may comprise:
i) a receiving unit configured for receiving a file; wherein the file comprises at least one position of the individual in the scene and at least one timestamp; wherein the receiving unit is further configured for receiving the calibrated image; and
ii) a processor configured for clipping the calibrated image according to the file; thereby providing a personalized image.
BRIEF DESCRIPTION OF THE DRAWINGS The invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in order to provide what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice. In the drawings:
Fig. 1 is a simplified schematic diagram illustrating a system for personalizing an image, according to embodiments of the present invention;
Fig. 2 is a simplified schematic flowchart of a method of personalizing an image, according to embodiments of the present invention;
Fig. 3 is a simplified schematic diagram illustrating a portable computerized device configured for personalizing an image, according to embodiments of the present invention;
Fig. 4 is a simplified schematic diagram illustrating a server configured for personalizing an image, according to embodiments of the present invention;
Fig. 5 is a simplified schematic flowchart of an exemplary scenario of predicting positions, according to embodiments of the present invention;
Fig. 6 is a simplified schematic flowchart of an exemplary scenario for personalizing an image, according to embodiments of the present invention;
Fig. 7 is a simplified schematic diagram of an exemplary application for personalizing an image used in a portable computerized device, according to embodiments of the present invention;
Fig. 8 is a simplified schematic diagram illustrating two cameras recording a ski track; and
Fig. 9 is a simplified schematic diagram illustrating a screen of an application displaying a personalized image.
DETAILED DESCRIPTION OF THE EMBODIMENTS The principles and operation of an apparatus and method according to the present invention may be better understood with reference to the drawings and accompanying description.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
The term personalizing an image refers hereinafter to rendering an image of a scene. The image is captured by cameras located in a site related to the scene. The rendered image focuses on a specific individual in the scene. In some cases the personalization may include appending more data related to the individual or to the behavior of the individual at the scene. The scene may start at one place and may end at another place. For example the scene may start at a beginning of a ski course and may end at the end of the ski course.
The term image refers hereinafter to a video film, video clip or a still image.
The image is a two dimensional or a three dimensional image.
The term cropping refers hereinafter to selecting a rectangle within the frame. In some embodiments the cropping includes resizing the rectangle to the targeted frame size. The cropping is done on a frame-by-frame basis, so a rectangle of a different size and position can be selected for each frame of the image.
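As an illustration of this cropping, the following sketch (not part of the original disclosure) crops a rectangle around a target pixel in one frame and resizes it to a fixed output size, making the rectangle larger for nearby subjects and smaller for far ones. It assumes NumPy and OpenCV are available; all names, sizes and the distance-based scaling rule are illustrative assumptions.

```python
import cv2
import numpy as np

def crop_frame(frame: np.ndarray,
               center_xy: tuple,
               distance_m: float,
               out_size=(640, 360),
               base_width_px=800,
               ref_distance_m=20.0) -> np.ndarray:
    """Crop a rectangle around center_xy and resize it to out_size.

    The rectangle is larger for nearby subjects (small distance_m) and
    smaller for far ones, roughly inversely proportional to distance.
    """
    h, w = frame.shape[:2]
    crop_w = int(np.clip(base_width_px * ref_distance_m / max(distance_m, 1.0),
                         160, w))
    crop_h = int(crop_w * out_size[1] / out_size[0])   # keep the output aspect ratio

    cx, cy = center_xy
    x0 = int(np.clip(cx - crop_w // 2, 0, w - crop_w))
    y0 = int(np.clip(cy - crop_h // 2, 0, h - crop_h))

    rect = frame[y0:y0 + crop_h, x0:x0 + crop_w]
    # Resizing every crop to the same size yields a homogeneous video.
    return cv2.resize(rect, out_size, interpolation=cv2.INTER_AREA)

# Example: a synthetic 4K frame, subject near the centre, 15 m away.
frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
personalized = crop_frame(frame, center_xy=(1900, 1100), distance_m=15.0)
print(personalized.shape)   # (360, 640, 3)
```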
According to some embodiments there is provided a system and methods for personalizing an image. The system and methods may be used for providing souvenir image at a ski resort or a cruise or any other tourist attraction or a sport event. The system and methods may also be used for training purposes. The system and methods may also be used for supervising purposes, for example for supervising a behavior of a child in a kindergarten.
According to some embodiments, the system may include one or more cameras for providing images from a scene. The cameras may be high resolution cameras. Using a stationary camera with high resolution enables extracting many different personalized images from a single frame. The personalized images are generated without incorporating complicated mechanical tracking systems. The system also includes one or more portable computerized devices. The portable computerized unit can be carried by any individual who wishes to personalize an image of a scene. The portable computerized devices include a location detecting unit. The location detecting unit is any unit capable of detecting location and time stamp. Examples of a location detecting unit are a Global Positioning System (GPS) receiver, a detecting unit which detects position according to the radio signaling of the cellular network, a detecting unit which detects position according to the radio signaling of the WiFi network, or any combination of the above. The portable computerized unit includes a processor and storage for processing and for storing location data and any other relevant data related to the scene and to the activity of the individual carrying the portable computerized device at the scene. The data may be stored in a file. The portable computerized device may include a transmitting unit for transmitting the stored data to a server. The portable computerized unit may be a cellular telephone.
The server may receive data files from a plurality of portable computerized devices. The data files may include location data, inertial parameters, recorded sound and images taken from the portable computerized device during the scene. The server may also receive a plurality of images from a plurality of cameras located in one or more sites. The server may personalize one or more images according to the received data. The server may transmit the personalized images to the individuals. For example the server may transmit a link of the personalized image to the portable computerized device of the individual. In another embodiment the link may be transmitted to an email address of the individual.
One technical problem dealt with by the embodiments of the disclosure is to personalize an image according to location data. One other technical problem dealt with by the embodiments of the disclosure is to personalize an image for a plurality of individuals. Yet one other technical problem is to enable an individual to personalize a video clip about his activity.
One technical solution is storing, at the portable computerized unit, location data of the position of the individual during the scene; transmitting the location data and at least one image of the scene to a server; calibrating the image and cropping the image according to the calibration and according to the location data. If the image is a video clip, the cropping may be done for each frame of the video. The cropping process may crop a large area for a nearby item and a small area for a far item. In some embodiments the location data is GPS data. In some embodiments the calibration is performed by a technician when the camera is first positioned on site. The calibration can be performed, for example, by designating a known position for selected pixels in a frame and then extrapolating the positions of the remaining pixels accordingly.
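The calibration step can be pictured with the following minimal sketch, which is not taken from the patent: it assumes positions are expressed in a local metric frame (metres east/north of a site origin) and that a simple affine model is adequate for the camera, fitting the model from a few anchor pixels whose ground positions the technician has measured and then projecting any recorded position into the frame. All coordinates and names are illustrative.

```python
import numpy as np

# Anchor points marked by the technician: (east_m, north_m) -> (px, py).
# The coordinates below are made up for illustration only.
geo_anchors = np.array([[0.0, 0.0], [50.0, 0.0], [50.0, 30.0], [0.0, 30.0]])
pix_anchors = np.array([[120.0, 950.0], [1800.0, 900.0], [1700.0, 300.0], [200.0, 330.0]])

def fit_geo_to_pixel(geo: np.ndarray, pix: np.ndarray) -> np.ndarray:
    """Least-squares affine map [e, n, 1] -> [px, py] from at least 3 anchor points."""
    A = np.hstack([geo, np.ones((len(geo), 1))])      # (N, 3)
    M, *_ = np.linalg.lstsq(A, pix, rcond=None)       # (3, 2)
    return M

def project_to_pixel(M: np.ndarray, east_m: float, north_m: float) -> tuple:
    """Project a recorded position into pixel coordinates of the calibrated frame."""
    px, py = np.array([east_m, north_m, 1.0]) @ M
    return float(px), float(py)

M = fit_geo_to_pixel(geo_anchors, pix_anchors)
print(project_to_pixel(M, 25.0, 15.0))   # pixel position of a subject at (25 m, 15 m)
```

For a camera with strong perspective, a homography (for example via OpenCV's cv2.findHomography) would likely fit better than an affine model, but the workflow stays the same: calibrate once when the camera is installed, then reuse the mapping for every position in every location file.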
One other technical problem dealt with by the embodiments of the disclosure is to make the personalization available at any time, regardless of the weather conditions and of the communication infrastructure. Yet one other technical problem is to provide an image personalization method and system which reduces energy consumption.
One other technical solution is to save the location data at the mobile computerized device in a file; to send the data to the server and to perform the personalization at the server according to the data recorded in the file. The file containing the location data and other relevant data that is saved at the computerized device does not need to be sent in real time; it can be sent when the user chooses or when conditions allow. This approach does not require real-time connectivity, which is not available everywhere, may cost money (in the case of cellular connectivity) and consumes a lot of energy, causing the battery of the portable computerized device to run out quickly.
The application does not rely on data streaming, which may suffer from data losses. Saving the data in a file and sending it as a file ensures data integrity.
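The "record to a file, upload later" pattern might look like the sketch below, which sends the complete location file in a single request once recording has ended. The endpoint URL, field name and file name are hypothetical; only the general pattern (one integral file, no streaming) comes from the text, and the requests library is an assumed dependency.

```python
import requests   # third-party HTTP client, assumed available

LOCATION_FILE = "scene_locations.jsonl"                 # written earlier by the recording app
UPLOAD_URL = "https://example.com/api/upload"           # hypothetical server endpoint

def upload_recording(path: str = LOCATION_FILE, url: str = UPLOAD_URL) -> bool:
    """Send the complete location file once recording has finished."""
    with open(path, "rb") as fh:
        # A single multipart upload; if it fails the file is still intact on
        # the device and the upload can simply be retried later.
        response = requests.post(url, files={"location_file": fh}, timeout=30)
    return response.ok

if __name__ == "__main__":
    ok = upload_recording()
    print("upload succeeded" if ok else "upload failed, will retry later")
```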
One other technical problem is to overcome scenarios in which the recorded data does not provide enough information. For example there might be 25 frames per second in a video film while the GPS may provide location data every second.
One other technical solution is to perform a prediction of the movement of the individual and to perform the cropping according to the prediction.
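Assuming 1 Hz GPS fixes and a 25 fps video, one simple realization of this prediction is linear interpolation between consecutive recorded positions so that every frame receives an estimated position, as in the sketch below. The patent also mentions inertial data, which could refine these estimates but is not shown here; the function name and data layout are illustrative.

```python
def interpolate_positions(fixes, fps=25):
    """Estimate a (timestamp, x, y) tuple for every video frame.

    `fixes` is a list of (timestamp_s, x, y) recorded about once per second;
    frames between two fixes get linearly interpolated positions.
    """
    frames = []
    for (t0, x0, y0), (t1, x1, y1) in zip(fixes, fixes[1:]):
        n = max(1, round((t1 - t0) * fps))
        for i in range(n):
            a = i / n                       # 0.0 at the first fix, < 1.0 before the next
            frames.append((t0 + a * (t1 - t0),
                           x0 + a * (x1 - x0),
                           y0 + a * (y1 - y0)))
    frames.append(fixes[-1])
    return frames

# Three 1 Hz fixes (e.g. metres east/north of the site origin) -> 51 frame positions.
fixes = [(0.0, 0.0, 0.0), (1.0, 4.0, 1.0), (2.0, 9.0, 1.5)]
print(len(interpolate_positions(fixes)))    # 51
```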
One other technical problem is to incorporate additional information to the image. Such information includes speed information, sound information recorded on the scene, map of the site including information about the location of the individual on the map, and of his route, and the like.
One other technical solution is to calculate the speed and position of the individual on the map from the location information and to incorporate it within the image. Yet one other technical solution is to extract the sound information from the file, extract the time-stamp information and match it to the relevant video frame.
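Under illustrative assumptions, the sketch below derives speed from two consecutive GPS fixes (haversine distance over elapsed time) and matches a recorded timestamp to the corresponding frame of a video whose start time and frame rate are known; none of the names come from the patent.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_kmh(fix_a, fix_b):
    """fix = (timestamp_s, lat, lon); speed between two consecutive fixes."""
    (t0, la0, lo0), (t1, la1, lo1) = fix_a, fix_b
    return 3.6 * haversine_m(la0, lo0, la1, lo1) / max(t1 - t0, 1e-6)

def frame_index(timestamp_s, video_start_s, fps=25):
    """Frame of a video (started at video_start_s) that corresponds to a timestamp."""
    return int(round((timestamp_s - video_start_s) * fps))

fix_a = (1000.0, 45.9237, 6.8694)
fix_b = (1001.0, 45.9238, 6.8695)
print(round(speed_kmh(fix_a, fix_b), 1), "km/h")
print(frame_index(1001.0, video_start_s=990.0))   # frame 275
```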
One other technical problem is to provide post-production of an unlimited number of different customized images. The customization includes changing the resolution, changing the zoom ratio, and selecting whether to incorporate speed information, sound information or an audio track, a map with the individual's route on it, and the like, all from the same source image. One technical solution is receiving an image including a plurality of individuals, personalizing the image for each individual by cropping the image according to that individual's location file, and customizing the image or plurality of images according to the individual's requirements. In some cases, the individual requirements are stored in the location file.
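To make the "one source, many personalized outputs" idea concrete, the sketch below produces one differently framed crop per individual from the same high-resolution frame, given each individual's pixel position (already obtained from that individual's location file via the calibration mapping). All data and names are invented for illustration.

```python
import numpy as np

def crop(frame, cx, cy, size=400):
    """Square crop of `size` pixels centred on (cx, cy), clamped to the frame."""
    h, w = frame.shape[:2]
    x0 = int(np.clip(cx - size // 2, 0, w - size))
    y0 = int(np.clip(cy - size // 2, 0, h - size))
    return frame[y0:y0 + size, x0:x0 + size]

# One shared high-resolution frame and three individuals' pixel positions.
source_frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
individuals = {
    "user_a": {"pixel": (600, 1500), "zoom": 300},
    "user_b": {"pixel": (2000, 1100), "zoom": 500},
    "user_c": {"pixel": (3300, 900),  "zoom": 400},
}

personalized = {
    name: crop(source_frame, *info["pixel"], size=info["zoom"])
    for name, info in individuals.items()
}
for name, img in personalized.items():
    print(name, img.shape)   # each user gets a differently framed crop of the same frame
```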
Reference is now made to Fig. 1, which is a simplified schematic diagram illustrating a system for personalizing an image, according to embodiments of the present invention. System 100 includes one or more cameras 101 configured for providing images from a scene, one or more portable computerized devices 103 configured for recording data from the scene, and one or more servers 102 configured for personalizing an image.
The one or more cameras 101 may be high resolution cameras. The one or more cameras 101 may be located at one or more positions at the scene for capturing frames from the scene. For example, if the scene is at a ski resort, the one or more cameras 101 may capture images from a route of the scene and transmit the images to a server 102. The frames captured by the one or more cameras 101 may partly overlap to ensure full coverage of the area that is to be recorded.
The portable computerized device 103 may include a location detecting unit (not shown), a processor (not shown), a storage (not shown) and a transmitting unit (not shown). The portable computerized device 103 may record data related to the scene, in particular positioning data, and may transmit the recorded data to the server 102. The portable computerized device 103 may be carried by any individual who wishes to personalize an image of a scene. The portable computerized device 103 may be a smart phone. The portable computerized device is described in greater detail in Fig. 3.
The server may receive data files from a plurality of portable computerized units. The data files may include location data, inertial parameters, recorded data and images taken by the portable computerized unit during the scene. The server may also receive a plurality of images from a plurality of cameras located in one or more sites. The server may personalize one or more images according to the received data. The server may also receive specific customization requests through a web interface. The server may transmit the personalized images to the individuals. For example, the server may transmit a link to the personalized image to the portable computerized unit of the individual. In another embodiment the link may be transmitted to an email address of the individual.
Fig. 2 is a simplified schematic flowchart of a method of personalizing an image, according to embodiments of the present invention.
At 201 location data is detected by an application that is installed on a portable computerized device. The portable computerized device may be carried by any of the individuals in the scene. The location data comprises a position of an individual in a scene. In some embodiments the location data comprises GPS data. In some embodiments the location is determined by triangulation methods, using the WiFi network, the cellular network, or any combination thereof. The location data comprises a time stamp.
At 202 the location data is recorded in a file. In some embodiments, additional data such as sound, an image taken by the portable computerized device during the scene, the time and date of the recording, or any other data related to the scene and to the individual participating in it is recorded in the file.
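One possible on-device layout for such a file is a line-per-fix JSON log, sketched below. The field names, the file name and the optional extra fields are illustrative assumptions, not a format defined by the disclosure.

```python
import json
import time

def append_fix(path, lat, lon, extra=None):
    """Append one time-stamped location record to the file kept on the device."""
    record = {"t": time.time(), "lat": lat, "lon": lon}
    if extra:                       # e.g. inertial readings or a sound-clip marker
        record.update(extra)
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage while the application is recording:
append_fix("scene_20121114.jsonl", 46.0001, 7.7502, {"speed_kmh": 38.5})
```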
At 203 images that have been captured by cameras at a site are transmitted to the server. In some embodiments each image is calibrated by a technician when the camera is first positioned on site. In some embodiments, the calibration is performed by designating the known positions of selected pixels in a frame and then extrapolating the positions of the remaining pixels accordingly. Calibration can also be done by anchoring an object in the frame and extrapolating the location values of the pixels according to the anchor.
In some embodiments each image that is transmitted is identified by a start and an end time. In some cases each image can also be identified by a site identification and by a location at the site. For example, the image can be identified by the name of a ski resort and by a location at the ski resort. The image can also be identified by the date and by the start time and end time of the capturing.
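For illustration, the identification data accompanying each transmitted image could be grouped as in the sketch below; the field names and sample values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ClipMetadata:
    site_id: str        # e.g. the name of the ski resort
    location_id: str    # camera location within the site
    date: str           # capture date
    start_time: str     # capture start time
    end_time: str       # capture end time

clip = ClipMetadata("example-resort", "slope-cam-2", "2012-11-14", "09:00:00", "17:00:00")
print(clip)
```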
At 204, the file is transmitted to a server.
At 205, the server matches the geo-positions in the file with locations within the image. The matching is composed of two parts: the first is matching the time of the location to the time of the relevant frame, and the second is finding the location within that specific frame. The matching is done for each frame that is related to a recorded position and time. The location matching is based on the calibration previously performed by the technician.
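A simplified sketch of this two-part matching, assuming a known clip start time, a fixed frame rate and a calibration function such as the one sketched earlier (all names and values are illustrative):

```python
def frame_index(fix_time, clip_start_time, fps=25):
    """Index of the frame captured closest to a GPS time stamp."""
    return round((fix_time - clip_start_time) * fps)

def match_fix_to_frame(fix, clip_start_time, position_to_pixel, fps=25):
    """Return (frame index, pixel) for one time-stamped fix.

    position_to_pixel is the calibration mapping established by the technician.
    """
    t, (lat, lon) = fix
    return frame_index(t, clip_start_time, fps), position_to_pixel(lat, lon)

# Hypothetical linear calibration and a fix taken 12.48 s into the clip:
to_pixel = lambda lat, lon: (int((lon - 7.7490) * 5e5), int((46.0005 - lat) * 5e5))
print(match_fix_to_frame((12.48, (46.0001, 7.7500)), 0.0, to_pixel))   # (312, (500, 200))
```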
At 206, the frame is cropped by the server. The cropping is performed according to the positions recorded in the file that is coupled with the image. The cropping is done for all the frames that are related to the location data. For each position, the system crops an area surrounding the pixel in the image that is related to the position. The area can be larger for a nearby position and smaller for a far position; whether a position is nearby or far is determined by the calibration process. In some embodiments the cropping also comprises predicting positions of the individual according to the recorded positions. The prediction process is explained in greater detail in Fig. 5.
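A minimal sketch of such distance-dependent cropping, assuming frames are decoded into NumPy arrays and that the calibration supplies an estimated camera-to-position distance; the window sizes and the distance threshold are arbitrary example values.

```python
import numpy as np

def crop_around(frame, centre_px, distance_m, near_m=20.0, near_size=600, far_size=250):
    """Crop a window around the individual; nearer positions get a larger window."""
    size = near_size if distance_m <= near_m else far_size
    x, y = centre_px
    h, w = frame.shape[:2]
    left = max(0, min(w - size, x - size // 2))   # keep the window inside the frame
    top = max(0, min(h - size, y - size // 2))
    return frame[top:top + size, left:left + size]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in for a decoded video frame
print(crop_around(frame, (960, 540), distance_m=12.0).shape)   # (600, 600, 3)
```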
Fig. 3 is a simplified schematic diagram illustrating a portable computerized device configured for personalizing an image, according to embodiments of the present invention. The portable computerized device 103 may be a cellular telephone, a smart phone or any other device that can be carried by a person, record position data and transmit the data to a server. The portable computerized device 103 is configured for recording data related to the activity of the user of the portable computerized device in the scene. In some embodiments, the portable computerized device 103 is provided to the user at the site. In some other embodiments, the portable computerized device 103 is the user's own smart phone.
The portable computerized device 103 includes a transmitting unit 1031, a location detecting unit 1033, a processor 1034, a camera 1035, a clock 1036, a microphone 1037, a storage 1038 and an inertial sensor 1039.
The location detecting unit 1033 may comprise a GPS. In some embodiments, the location detecting unit 1033 detects the location of the user of the portable computerized device. In some embodiments, the location is detected at predefined intervals. For example, the location may be detected every second. In some embodiments the location detecting unit 1033 records time stamps. The detected location is recorded in the storage 1038. The storage 1038 may be flash memory.
The camera 1035 is used for capturing a second image. In some embodiments, the second image is kept in the storage 1038. The captured image is sent to the server to provide an additional view of the scene; the second image provides a view of the scene from the user's point of view. The microphone 1037 may be used for recording sound during the scene. The sound may comprise speech of the user and/or background noise. The recorded data may be kept in the storage 1038. In some cases, in which the recording is started automatically, the microphone 1037 may also be used for informing the user of the portable computerized device 103 about the beginning and the end of the recording.
The inertial sensor 1039 is used for recording inertial data. The inertial data may be stored in the storage 1038. In some embodiments the inertial sensor 1039 is also used for predicting positions, as explained in Fig. 5. The inertial data can be provided by sensors integral to the mobile device and can also be derived from consecutive location readings provided, for example, by the GPS. The inertial data can be information representing speed, distance, acceleration, inclination and direction.
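For illustration, direction and acceleration could be derived from consecutive fixes roughly as follows, complementing the speed computation sketched earlier; the bearing formula is standard and the sample coordinates are assumptions.

```python
import math

def bearing_deg(p1, p2):
    """Initial bearing, in degrees clockwise from north, between two (lat, lon) fixes."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    y = math.sin(lon2 - lon1) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(lon2 - lon1)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def acceleration_ms2(speed_a_ms, speed_b_ms, dt_s):
    """Mean acceleration over one recording interval."""
    return (speed_b_ms - speed_a_ms) / dt_s

print(round(bearing_deg((46.0000, 7.7500), (46.0001, 7.7501)), 1))   # roughly north-east
print(acceleration_ms2(10.0, 11.5, 1.0))                             # 1.5 m/s^2
```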
The transmitting unit 1031 is used for transmitting the data that is recorded in the storage 1038 to a remote computerized device. The transmitting unit 1031 comprises a WiFi transceiver, a WiMax transceiver, a cellular transceiver, or any other wired or wireless communication unit. The transmitting unit 1031 transmits the stored data after the completion of the recording, either automatically or upon user request.
Fig. 4 is a simplified schematic diagram illustrating a server configured for personalizing an image, according to embodiments of the present invention. The server 102 receives a plurality of images from at least one site. The at least one site may be, for example, a ski resort, a running track, a cruise ship, a playground or any other site in which images are being captured and sent to the server. The images are captured during a predefined timeframe, for example during a day.
The server 102 includes a receiving unit 1021, a transmitting unit 1022 and a processor 1023.
The receiving unit 1021 is configured for receiving a file. The file comprises at least one position of the individual in the scene. The receiving unit is also configured for receiving calibrated images captured by cameras in at least one site.
The processor 1023 is configured for personalizing the calibrated images. The personalization is performed by cropping at least one frame of the image according to the location data in the file. In some embodiments, the personalization is done by selecting the right camera feed according to the location data. In another embodiment the personalization is done both by selecting the right camera feed and by cropping the image.
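A sketch of how the right camera feed might be selected from the location data, assuming each camera's calibrated coverage is stored as a rectangular latitude/longitude region; the camera identifiers echo the two cameras of Fig. 8, but the coverage values themselves are assumptions.

```python
def select_camera(position, coverage):
    """Pick the camera whose calibrated coverage contains the given (lat, lon) position."""
    lat, lon = position
    for cam_id, ((lat_min, lat_max), (lon_min, lon_max)) in coverage.items():
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return cam_id
    return None   # position outside every calibrated region

coverage = {
    "camera-801": ((45.9995, 46.0005), (7.7490, 7.7502)),
    "camera-802": ((45.9985, 45.9996), (7.7500, 7.7512)),
}
print(select_camera((46.0001, 7.7495), coverage))   # camera-801
```

In an overlap region such as 803 of Fig. 8 both cameras match; a real system could then, for example, prefer the feed in which the individual appears closer to the frame centre.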
Fig. 5 is a simplified schematic flowchart of an exemplary scenario of predicting positions, according to embodiments of the present invention.
In some cases, when the frequency of the recording is lower than the frame rate (number of frames per second), a prediction process is used. For example, the frame rate may be 25 frames/s for the PAL (Phase Alternating Line) and SECAM (Sequential Color with Memory) standards, while the GPS may update the location only every second. In some embodiments the prediction process predicts one or more positions according to two known consecutive positions. The prediction provides a smooth video. Fig. 5 shows an example of known positions (501, 504 and 507) and predicted positions (502, 503, 505 and 506). In some embodiments the prediction is also performed using inertial data. The prediction is important for providing a continuous flow of the video as the individual is being tracked.
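A minimal sketch of such a prediction by linear interpolation between two known consecutive fixes, assuming a 25 frames/s video and 1 Hz GPS fixes; a real implementation might additionally blend in the inertial data mentioned above.

```python
def interpolate_track(fixes, fps=25):
    """Expand 1 Hz GPS fixes into per-frame positions by linear interpolation.

    fixes -- list of (timestamp_s, (lat, lon)) sorted by time, e.g. one fix per second.
    Returns a list of (timestamp_s, (lat, lon)) at the video frame rate.
    """
    track = []
    for (t0, (lat0, lon0)), (t1, (lat1, lon1)) in zip(fixes, fixes[1:]):
        steps = max(1, round((t1 - t0) * fps))
        for i in range(steps):
            a = i / steps
            track.append((t0 + a * (t1 - t0),
                          (lat0 + a * (lat1 - lat0), lon0 + a * (lon1 - lon0))))
    track.append(fixes[-1])
    return track

fixes = [(0.0, (46.0000, 7.7500)), (1.0, (46.0001, 7.7501)), (2.0, (46.0002, 7.7503))]
print(len(interpolate_track(fixes)))   # 51 per-frame positions for 2 seconds at 25 frames/s
```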
Fig. 6 is a simplified schematic flowchart of an exemplary scenario for personalizing an image, according to embodiments of the present invention.
In some embodiments, steps 610, 620 are performed by a user of an application for personalizing the image.
In some embodiments, steps 630, 640, 650, 660, 670, 680, 690 and 691 are performed by an application installed on a portable computerized device. The portable computerized device is carried by the user during the scene. The application records data related to the scene, and in particular to the activity of the individual during the scene. In some embodiments the application is activated automatically. In some other embodiments the application is activated manually. The data may be recorded in a file and may be transmitted to a server.
In some embodiments steps 694, 695, 696 and 697 are performed by a server which may be installed at a service center. In some embodiments, the server is located on the site where the cameras are located and performs the personalization related to that site. The server may receive files from the portable computerized devices. The portable computerized devices may be dispersed over a plurality of sites. The server may receive images from at least one camera in at least one site. The server may personalize images for a plurality of users. Each image may be personalized according to a recorded file that is sent from a portable computerized device.
In some embodiments steps 692 and 693 are performed by a system that is located in a site. The system includes a camera for capturing images at the site. In some embodiments the at least one image is a high resolution video film. The captured image is sent to the server. In some embodiments a video file is captured throughout the day and is sent once a day to the service center. Alternatively the video is transmitted in real time.
At 692, which may be performed at the beginning of the day, the capturing of the image starts. In some embodiments the image is a video image. The site may have a plurality of cameras. Each camera may capture an image from a different location. In some embodiments there may be an overlap between the images captured by the plurality of cameras at the site, to ensure continuous capturing of the objects along the track.
At 610, the user checks if the application is already downloaded.
At 620, the application is downloaded to the portable computerized device. In some embodiments, the portable computerized device is a smart phone.
At 630 a check is performed to verify that the user is registered to the application.
At 640, which is performed if the user is not already registered, the user is registered to the service with his user name and his email address.
At 650, the user activates the application by pressing the start button. In some other embodiments the application is activated automatically. Automatic activation may be done, for example, according to a GPS position or a position calculated by other means.
At 660, the user's position is periodically recorded into a file.
At 670, the user terminates the application by, for example, pressing a stop key. In some embodiments the termination is performed automatically after a timeout, or according to position detection and a region of interest that is downloaded to the portable computerized device.
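For illustration, automatic activation and termination against a downloaded region of interest could be decided roughly as sketched below; the rectangular region, the timeout and the function names are assumptions made for the example.

```python
def in_region(position, region):
    """True if a (lat, lon) position lies inside a rectangular region of interest."""
    (lat, lon), ((lat_min, lat_max), (lon_min, lon_max)) = position, region
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def should_record(position, region, recording, idle_s, timeout_s=60.0):
    """Start recording when the user enters the region; stop after the user has
    been outside it for more than timeout_s seconds."""
    if in_region(position, region):
        return True
    return recording and idle_s < timeout_s

roi = ((45.9985, 46.0005), (7.7490, 7.7512))   # hypothetical slope boundary
print(should_record((46.0001, 7.7500), roi, recording=False, idle_s=0.0))   # True
```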
At 680, the recorded file is transmitted to a server which may be located at a service center or may be located on site. At 693, captured videos from all the cameras at the site are transmitted to a local aggregation center.
At 694, the local aggregation center transmits the images to a service center or to a server located on site.
At 695, which may be performed after receiving the captured video and the recorded file, a personalized video may be generated.
At 696, the user is notified about the generating of the personalized video. In some embodiments, the user receives a URL which enables him to access the personalized file. In some embodiments the access to the personalized file is secured by a user name and a password.
At 690, the user downloads the personalized file.
At 691, the user shares the file using a social network. The user may also issue a request to perform additional operations, such as adding additional data or text.
At 697, the additional operation is performed.
Fig. 7 is a simplified schematic diagram of an exemplary application for personalizing an image used in a portable computerized device, according to embodiments of the present invention. The application may be downloaded to a portable computerized device. In some embodiments the application is automatically activated and terminated. Automatic activation of the application may be done by determining a start position by a GPS or other means of the portable computerized device. Automatic termination may be done by determining a final position by a GPS or other means of the portable computerized device. In some other embodiments, the application may be started and terminated manually by the user of the portable computerized device. The user may be notified about the recording by a sign at the beginning of the track or by a voice trigger.
At 710 a welcome message is displayed. In some embodiments the welcome message is displayed automatically, for example as a result of entering a zone in which the user wants to record a video clip. In some other embodiments the welcome message is displayed when the user activates the application.
At 720 the user is authenticated.
At 730, the user is informed about the activation of the application.
At 740, the application records positions in the scene. In some embodiments the application records GPS positions. The user may stop the recording by pressing the cancel button. In some embodiments the application records other data related to the scene. Such data may be a video captured by the camera of the portable computerized device, a sound recording, inertial parameters and the like.
At 750, the application terminates the recording. Termination may be done automatically or manually (by pressing the cancel button, as explained at 740). The user may send the recorded data to a server by pressing the send button.
At 760, the data is sent to the server. The user is notified about the progress of the sending and may cancel it by pressing the cancel button. At 770, the user is notified about the completion of the sending. The server can then start the cropping process.
Fig. 8 is a simplified schematic diagram illustrating two cameras (801 and 802) recording a ski track. In the figure there is an overlap 803 between the video taken by camera 801 and the video taken by camera 802. The drawing also shows predefined regions of interest 804 and 805 from which video is to be cropped.
Fig. 9 is a simplified schematic diagram illustrating a screen of an application displaying a personalized image. The screen 900 displays a personalized video clip 901. The display of the personalized video clip 901 is synchronized with the display of a second image 902 that has been captured by the user's camera. The display may also include advertisements 903, editing options 904 and options 905 and 906 for connecting to social networks. In some embodiments, the display includes statistical information, a position on a map or a satellite view. In some embodiments the statistical data includes speed, time, distance and the like, or a combination thereof.

Claims

WHAT IS CLAIMED IS:
1. A method for personalizing a calibrated image of an individual in a scene; the method comprising the steps of:
a. receiving a file; wherein the file comprises at least one position of the individual in the scene and an at least one time stamp;
b. receiving the calibrated image; and
c. coupling between the information in the file to at least one corresponding frame of the calibrated image; thereby extracting a personalized image.
2. The method of claim 1, further comprises receiving a plurality of calibrated images and coupling between the at least one position and an at least one image of the plurality of images; thereby providing a personalized frame.
3. The method of claim 1, further comprising receiving a plurality of calibrated images and coupling between the at least one time stamp and at least one image of the plurality of images; thereby providing a personalized frame.
4. The method of claim 1, wherein the position with respect to time of the individual is recorded and transmitted by a smart phone application.
5. A method for personalizing a calibrated image of an individual in a scene; wherein the image is captured by a camera positioned at the scene; the method comprising the steps of:
a. receiving the calibrated image by a server;
b. recording data in a file by a portable computerized device; the data comprises a position of the individual in the scene and a time stamp related to the position;
c. transmitting the file from the portable computerized device to the server; and
d. cropping, by the server, the calibrated image according to the data in the file; thereby providing a personalized image.
6. The method of claim 5; wherein the image is a video clip.
7. The method of claim 5, wherein the cropping comprises cropping a large area for a nearby position and cropping a small area for a far position.
8. The method of claim 5, wherein the cropping field is dependent on predefined regions of interest.
9. The method of claim 5, wherein the cropping field is dependent on speed, acceleration and directional inputs or on combinations thereof.
10. The method of claim 5, wherein the cropping comprises predicting at least one new position; wherein the predicting the at least one new position comprises determining the at least one new position according to two consecutive positions recorded in the file.
11. The method of claim 10, wherein the predicting is according to inertial data.
12. The method of claim 4, wherein the transmission of the file is after a termination of the recording.
13. The method of claim 1, further comprising recording inertial data related to the individual in the scene, transmitting the inertial data to the server and presenting the inertial data on a screen.
14. The method of claim 1, further comprising capturing a second image by the portable computerized device during the scene and synchronizing the second image with the personalized image.
15. The method of claim 5, further comprises resizing each frame of the calibrated image to the same size; thereby providing a homogenous image.
16. The method of claim 5, wherein the position is received by one member of a group consisting of GPS, WiFi transceiver, Cellular transceiver and a combination thereof.
17. The method of claim 5, wherein the position of the individual is recorded and transmitted by a smart phone application.
18. The method of claim 5, further comprising receiving audio data captured by the individual during the scene and synchronizing the audio data with the clipped image.
19. A computer program placed on a magnetic readable media for personalizing a calibrated image of an individual in a scene, the computer program comprising:
a first program instruction for receiving a file; wherein the file comprises at least one position of the individual in the scene and an at least one time stamp;
a second program instruction for receiving the calibrated image; and a third program instruction for cropping the calibrated image according to the file; thereby providing a personalized frame; wherein the first, second and third program instructions are stored on a computer readable medium.
20. An apparatus for personalizing a calibrated image of an individual in a scene, the apparatus comprising:
a. a receiving unit configured for receiving a file; wherein the file comprises at least one position of the individual in the scene and at least one time stamp; wherein the receiving unit is further configured for receiving the calibrated image; and
b. a processor configured for extracting the calibrated image
according to the file; thereby providing a personalized image.
PCT/IL2012/050457 2011-11-27 2012-11-14 A system and methods for personalizing an image WO2013076720A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161563779P 2011-11-27 2011-11-27
US61/563,779 2011-11-27

Publications (1)

Publication Number Publication Date
WO2013076720A1 true WO2013076720A1 (en) 2013-05-30

Family

ID=48469238

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2012/050457 WO2013076720A1 (en) 2011-11-27 2012-11-14 A system and methods for personalizing an image

Country Status (1)

Country Link
WO (1) WO2013076720A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090041298A1 (en) * 2007-08-06 2009-02-12 Sandler Michael S Image capture system and method
US20090087161A1 * 2007-09-28 2009-04-02 Gracenote, Inc. Synthesizing a presentation of a multimedia event
WO2011001180A1 (en) * 2009-07-01 2011-01-06 E-Plate Limited Video acquisition and compilation system and method of assembling and distributing a composite video
US20110090344A1 (en) * 2009-10-21 2011-04-21 Pvi Virtual Media Services, Llc Object Trail-Based Analysis and Control of Video
EP2388752A1 (en) * 2010-03-31 2011-11-23 Disney Enterprises, Inc. Predicting object location in a video

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9883134B2 (en) 2014-07-02 2018-01-30 Amer Sports Digital Services Oy System, a method, a wearable digital device and a recording device for remote activation of a storage operation of pictorial information
US10362263B2 (en) 2014-07-02 2019-07-23 Amer Sports Digital Services Oy System and method for remote activation of a storage operation of pictorial information
US20170337428A1 (en) * 2014-12-15 2017-11-23 Sony Corporation Information processing method, image processing apparatus, and program
US10984248B2 (en) * 2014-12-15 2021-04-20 Sony Corporation Setting of input images based on input music
WO2017051063A1 (en) * 2015-09-23 2017-03-30 Nokia Technologies Oy Video content selection
US10468066B2 (en) 2015-09-23 2019-11-05 Nokia Technologies Oy Video content selection
CN111788827A (en) * 2017-12-22 2020-10-16 亚玛芬体育数字服务公司 System and method for remotely activating storage operation of picture information

Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 12851797; Country of ref document: EP; Kind code of ref document: A1.
NENP Non-entry into the national phase. Ref country code: DE.
122 Ep: PCT application non-entry in European phase. Ref document number: 12851797; Country of ref document: EP; Kind code of ref document: A1.