US20100283843A1 - Multiple resolution video network with eye tracking based control - Google Patents
- Publication number: US20100283843A1 (application US 12/669,685)
- Authority: US (United States)
- Prior art keywords: resolution, image, camera, display device, module
- Legal status: Abandoned (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- H04N 7/12 — Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
- H04N 23/63 — Control of cameras or camera modules by using electronic viewfinders
- H04N 23/661 — Transmitting camera control signals through networks, e.g. control via the Internet
- H04N 23/667 — Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
- H04N 23/90 — Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
- H04N 7/181 — Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
- G06V 40/161 — Human faces: detection, localisation, normalisation
Definitions
- This application discloses an invention which is related, generally and in various embodiments, to a multiple resolution video network with eye tracking based control.
- Video networks are becoming more and more commonplace, including those utilizing digital cameras in the security and surveillance fields. For most security and surveillance applications, better results are generally realized when more high resolution cameras are included in the video network. More cameras may be utilized to cover a larger geographic area, increase the number of views associated with a particular area, decrease the number of “blind” spots, etc.
- However, the maximum bandwidth of the video network often limits the number of high resolution cameras which can effectively be included in the video network.
- The amount of bandwidth generally needed to transmit high resolution images (e.g., 640×480 pixels) from a high resolution camera at a high frame rate (e.g., 30 frames per second) and at a low compression rate percentage (e.g., 10%) is on the order of approximately nine Megabits per second.
- Depending on the desired resolution, frame rate, and compression rate percentage, the video network may not be able to support even a single high resolution camera.
- For example, the approximately nine Megabits per second of bandwidth needed in the above example far exceeds the capacity of current Bluetooth technology, which is only on the order of approximately three Megabits per second.
- The capacity problem is not limited to video networks which include wireless channels.
- For a video network which includes twelve high resolution cameras, the bandwidth generally needed to concurrently transmit high resolution images (e.g., 640×480 pixels) from the twelve cameras at a high frame rate (e.g., 30 frames per second) and at a low compression rate percentage (e.g., 10%) is on the order of approximately one hundred eight Megabits per second, which exceeds the capacity of traditional Ethernet cable, which is only on the order of approximately one hundred Megabits per second.
- Consequently, video networks utilizing traditional Ethernet cable are often limited to fewer than twelve high resolution cameras.
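The arithmetic behind these figures can be sketched as follows. The value of ten effective bits per pixel is an illustrative assumption chosen so the result matches the approximately nine Megabits per second figure above; it is not a value stated in the application.

```python
def stream_mbps(width, height, bits_per_pixel, fps, compression_ratio):
    """Approximate bandwidth, in megabits per second, of one camera stream
    whose compressed size is `compression_ratio` times its raw size."""
    return width * height * bits_per_pixel * fps * compression_ratio / 1e6

# Illustrative assumption: ~10 effective bits per pixel before compression.
per_camera = stream_mbps(640, 480, 10, 30, 0.10)  # ~9.2 Mbit/s per camera
twelve_cameras = 12 * per_camera                  # ~110 Mbit/s, beyond 100 Mbit/s Ethernet
```

The same formula shows why twelve such cameras overwhelm a 100 Mbit/s link while a single camera already exceeds a roughly 3 Mbit/s Bluetooth channel.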
- This application discloses a system. The system includes a computing device configured for communication with a plurality of multiple resolution cameras and with a display device. The computing device includes a camera resolution module configured for instructing at least one of the multiple resolution cameras to operate at a first resolution at a first period of time and at a second resolution at a second period of time. The first resolution is different than the second resolution.
- This application also discloses a method. The method is implemented at least in part by a computing device, and includes receiving a first image from a multiple resolution camera at a first resolution, generating a change of resolution instruction, sending the change of resolution instruction to the multiple resolution camera, and receiving a second image from the multiple resolution camera at a second resolution. The second resolution is different than the first resolution.
- Aspects of the invention may be implemented by a computing device and/or a computer program stored on a computer-readable medium. The computer-readable medium may comprise a disk, a device, and/or a propagated signal.
- FIG. 1 illustrates various embodiments of a system;
- FIG. 2 illustrates various embodiments of a display device of the system of FIG. 1;
- FIG. 3 illustrates various embodiments of a control system of the system of FIG. 1;
- FIG. 4 illustrates various embodiments of another system;
- FIG. 5 illustrates various embodiments of a method for controlling the data flow rate of a video network; and
- FIG. 6 illustrates various embodiments of another method for controlling the data flow rate of a video network.
- FIG. 1 illustrates various embodiments of a system 10 .
- The system 10 includes a plurality of multiple resolution video cameras 12, a control system 14 in communication with the cameras 12, and a display device 16 in communication with the control system 14.
- the system 10 may include any number of cameras 12 .
- Each multiple resolution video camera 12 is configured for operation at more than one resolution, and includes a resolution selection module 18 which is configured to switch the camera 12 to a given resolution.
- the cameras 12 may be embodied as any suitable multiple resolution cameras.
- the cameras 12 may be embodied as cameras similar to network cameras manufactured by Axis Communications AB of Lund, Sweden.
- the resolution of each camera 12 may be dynamically controlled to switch from one resolution to another.
- a given camera 12 may operate at a first resolution at a first time period, and at a second resolution at a second time period.
- For example, the given camera 12 may operate at a resolution of 640×480 pixels at a first time period, and at a resolution of 320×240 pixels at a second time period.
- The multiple resolution video cameras 12 may be configured for operation at resolutions other than 640×480 and 320×240.
- the multiple resolution cameras 12 may be configured for operation at more than two different resolutions (e.g., at high, medium, and low resolutions).
- the cameras 12 are configured to capture images (i.e., frames) at either the first resolution or at the second resolution, and to send the captured images to the control system 14 .
- the cameras 12 are also configured to send the captured images to the control system 14 at any suitable frame rate.
- the cameras 12 may operate to send the captured images to the control system 14 at a frame rate of thirty frames per second.
- As used herein, the term “image” can mean a single image (i.e., a single frame) or a plurality of images (i.e., a plurality of frames).
- the cameras 12 may also be configured to generate images associated with the captured images, and to send the associated images to the control system 14 .
- the cameras 12 are in communication with the control system 14 via a network 20 .
- the cameras 12 and the control system 14 each include hardware and/or software components for communicating with the network 20 and with each other.
- the cameras 12 and the control system 14 may be structured and arranged to communicate through the network 20 via wired and/or wireless pathways using various communication protocols (e.g., HTTP, TCP/IP, UDP, WAP, WiFi, Bluetooth) and/or to operate within or in concert with one or more other communications systems.
- The network 20 may include any type of delivery system, including, but not limited to, a local area network (e.g., Ethernet), a wide area network (e.g., the Internet and/or World Wide Web), a telephone network (e.g., analog, digital, wired, wireless, PSTN, ISDN, GSM, GPRS, and/or xDSL), a packet-switched network, a radio network, a television network, a cable network, a satellite network, and/or any other wired or wireless communications network configured to carry data.
- the network 20 may include elements, such as, for example, intermediate nodes, proxy servers, routers, switches, and adapters configured to direct and/or deliver data.
- the display device 16 may be embodied as any suitable display device.
- the display device 16 is configured to display the images sent by the respective cameras 12 , and the images may be displayed by the display device 16 in any suitable arrangement.
- the display device 16 may display a number of images at one resolution (e.g., a low resolution), and at least one other image at a different resolution (e.g., a high resolution). For example, as shown in FIG. 2 , a “high” resolution image may be displayed on the “left” side of the display device 16 and “low” resolution images may be displayed on the “right” side of the display device 16 .
- a “high” resolution image may be displayed proximate a center of the display device 16 and “low” resolution images may be displayed around or proximate the “high” resolution image.
- the display device 16 may display the images in many other arrangements.
- the display device 16 may also be configured to display a composite image which includes a high resolution portion and a low resolution portion.
- the system 10 may include any number of display devices 16 .
- the system 10 may include two display devices 16 —one for displaying the “high” resolution image and the other for displaying the “low” resolution images.
- FIG. 3 illustrates various embodiments of the control system 14 of FIG. 1 .
- the control system 14 includes an eye tracking device 22 , and a computing device 24 which is in communication with the plurality of cameras 12 , the display device 16 , and the eye tracking device 22 .
- the control system 14 may include any number of eye tracking devices 22 and any number of computing devices 24 .
- the system 10 may include one eye tracking device 22 for each display device 16 .
- the eye tracking device 22 may be embodied as any suitable eye tracking device.
- the eye tracking device 22 may be embodied as or similar to the EyeTech TM2 model manufactured by EyeTech Digital Systems, Inc. of Mesa, Arizona.
- a first infrared light 22 a of the eye tracking device 22 is positioned proximate a first edge (e.g., a “left” edge) of the display device 16 and a second infrared light 22 b of the eye tracking device 22 is positioned proximate a second edge (e.g., a “right” edge) of the display device 16 .
- The first and second infrared lights 22 a, 22 b are utilized to detect and/or track the eye position of a person who is viewing the display device 16.
- the computing device 24 includes a display module 26 , an eye tracking module 28 , and a camera resolution module 30 .
- the display module 26 is in communication with the display device 16 , and is configured for delivering images sent from the cameras 12 to the display device 16 .
- Because each individual camera 12 is configured for operation at more than one resolution, it is understood that the display module 26 may deliver images of different resolutions to the display device 16 at a given time. For example, in a system 10 with four cameras 12, the display module 26 may deliver the images sent from one of the four cameras 12 to the display device 16 at a first resolution (e.g., at a “high” resolution) and the respective images sent from the other three cameras 12 to the display device 16 at a second resolution (e.g., at a “low” resolution).
- The eye tracking module 28 is in communication with the eye tracking device 22, and is configured for associating an individual camera 12 with the eye position of a person who is viewing the display device 16.
- the eye tracking module 28 associates the position of the person's eye with a position on the display device 16 , associates the position on the display device 16 with an image on the display device 16 , and associates the image on the display device 16 with an individual camera 12 .
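The chain of associations performed by the eye tracking module 28 can be sketched as a lookup from a gaze point to the on-screen tile showing each camera's image. The tile layout below is a hypothetical example, not one taken from the application.

```python
def camera_for_gaze(gaze_x, gaze_y, tiles):
    """Return the camera id whose on-screen tile contains the gaze point,
    or None if the gaze falls outside every tile.

    `tiles` maps camera id -> (left, top, right, bottom) in display pixels."""
    for camera_id, (left, top, right, bottom) in tiles.items():
        if left <= gaze_x < right and top <= gaze_y < bottom:
            return camera_id
    return None

# Hypothetical layout: one large "high" tile on the left, three small tiles
# stacked on the right, as in the FIG. 2 arrangement described above.
layout = {
    1: (0, 0, 960, 1080),
    2: (960, 0, 1920, 360),
    3: (960, 360, 1920, 720),
    4: (960, 720, 1920, 1080),
}
```

Mapping the image back to an individual camera then reduces to a dictionary lookup on the returned id.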
- the camera resolution module 30 is in communication with the plurality of cameras 12 , and is configured for dynamically instructing each camera 12 which resolution to operate at based on information determined by the eye tracking module 28 . Such information includes which image on the display device 16 the person's eye is focusing on, and which camera 12 sent the image. According to various embodiments, when a person's eye is focused on a particular image on the display device 16 , the camera resolution module 30 instructs the resolution selection module 18 of the appropriate camera 12 to operate the camera 12 at a high resolution. For each camera 12 which is not associated with the particular image, the camera resolution module 30 instructs the appropriate resolution selection modules 18 to operate the corresponding cameras 12 at a low resolution.
- In this manner, the high resolution image sent by the given camera 12 will be displayed at a high resolution on the display device 16, and the respective low resolution images sent by the other cameras 12 will be displayed at a low resolution on the display device 16.
- the format of the instruction to change a camera 12 from one resolution to another resolution may be realized in any suitable manner.
- the camera resolution module 30 may send a simple high or low signal (e.g., a “0” or a “1”) to the resolution selection module 18 of a given camera 12 to initiate a change of the resolution of the camera 12 .
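A minimal sketch of that signaling decision, using hypothetical camera ids: the one camera associated with the viewed image is driven high, and every other camera is driven low.

```python
HIGH, LOW = 1, 0  # the simple one-bit signal described above

def resolution_instructions(focused_camera_id, camera_ids):
    """Map each camera id to the signal its resolution selection module
    should receive: HIGH for the camera being watched, LOW for the rest."""
    return {cid: HIGH if cid == focused_camera_id else LOW for cid in camera_ids}
```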
- the modules 18 , 26 , 28 , 30 may be implemented in either hardware, firmware, software or combinations thereof.
- the software may utilize any suitable computer language (e.g., C, C++, Java, JavaScript, Visual Basic, VBScript, Delphi) and may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, storage medium, or propagated signal capable of delivering instructions to a device.
- When embodied as software (e.g., as a software application or computer program), the respective modules 18 may reside at the corresponding cameras 12.
- The modules 26, 28, 30 may be stored on a computer-readable medium (e.g., a disk, a device, and/or a propagated signal) such that when a computer reads the medium, the functions described herein are performed.
- Each of the modules 26 , 28 , 30 may be in communication with one another, and may reside at the computing device 24 , at other devices within the system 10 , or combinations thereof.
- the modules 26 , 28 , 30 may be distributed across a plurality of computing devices 24 .
- the functionality of the modules 26 , 28 , 30 may be combined into fewer modules (e.g., a single module).
- FIG. 4 illustrates various embodiments of another system 40 .
- the system 40 is similar to the system 10 of FIG. 1 , but is different in the ways described hereinbelow.
- each of the cameras 12 further includes an image resolution module 42 .
- each of the image resolution modules 42 is configured to determine whether a high resolution image captured by the corresponding camera 12 includes a particular object of interest.
- the respective image resolution modules 42 may be configured to determine whether a variety of different objects of interest are included in a given high resolution image.
- a human face may be an object of interest.
- the determination may be realized in any suitable manner.
- For example, facial recognition software (e.g., software functionally similar to Intel OpenCV) residing at the image resolution module 42 may be utilized to determine whether a high resolution image captured by the corresponding camera 12 includes a human face.
- Each of the image resolution modules 42 is also configured to define a location of the object of interest within the high resolution image captured by the corresponding camera 12 when the image resolution module 42 determines that the high resolution image includes an object of interest.
- the location of the object of interest relative to the entire high resolution image captured by the corresponding camera 12 may be defined in any suitable manner.
- the relative location of the object of interest is defined by coordinates (e.g., the four corners of the object of interest, the center point and radius of the object of interest, etc.) associated with the object of interest.
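The two coordinate conventions mentioned (corner-defined versus center-and-radius) can be converted between each other. The helper below is a sketch, assuming the radius form means the smallest circle enclosing the corner-defined box.

```python
import math

def corners_to_center_radius(left, top, right, bottom):
    """Convert a corner-defined region to the smallest enclosing circle,
    returned as ((center_x, center_y), radius)."""
    center = ((left + right) / 2, (top + bottom) / 2)
    radius = math.hypot(right - left, bottom - top) / 2
    return center, radius
```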
- Each of the image resolution modules 42 is further configured to generate two images associated with the high resolution image captured by the corresponding camera 12 when the image resolution module 42 determines that the high resolution image includes an object of interest.
- the first associated image is a high resolution image of the object of interest portion (e.g., the portion defined by the coordinates) of the high resolution image captured by the camera 12 .
- the second associated image is a low resolution image of the high resolution image captured by the camera 12 .
- the location of the object of interest relative to the entire high resolution image captured by the corresponding camera 12 , and the two associated images generated by a given image resolution module 42 are sent to the control system 14 in lieu of the high resolution image captured by the corresponding camera 12 .
- the two associated images and the relative location of the object of interest may be considered to be composite information.
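The composite information can be sketched as follows, modeling a frame as a list of pixel rows. The downscaling by row/column skipping is an illustrative stand-in for whatever real scaler a camera would use.

```python
def make_composite_info(frame, box, factor=2):
    """Produce the composite information described above from one
    high-resolution frame: a full-resolution crop of the object of interest,
    a downscaled copy of the whole frame, and the object's location.

    `frame` is a list of pixel rows; `box` is (left, top, right, bottom)."""
    left, top, right, bottom = box
    object_crop = [row[left:right] for row in frame[top:bottom]]
    low_res = [row[::factor] for row in frame[::factor]]
    return {"object_crop": object_crop, "low_res": low_res, "box": box}
```

Only the small crop stays at full resolution, so the payload sent to the control system 14 is far smaller than the original high resolution image.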
- the system 40 of FIG. 4 is also different from the system 10 of FIG. 1 in that the computing device 24 of the control system 14 of system 40 further includes a composite image module 44 .
- the composite image module 44 is configured to generate a composite image based on the composite information sent from a given camera 12 .
- the composite image module 44 is in communication with the display module 26 , and is configured to send generated composite images to the display module 26 .
- the system 40 of FIG. 4 is also different from the system 10 of FIG. 1 in that the display module 26 is further configured to send a composite image generated by the composite image module 44 to the display device 16 .
- the composite image module 44 may generate a composite image in any suitable manner. For example, according to various embodiments, the composite image module 44 generates the composite image by superimposing the first associated image (i.e., the high resolution image of the object of interest portion of the high resolution image captured by the camera 12 ) on the second associated image (i.e., the low resolution image of the high resolution image captured by the camera 12 ) at the location determined by the image resolution module 42 .
- According to other embodiments, the composite image module 44 generates the composite image by deleting the portion of the second associated image corresponding to the location of the object of interest as determined by the image resolution module 42, and then inserting the first associated image into the remaining portion of the second associated image at the location previously occupied by the deleted portion.
- According to yet other embodiments, the composite image module 44 deletes the same portion of the second associated image, and then positions the remaining portion of the second associated image over the first associated image such that the location previously occupied by the deleted portion is aligned with the first associated image.
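The superimposing variant can be sketched as a nearest-neighbour upscale of the low resolution image followed by pasting the high resolution crop back at its box. The list-of-rows pixel model is an assumption for illustration.

```python
def build_composite(info, factor=2):
    """Upscale the low resolution image by pixel repetition, then superimpose
    the high resolution crop of the object of interest at its original box."""
    left, top, right, bottom = info["box"]
    composite = []
    for row in info["low_res"]:
        wide = [pixel for pixel in row for _ in range(factor)]
        for _ in range(factor):
            composite.append(list(wide))  # repeat each upscaled row
    # Paste the full-resolution crop over the coarse pixels at its location.
    for i, crop_row in enumerate(info["object_crop"]):
        composite[top + i][left:left + len(crop_row)] = crop_row
    return composite
```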
- the modules 42 , 44 may be implemented in either hardware, firmware, software or combinations thereof.
- the software may utilize any suitable computer language (e.g., C, C++, Java, JavaScript, Visual Basic, VBScript, Delphi) and may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, storage medium, or propagated signal capable of delivering instructions to a device.
- When embodied as software (e.g., as a software application or computer program), the respective modules 42 and the module 44 may be stored on a computer-readable medium (e.g., a disk, a device, and/or a propagated signal) such that when a computer reads the medium, the functions described herein are performed.
- the respective modules 42 may reside at the corresponding cameras 12 , and may be in communication with the corresponding resolution selection modules 18 .
- the module 44 may be in communication with the modules 26 , 28 and 30 , and may reside at the computing device 24 , at other devices within the system 40 , or combinations thereof.
- The module 44 may be distributed across a plurality of computing devices 24.
- FIG. 5 illustrates various embodiments of a method 50 for controlling the data flow rate of a video network.
- the method 50 may be implemented by various components of the system 10 of FIG. 1 .
- the method 50 will be described in the context of the system 10 of FIG. 1 .
- each of the cameras 12 may be operating at a low resolution, and sending low resolution images to the control system 14 via the network 20 .
- The control system 14 may be receiving the low resolution images, and sending the low resolution images to the display device 16 via the display module 26.
- the display device 16 may be receiving the low resolution images, and displaying the low resolution images for viewing by a person or other user.
- The process 50 starts at block 52, where the person focuses on a particular low resolution image which is displayed on the display device 16. From block 52, the process advances to block 54, where the eye tracking device 22 detects the eye position of the person viewing the display device 16, and sends an indication of the detected eye position to the eye tracking module 28.
- the process advances to block 56 , where the eye tracking module 28 associates the indication of the detected eye position with a position on the display device 16 , associates the position on the display device 16 with an image on the display device 16 , and associates the image on the display device 16 with an individual camera 12 .
- the process advances to block 58 , where the computing device 24 determines if the person has been focusing on the same image for a predetermined period of time.
- the computing device 24 may determine if the person has been focusing on the same image for a predetermined period of time in any suitable manner.
- For example, the computing device 24 may maintain a timer which resets every time the eye tracking module 28 associates the indication of the detected eye position with a particular image on the display device 16.
- the predetermined period of time may be in the range of approximately 1.5 to 2 seconds. According to other embodiments, the predetermined period of time may be more than 2 seconds or less than 1.5 seconds.
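One way to realize the reset-on-refocus timer is sketched below; the class name and interface are illustrative, not taken from the application.

```python
class DwellTimer:
    """Fires once the viewer has focused on the same image for `threshold`
    seconds; refocusing on a different image restarts the timer."""

    def __init__(self, threshold=1.5):
        self.threshold = threshold
        self.target = None
        self.since = None

    def update(self, target, now):
        """Record the image currently gazed at; return True when the dwell
        threshold has been met for that image."""
        if target != self.target:
            self.target, self.since = target, now  # focus changed: restart
            return False
        return (now - self.since) >= self.threshold
```

In a live system `now` would come from a monotonic clock, and a True result would trigger the change of resolution instruction described at block 60.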
- the process advances to block 60 , where the camera resolution module 30 generates a change of resolution instruction and sends the change of resolution instruction to the associated camera 12 (i.e., the camera 12 associated with the viewed image).
- the change of resolution instruction is an instruction to change the resolution of the associated camera 12 from the low resolution to a high resolution.
- the process advances to block 62 , where the associated camera 12 receives the change of resolution instruction, and the resolution selection module 18 causes the associated camera 12 to switch from the low resolution to the high resolution. From block 62 , the process advances to block 64 , where the associated camera 12 now captures a high resolution image and sends the high resolution image to the control system 14 via the network 20 .
- The process advances to block 66, where the control system 14 receives the high resolution image from the associated camera 12, and sends the high resolution image to the display device 16 via the display module 26.
- the process advances to block 68 , where the display device 16 receives the high resolution image, and displays the high resolution image for viewing by the person or other user.
- the high resolution image may occupy a larger area of the display device than any of the individual low resolution images.
- the process returns to block 52 when the person changes his or her focus from the high resolution image to a different image (e.g., a low resolution image) which is displayed on the display device 16 .
- the process described at blocks 52 - 68 may be repeated any number of times.
- FIG. 6 illustrates various embodiments of another method 80 for controlling the data flow rate of a video network.
- the method 80 may be implemented by various components of the system 40 of FIG. 4 .
- the method will be described in the context of the system 40 of FIG. 4 .
- each of the cameras 12 may be operating at a low resolution, and sending low resolution images to the control system 14 via the network 20 .
- the control system 14 may be receiving the low resolution images, and sending the low resolution images to the display device 16 via the display module 26 .
- the display device 16 may be receiving the low resolution images, and displaying the low resolution images for viewing by a person or other user.
- The process 80 starts at block 82, where the person focuses on a particular low resolution image which is displayed on the display device 16. From block 82, the process advances to block 84, where the eye tracking device 22 detects the eye position of the person viewing the display device 16, and sends an indication of the detected eye position to the eye tracking module 28.
- the process advances to block 86 , where the eye tracking module 28 associates the indication of the detected eye position with a position on the display device 16 , associates the position on the display device 16 with an image on the display device 16 , and associates the image on the display device 16 with an individual camera 12 .
- the process advances to block 88 , where the computing device 24 determines if the person has been focusing on the same image for a predetermined period of time.
- the computing device 24 may determine if the person has been focusing on the same image for a predetermined period of time in any suitable manner.
- For example, the computing device 24 may maintain a timer which resets every time the eye tracking module 28 associates the indication of the detected eye position with a particular image on the display device 16.
- the predetermined period of time may be in the range of approximately 1.5 to 2 seconds. According to other embodiments, the predetermined period of time may be more than 2 seconds or less than 1.5 seconds.
- The process advances to block 90, where the camera resolution module 30 generates a change of resolution instruction and sends the change of resolution instruction to the associated camera 12 (i.e., the camera 12 associated with the viewed image).
- the change of resolution instruction is an instruction to change the resolution of the associated camera 12 from the low resolution to a high resolution.
- the process advances to block 92 , where the associated camera 12 receives the change of resolution instruction, and the resolution selection module 18 causes the associated camera 12 to switch from the low resolution to the high resolution. From block 92 , the process advances to block 94 , where the associated camera 12 now captures a high resolution image.
- the process advances to block 96 , where the image resolution module 42 of the associated camera 12 determines if the high resolution image captured by the associated camera 12 includes a particular object of interest (e.g., a human face). From block 96 , the process advances to either block 98 or to block 108 .
- the process advances from block 96 to block 98 , where the image resolution module 42 defines a location of the object of interest within the high resolution image (i.e., location information), and generates two images associated with the high resolution image.
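Blocks 96 and 98 can be sketched with the detection software abstracted away (the application mentions software functionally similar to Intel Open CV; here `detector` stands in for it as any callable returning a list of (x, y, w, h) boxes, and the function name is hypothetical). The four-corner return value is one of the location formats the application mentions:

```python
def locate_object_of_interest(image, detector):
    """Run the detector on the high resolution image and return the
    location information for the first object of interest as the four
    corner coordinates of its bounding box; None if nothing is found."""
    boxes = detector(image)
    if not boxes:
        return None
    x, y, w, h = boxes[0]
    return ((x, y), (x + w, y), (x, y + h), (x + w, y + h))
```

A None result corresponds to the branch from block 96 to block 108, where the full high resolution image is sent instead of composite information.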
- the first associated image is a high resolution image of the object of interest portion of the high resolution image captured by the associated camera 12 .
- the second associated image is a low resolution image of the high resolution image captured by the associated camera 12 .
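Treating a frame as a 2-D list of pixel values, the two associated images can be sketched as a full-resolution crop of the object-of-interest region and a naive every-other-pixel downsample of the whole frame. The function names and the decimation factor are illustrative assumptions:

```python
def crop_region(frame, x0, y0, x1, y1):
    """First associated image: the object-of-interest portion of the
    high resolution frame, kept at full resolution."""
    return [row[x0:x1] for row in frame[y0:y1]]

def downsample(frame, factor=2):
    """Second associated image: a low resolution version of the whole
    frame, here by keeping every `factor`-th pixel in each direction."""
    return [row[::factor] for row in frame[::factor]]
```

Together with the location information, these two images make up the composite information sent at block 100; the large high resolution frame itself never crosses the network 20.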
- the process advances to block 100 , where the associated camera 12 sends the location information and the two associated images to the control system 14 via the network 20 (collectively the composite information).
- the process advances to block 102 , where the composite image module 44 receives the composite information and generates a composite image based on the received composite information.
- the composite image module 44 may generate a composite image in any suitable manner. For example, according to various embodiments, the composite image module 44 generates the composite image by superimposing the first associated image on the second associated image at the location defined by the image resolution module 42 .
- the composite image module 44 generates the composite image by deleting a portion of the second associated image corresponding to the location of the object of interest as determined by the image resolution module 42 .
- the composite image module 44 then inserts the first associated image onto the remaining portion of the second associated image at the location previously occupied by the deleted portion of the second associated image.
- the composite image module 44 generates the composite image by deleting a portion of the second associated image corresponding to the location of the object of interest as determined by the image resolution module 42 .
- the composite image module 44 then positions the remaining portion of the second associated image over the first associated image such that the location previously occupied by the deleted portion of the second associated image is aligned with the first associated image.
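Of the generation approaches above, the superimposing variant can be sketched in the same 2-D-list terms: the low resolution image is upscaled by pixel replication so the high resolution patch can be pasted back at its original coordinates. This is one possible reading, with all names illustrative:

```python
def upsample(frame, factor=2):
    """Pixel-replication upscale of the low resolution image back to
    the high resolution frame size."""
    return [[px for px in row for _ in range(factor)]
            for row in frame for _ in range(factor)]

def composite(patch, low_res, x0, y0, factor=2):
    """Superimpose the high resolution patch on the upscaled low
    resolution image at the object-of-interest location (x0, y0),
    given in high resolution coordinates."""
    canvas = upsample(low_res, factor)
    for dy, patch_row in enumerate(patch):
        canvas[y0 + dy][x0:x0 + len(patch_row)] = patch_row
    return canvas
```

The result is a single frame that is sharp where the object of interest is and coarse everywhere else, which is what the display device 16 ultimately shows.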
- the process advances to block 104 , where the control system 14 sends the composite image to the display device 16 via the display module 26 .
- the process advances to block 106 , where the display device 16 receives the composite image, and displays the composite image for viewing by the person or other user.
- the composite image may occupy a larger area of the display device 16 than any of the individual low resolution images.
- the process returns to block 82 when the person changes his or her focus from the composite image to a different image (e.g., a low resolution image) which is displayed on the display device 16 .
- the process described at blocks 82 - 106 may be repeated any number of times.
- the execution of the process described in blocks 82 - 106 results in a low resolution image on the display device 16 being replaced with a composite image after the person is focused on the low resolution image for a predetermined period of time.
- only one camera 12 at a time is sending high resolution images (e.g., the object of interest portion of the high resolution image captured by the associated camera 12 ), thereby minimizing the bandwidth needed to effectively operate the system 40 .
- the utilization of the above-described method 80 lowers the needed bandwidth on the order of approximately 88% (from 38 Mbit/s to approximately 4.5 Mbit/s) when the high resolution image captured by the associated camera 12 includes the object of interest.
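The quoted saving follows directly from the totals given in the text; a quick check of the arithmetic:

```python
def bandwidth_reduction(before_mbps, after_mbps):
    """Fractional bandwidth saving when the aggregate rate drops from
    `before_mbps` to `after_mbps` (units as quoted in the text)."""
    return (before_mbps - after_mbps) / before_mbps

# Four cameras all streaming high resolution: ~38 Mbit/s in total.
# With composite images, only the object-of-interest patch is high
# resolution: ~4.5 Mbit/s.
saving = bandwidth_reduction(38, 4.5)  # ~0.88, i.e., approximately 88%
```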
- the process advances from block 96 to block 108 , where the associated camera 12 sends the high resolution image to the control system 14 .
- the process advances to block 110 , where the control system 14 sends the high resolution image to the display device 16 via the display module 26 .
- the process advances to block 112 , where the display device 16 receives the high resolution image, and displays the high resolution image for viewing by the person or other user.
- the high resolution image may occupy a larger area of the display device 16 than any of the individual low resolution images.
- the process returns to block 82 when the person changes his or her focus from the high resolution image to a different image (e.g., a low resolution image) which is displayed on the display device 16 .
- the process described at blocks 82 - 96 and 108 - 112 may be repeated any number of times.
Abstract
A system. The system includes a computing device configured for communication with a plurality of multiple resolution cameras and with a display device. The computing device includes a camera resolution module configured for instructing at least one of the multiple resolution cameras to operate at a first resolution at a first period of time and at a second resolution at a second period of time. The first resolution is different than the second resolution.
Description
- This application claims the benefit of the earlier filing date of U.S. Patent Application No. 60/959,820 filed on Jul. 17, 2007 and U.S. Patent Application No. 60/959,821 filed on Jul. 17, 2007. This application is related to the International Application entitled “Multiple Resolution Video Network With Context Based Control”, filed concurrently herewith.
- This invention was made with United States Government support in the form of Grant No. DAAD19-02-1-0389 from the Army Research Office. The United States Government may have certain rights in the invention.
- This application discloses an invention which is related, generally and in various embodiments, to a multiple resolution video network with eye tracking based control.
- Video networks are becoming more and more commonplace, including those utilizing digital cameras in the security and surveillance fields. For most security and surveillance applications, more optimal results are generally realized when more high resolution cameras are included in the video network. More cameras may be utilized to cover a larger geographic area, increase the number of views associated with a particular area, decrease the number of “blind” spots, etc.
- However, in many current video networks, especially those which include wireless channels, the maximum bandwidth of the video network often operates to limit the number of high resolution cameras which can be effectively included in the video network. The amount of bandwidth generally needed to transmit high resolution images (e.g., 640×480 pixels) from a high resolution camera at a high frame rate (e.g., 30 frames per second) and at a low compression rate percentage (e.g., 10%) is on the order of approximately nine Megabits per second. Thus, for a video network which includes a wireless channel, the video network may not be able to support a single high resolution camera, depending on the desired resolution, frame rate and compression rate percentage. For example, the approximately nine Megabits per second bandwidth needed in the above example far exceeds the capacity of current Bluetooth technology, which is only on the order of approximately three Megabits per second.
- The capacity problem is not limited to video networks which include wireless channels. For a video network which includes twelve high resolution cameras, the required bandwidth generally needed to concurrently transmit high resolution images (e.g., 640×480 pixels) from the twelve high resolution cameras at a high frame rate (e.g., 30 frames per second) and at a low compression rate percentage (e.g., 10%) is on the order of approximately one-hundred and eight Megabits per second, which exceeds the capacity of traditional Ethernet cable, which is only on the order of approximately one-hundred Megabits per second. Thus, depending on the desired resolution, frame rate and compression percentage, video networks utilizing traditional Ethernet cable are often limited to including fewer than twelve high resolution cameras in the video network.
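The capacity comparisons above are simple multiplication; a sketch using the per-camera figure of roughly nine Megabits per second as given in the text:

```python
def aggregate_mbps(per_camera_mbps, camera_count):
    """Total bandwidth when every camera streams high resolution
    concurrently."""
    return per_camera_mbps * camera_count

# Twelve cameras at ~9 Mbit/s each exceed ~100 Mbit/s Ethernet capacity.
total = aggregate_mbps(9, 12)  # 108 Mbit/s
```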
- Additionally, for video networks which include a plurality of high resolution cameras, it is generally not practical for a person or persons to intently view each and every one of the high resolution images transmitted by the cameras. In general, each person is typically limited to intently viewing the images from only one camera at a time. Thus, the scheme of sending all images at a high resolution, a high frame rate, and a low compression percentage rate tends to be an inefficient use of valuable network bandwidth.
- In one general respect, this application discloses a system. According to various embodiments, the system includes a computing device configured for communication with a plurality of multiple resolution cameras and with a display device. The computing device includes a camera resolution module configured for instructing at least one of the multiple resolution cameras to operate at a first resolution at a first period of time and at a second resolution at a second period of time. The first resolution is different than the second resolution.
- In another general respect, this application discloses a method. The method is implemented at least in part by a computing device. According to various embodiments, the method includes receiving a first image from a multiple resolution camera at a first resolution, generating a change of resolution instruction, sending the change of resolution instruction to the multiple resolution camera, and receiving a second image from the multiple resolution camera at a second resolution. The second resolution is different than the first resolution.
- Aspects of the invention may be implemented by a computing device and/or a computer program stored on a computer-readable medium. The computer-readable medium may comprise a disk, a device, and/or a propagated signal.
- Various embodiments of the invention are described herein by way of example in conjunction with the following figures, wherein like reference characters designate the same or similar elements.
- FIG. 1 illustrates various embodiments of a system;
- FIG. 2 illustrates various embodiments of a display device of the system of FIG. 1;
- FIG. 3 illustrates various embodiments of a control system of the system of FIG. 1;
- FIG. 4 illustrates various embodiments of another system;
- FIG. 5 illustrates various embodiments of a method for controlling the data flow rate of a video network; and
- FIG. 6 illustrates various embodiments of another method for controlling the data flow rate of a video network.
- It is to be understood that at least some of the figures and descriptions of the invention have been simplified to illustrate elements that are relevant for a clear understanding of the invention, while eliminating, for purposes of clarity, other elements that those of ordinary skill in the art will appreciate may also comprise a portion of the invention. However, because such elements are well known in the art, and because they do not facilitate a better understanding of the invention, a description of such elements is not provided herein.
- FIG. 1 illustrates various embodiments of a system 10. The system 10 includes a plurality of multiple resolution video cameras 12, a control system 14 in communication with the cameras 12, and a display device 16 in communication with the control system 14. For purposes of simplicity, only four cameras 12 are shown in FIG. 1. However, it will be appreciated that the system 10 may include any number of cameras 12.
- Each multiple resolution video camera 12 is configured for operation at more than one resolution, and includes a resolution selection module 18 which is configured to switch the camera 12 to a given resolution. The cameras 12 may be embodied as any suitable multiple resolution cameras. For example, the cameras 12 may be embodied as cameras similar to network cameras manufactured by Axis Communications AB of Lund, Sweden. As will be explained in more detail hereinbelow, the resolution of each camera 12 may be dynamically controlled to switch from one resolution to another. Thus, a given camera 12 may operate at a first resolution at a first time period, and at a second resolution at a second time period. For example, the given camera 12 may operate at a resolution of 640×480 pixels at a first time period, and at a resolution of 320×240 pixels at a second time period. Although the above example describes the operation of the given camera 12 in the context of a "high" resolution (640×480) and a "low" resolution (320×240), it will be appreciated that the multiple resolution video cameras 12 may be configured for operation at resolutions other than 640×480 and 320×240. In addition, it will be further appreciated that according to various embodiments, the multiple resolution cameras 12 may be configured for operation at more than two different resolutions (e.g., at high, medium, and low resolutions).
- The cameras 12 are configured to capture images (i.e., frames) at either the first resolution or at the second resolution, and to send the captured images to the control system 14. The cameras 12 are also configured to send the captured images to the control system 14 at any suitable frame rate. For example, according to various embodiments, the cameras 12 may operate to send the captured images to the control system 14 at a frame rate of thirty frames per second. As used herein, the phrase image can mean a single image (i.e., a single frame) or a plurality of images (i.e., a plurality of frames). According to other embodiments, as described in more detail hereinbelow with respect to FIG. 4, the cameras 12 may also be configured to generate images associated with the captured images, and to send the associated images to the control system 14.
- As shown in FIG. 1, according to various embodiments, the cameras 12 are in communication with the control system 14 via a network 20. In general, the cameras 12 and the control system 14 each include hardware and/or software components for communicating with the network 20 and with each other. The cameras 12 and the control system 14 may be structured and arranged to communicate through the network 20 via wired and/or wireless pathways using various communication protocols (e.g., HTTP, TCP/IP, UDP, WAP, WiFi, Bluetooth) and/or to operate within or in concert with one or more other communications systems.
- The network 20 may include any type of delivery system including, but not limited to, a local area network (e.g., Ethernet), a wide area network (e.g., the Internet and/or World Wide Web), a telephone network (e.g., analog, digital, wired, wireless, PSTN, ISDN, GSM, GPRS, and/or xDSL), a packet-switched network, a radio network, a television network, a cable network, a satellite network, and/or any other wired or wireless communications network configured to carry data. The network 20 may include elements, such as, for example, intermediate nodes, proxy servers, routers, switches, and adapters configured to direct and/or deliver data.
- The display device 16 may be embodied as any suitable display device. In general, the display device 16 is configured to display the images sent by the respective cameras 12, and the images may be displayed by the display device 16 in any suitable arrangement. As the respective cameras 12 may operate at more than one resolution, the display device 16 may display a number of images at one resolution (e.g., a low resolution), and at least one other image at a different resolution (e.g., a high resolution). For example, as shown in FIG. 2, a "high" resolution image may be displayed on the "left" side of the display device 16 and "low" resolution images may be displayed on the "right" side of the display device 16. According to other embodiments, a "high" resolution image may be displayed proximate a center of the display device 16 and "low" resolution images may be displayed around or proximate the "high" resolution image. Thus, it is understood that the display device 16 may display the images in many other arrangements.
- As described in more detail hereinbelow with respect to FIG. 4, according to other embodiments, the display device 16 may also be configured to display a composite image which includes a high resolution portion and a low resolution portion.
- For purposes of simplicity, only one display device 16 is shown in FIG. 1. However, it is understood that the system 10 may include any number of display devices 16. For example, according to various embodiments, the system 10 may include two display devices 16: one for displaying the "high" resolution image and the other for displaying the "low" resolution images.
-
FIG. 3 illustrates various embodiments of the control system 14 of FIG. 1. The control system 14 includes an eye tracking device 22, and a computing device 24 which is in communication with the plurality of cameras 12, the display device 16, and the eye tracking device 22. For purposes of simplicity, only one eye tracking device 22 and one computing device 24 are shown in FIG. 3. However, it is understood that the control system 14 may include any number of eye tracking devices 22 and any number of computing devices 24. For example, according to various embodiments, the system 10 may include one eye tracking device 22 for each display device 16.
- The eye tracking device 22 may be embodied as any suitable eye tracking device. For example, according to various embodiments, the eye tracking device 22 may be embodied as or similar to the EyeTech TM2 model manufactured by EyeTech Digital Systems, Inc. of Mesa, Arizona. For such embodiments, a first infrared light 22 a of the eye tracking device 22 is positioned proximate a first edge (e.g., a "left" edge) of the display device 16 and a second infrared light 22 b of the eye tracking device 22 is positioned proximate a second edge (e.g., a "right" edge) of the display device 16. (See FIG. 2). In general, the first and second infrared lights 22 a and 22 b are utilized by the eye tracking device 22 to track the eyes of a person viewing the display device 16. - The
computing device 24 includes a display module 26, an eye tracking module 28, and a camera resolution module 30. The display module 26 is in communication with the display device 16, and is configured for delivering images sent from the cameras 12 to the display device 16. As each individual camera 12 is configured for operation at more than one resolution, it is understood that the display module 26 may deliver images of different resolutions to the display device 16 at a given time. For example, in a system 10 with four cameras 12, the display module 26 may deliver the images sent from one of the four cameras 12 to the display device 16 at a first resolution (e.g., at a "high" resolution) and the respective images sent from the other three cameras 12 to the display device 16 at a second resolution (e.g., at a "low" resolution).
- The eye tracking module 28 is in communication with the eye tracking device 22, and is configured for associating an individual camera 12 with a position of the eye (or eyes) of a person who is viewing the display device 16. In general, when the person focuses on one of the respective images on the display device 16, the person's eye will be more focused on that image than on the other images. According to various embodiments, the eye tracking module 28 associates the position of the person's eye with a position on the display device 16, associates the position on the display device 16 with an image on the display device 16, and associates the image on the display device 16 with an individual camera 12.
- The camera resolution module 30 is in communication with the plurality of cameras 12, and is configured for dynamically instructing each camera 12 which resolution to operate at based on information determined by the eye tracking module 28. Such information includes which image on the display device 16 the person's eye is focusing on, and which camera 12 sent the image. According to various embodiments, when a person's eye is focused on a particular image on the display device 16, the camera resolution module 30 instructs the resolution selection module 18 of the appropriate camera 12 to operate the camera 12 at a high resolution. For each camera 12 which is not associated with the particular image, the camera resolution module 30 instructs the appropriate resolution selection modules 18 to operate the corresponding cameras 12 at a low resolution. For such embodiments, the high resolution image sent by the given camera 12 will be displayed at a high resolution on the display device 16, and the respective low resolution images sent by the other cameras 12 will be displayed at a low resolution on the display device 16. The format of the instruction to change a camera 12 from one resolution to another resolution may be realized in any suitable manner. For example, according to various embodiments, the camera resolution module 30 may send a simple high or low signal (e.g., a "0" or a "1") to the resolution selection module 18 of a given camera 12 to initiate a change of the resolution of the camera 12. - The
modules 18 (e.g., software applications, computer programs) may be stored on computer-readable mediums at the corresponding cameras 12 such that when the mediums are read, the functions described herein are performed. Similarly, the modules 26, 28, and 30 (e.g., software applications, computer programs) may be stored on another computer-readable medium (e.g., disk, device, and/or propagated signal) such that when a computer reads the medium, the functions described herein are performed.
- According to various embodiments, the respective modules 18 may reside at the corresponding cameras 12. Each of the modules 26, 28, and 30 may reside at the computing device 24, at other devices within the system 10, or combinations thereof. For embodiments where the system 10 includes more than one computing device 24, the modules 26, 28, and 30 may be distributed across a plurality of computing devices 24. According to various embodiments, the functionality of the modules 26, 28, and 30 may be combined into fewer modules or distributed among additional modules.
-
FIG. 4 illustrates various embodiments of another system 40. The system 40 is similar to the system 10 of FIG. 1, but is different in the ways described hereinbelow. In the system 40 of FIG. 4, each of the cameras 12 further includes an image resolution module 42. According to various embodiments, each of the image resolution modules 42 is configured to determine whether a high resolution image captured by the corresponding camera 12 includes a particular object of interest. The respective image resolution modules 42 may be configured to determine whether a variety of different objects of interest are included in a given high resolution image. For example, according to various embodiments, a human face may be an object of interest. The determination may be realized in any suitable manner. For example, according to various embodiments, facial recognition software (e.g., software functionally similar to Intel Open CV) residing at the image resolution module 42 may be utilized to determine whether a high resolution image captured by the corresponding camera 12 includes a human face.
- Each of the image resolution modules 42 is also configured to define a location of the object of interest within the high resolution image captured by the corresponding camera 12 when the image resolution module 42 determines that the high resolution image includes an object of interest. The location of the object of interest relative to the entire high resolution image captured by the corresponding camera 12 may be defined in any suitable manner. For example, according to various embodiments, the relative location of the object of interest is defined by coordinates (e.g., the four corners of the object of interest, the center point and radius of the object of interest, etc.) associated with the object of interest.
- Each of the image resolution modules 42 is further configured to generate two images associated with the high resolution image captured by the corresponding camera 12 when the image resolution module 42 determines that the high resolution image includes an object of interest. The first associated image is a high resolution image of the object of interest portion (e.g., the portion defined by the coordinates) of the high resolution image captured by the camera 12. The second associated image is a low resolution image of the high resolution image captured by the camera 12. According to various embodiments, the location of the object of interest relative to the entire high resolution image captured by the corresponding camera 12, and the two associated images generated by a given image resolution module 42, are sent to the control system 14 in lieu of the high resolution image captured by the corresponding camera 12. Collectively, the two associated images and the relative location of the object of interest may be considered to be composite information. - The
system 40 of FIG. 4 is also different from the system 10 of FIG. 1 in that the computing device 24 of the control system 14 of system 40 further includes a composite image module 44. According to various embodiments, the composite image module 44 is configured to generate a composite image based on the composite information sent from a given camera 12. The composite image module 44 is in communication with the display module 26, and is configured to send generated composite images to the display module 26. The system 40 of FIG. 4 is also different from the system 10 of FIG. 1 in that the display module 26 is further configured to send a composite image generated by the composite image module 44 to the display device 16.
- The composite image module 44 may generate a composite image in any suitable manner. For example, according to various embodiments, the composite image module 44 generates the composite image by superimposing the first associated image (i.e., the high resolution image of the object of interest portion of the high resolution image captured by the camera 12) on the second associated image (i.e., the low resolution image of the high resolution image captured by the camera 12) at the location determined by the image resolution module 42.
- According to other embodiments, the composite image module 44 generates the composite image by deleting a portion of the second associated image corresponding to the location of the object of interest as determined by the image resolution module 42. The composite image module 44 then inserts the first associated image onto the remaining portion of the second associated image at the location previously occupied by the deleted portion of the second associated image.
- According to yet other embodiments, the composite image module 44 generates the composite image by deleting a portion of the second associated image corresponding to the location of the object of interest as determined by the image resolution module 42. The composite image module 44 then positions the remaining portion of the second associated image over the first associated image such that the location previously occupied by the deleted portion of the second associated image is aligned with the first associated image. - The
modules 18 and 42 (e.g., software applications, computer programs) may be stored on computer-readable mediums at the corresponding cameras 12 such that when the mediums are read, the functions described herein are performed. Similarly, the module 44 (e.g., software application, computer program) may be stored on another computer-readable medium (e.g., disk, device, and/or propagated signal) such that when a computer reads the medium, the functions described herein are performed.
- According to various embodiments, the respective modules 42 may reside at the corresponding cameras 12, and may be in communication with the corresponding resolution selection modules 18. The module 44 may be in communication with the modules 26, 28, 30, and 42, and may reside at the computing device 24, at other devices within the system 40, or combinations thereof. For embodiments where the system 40 includes more than one computing device 24, the module 44 may be distributed across a plurality of computing devices 24.
-
FIG. 5 illustrates various embodiments of amethod 50 for controlling the data flow rate of a video network. Themethod 50 may be implemented by various components of thesystem 10 ofFIG. 1 . For purposes of simplicity, themethod 50 will be described in the context of thesystem 10 ofFIG. 1 . - Prior to the start of the
process 50, each of thecameras 12 may be operating at a low resolution, and sending low resolution images to thecontrol system 14 via thenetwork 20. Thecontrol system 14 may he receiving the low resolution images, and sending the low resolution images to thedisplay device 16 via thedisplay module 26. Thedisplay device 16 may be receiving the low resolution images, and displaying the low resolution images for viewing by a person or other user. - The
process 50 starts atblock 52, where the person focuses on a particular low resolution image which is displayed on thedisplay device 16. Fromblock 52, the process advances to block 54, where theeye tracking device 22 detects the position of a person's eyes who is viewing thedisplay device 16, and sends an indication of the detected eye position to theeye tracking module 28. - From
block 54, the process advances to block 56, where theeye tracking module 28 associates the indication of the detected eye position with a position on thedisplay device 16, associates the position on thedisplay device 16 with an image on thedisplay device 16, and associates the image on thedisplay device 16 with anindividual camera 12. - From
block 56, the process advances to block 58, where thecomputing device 24 determines if the person has been focusing on the same image for a predetermined period of time. Thecomputing device 24 may determine if the person has been focusing on the same image for a predetermined period of time in any suitable manner. For example, according to various embodiments, thecomputing device 24 may maintain a timer which resets every time theeye tracking module 28 associates the indication of the detected eye position with a particular image on thedisplay screen 16. According to various embodiments, the predetermined period of time may be in the range of approximately 1.5 to 2 seconds. According to other embodiments, the predetermined period of time may be more than 2 seconds or less than 1.5 seconds. - At
block 58, if thecomputing device 24 determines that the person has been focusing on the same image for at least the predetermined period of time, the process advances to block 60, where thecamera resolution module 30 generates a change of resolution instruction and sends the change of resolution instruction to the associated camera 12 (i.e., thecamera 12 associated with the viewed image). The change of resolution instruction is an instruction to change the resolution of the associatedcamera 12 from the low resolution to a high resolution. - From
block 60, the process advances to block 62, where the associatedcamera 12 receives the change of resolution instruction, and theresolution selection module 18 causes the associatedcamera 12 to switch from the low resolution to the high resolution. Fromblock 62, the process advances to block 64, where the associatedcamera 12 now captures a high resolution image and sends the high resolution image to thecontrol system 14 via thenetwork 20. - From
block 64, the process advances to block 66, where thecontrol system 14 receives the high resolution image from the associatedcamera 12, and sends the high resolution image to thedisplay device 16 via the display modulo 26. Fromblock 66, the process advances to block 68, where thedisplay device 16 receives the high resolution image, and displays the high resolution image for viewing by the person or other user. As described hereinabove, the high resolution image may occupy a larger area of the display device than any of the individual low resolution images. - From
block 68, the process returns to block 52 when the person changes his or her focus from the high resolution image to a different image (e.g., a low resolution image) which is displayed on thedisplay device 16. The process described at blocks 52-68 may be repeated any number of times. - The execution of the process described in blocks 52-68 results a low resolution image on the
display device 16 being replaced with a high resolution image after the person is focused on the low resolution image for a predetermined period of time. Thus, only one camera 12 at a time is sending high resolution images, thereby minimizing the bandwidth needed to effectively operate the system 10. For the four camera 12 example of the system 10 of FIG. 1, the utilization of the above-described method 50 lowers the needed bandwidth on the order of approximately 75% (from 38 Mbit/s to approximately 9.5 Mbit/s). -
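The blocks 52-68 loop (gaze dwell on one image, followed by a change of resolution instruction to the associated camera 12) can be sketched in Python. This is only an illustration of the described control flow; the class and method names (`DwellSwitcher`, `on_gaze_sample`, `set_resolution`) and the `Camera` stand-in are assumptions of this sketch, not part of the disclosed system.

```python
import time


class Camera:
    """Minimal stand-in for a multiple resolution camera 12."""

    def __init__(self):
        self.resolution = "low"

    def set_resolution(self, resolution):
        self.resolution = resolution


class DwellSwitcher:
    """Sketch of blocks 52-68: track which image the viewer's gaze
    falls on and, after a sustained dwell, switch that image's camera
    to high resolution while every other camera stays at low."""

    def __init__(self, cameras, dwell_seconds=1.75, clock=time.monotonic):
        # dwell_seconds models the "approximately 1.5 to 2 seconds"
        # predetermined period of time described above.
        self.cameras = cameras            # maps image id -> Camera
        self.dwell_seconds = dwell_seconds
        self.clock = clock
        self._focused_image = None
        self._focus_start = None
        self._high_res_camera = None

    def on_gaze_sample(self, image_id):
        """Called each time the eye tracking module associates the
        detected eye position with an image on the display."""
        now = self.clock()
        if image_id != self._focused_image:
            # Focus moved to a different image: the dwell timer
            # "resets every time" the associated image changes.
            self._focused_image = image_id
            self._focus_start = now
            return
        camera = self.cameras[image_id]
        if (camera is not self._high_res_camera
                and now - self._focus_start >= self.dwell_seconds):
            self._switch_to(camera)

    def _switch_to(self, camera):
        # Only one camera 12 at a time sends high resolution images,
        # which is what keeps the aggregate bandwidth low.
        if self._high_res_camera is not None:
            self._high_res_camera.set_resolution("low")
        camera.set_resolution("high")
        self._high_res_camera = camera
```

A controllable clock can be passed in for testing, so the dwell threshold can be exercised without real delays.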
FIG. 6 illustrates various embodiments of another method 80 for controlling the data flow rate of a video network. The method 80 may be implemented by various components of the system 40 of FIG. 4. For purposes of simplicity, the method will be described in the context of the system 40 of FIG. 4. - Prior to the start of the
process 80, each of the cameras 12 may be operating at a low resolution, and sending low resolution images to the control system 14 via the network 20. The control system 14 may be receiving the low resolution images, and sending the low resolution images to the display device 16 via the display module 26. The display device 16 may be receiving the low resolution images, and displaying the low resolution images for viewing by a person or other user. - The
process 80 starts at block 82, where the person focuses on a particular low resolution image which is displayed on the display device 16. From block 82, the process advances to block 84, where the eye tracking device 22 detects the eye position of a person who is viewing the display device 16, and sends an indication of the detected eye position to the eye tracking module 28. - From
block 84, the process advances to block 86, where the eye tracking module 28 associates the indication of the detected eye position with a position on the display device 16, associates the position on the display device 16 with an image on the display device 16, and associates the image on the display device 16 with an individual camera 12. - From
block 86, the process advances to block 88, where the computing device 24 determines if the person has been focusing on the same image for a predetermined period of time. The computing device 24 may determine if the person has been focusing on the same image for a predetermined period of time in any suitable manner. For example, according to various embodiments, the computing device 24 may maintain a timer which resets every time the eye tracking module 28 associates the indication of the detected eye position with a particular image on the display screen 16. According to various embodiments, the predetermined period of time may be in the range of approximately 1.5 to 2 seconds. According to other embodiments, the predetermined period of time may be more than 2 seconds or less than 1.5 seconds. - At
block 88, if the computing device 24 determines that the person has been focusing on the same image for at least the predetermined period of time, the process advances to block 90, where the camera resolution module 30 generates a change of resolution instruction and sends the change of resolution instruction to the associated camera 12 (i.e., the camera 12 associated with the viewed image). The change of resolution instruction is an instruction to change the resolution of the associated camera 12 from the low resolution to a high resolution. - From
block 90, the process advances to block 92, where the associated camera 12 receives the change of resolution instruction, and the resolution selection module 18 causes the associated camera 12 to switch from the low resolution to the high resolution. From block 92, the process advances to block 94, where the associated camera 12 now captures a high resolution image. - From
block 94, the process advances to block 96, where the image resolution module 42 of the associated camera 12 determines if the high resolution image captured by the associated camera 12 includes a particular object of interest (e.g., a human face). From block 96, the process advances to either block 98 or to block 108. - At
block 96, if the image resolution module 42 determines that the high resolution image includes the particular object of interest, the process advances from block 96 to block 98, where the image resolution module 42 defines a location of the object of interest within the high resolution image (i.e., location information), and generates two images associated with the high resolution image. The first associated image is a high resolution image of the object of interest portion of the high resolution image captured by the associated camera 12. The second associated image is a low resolution image of the high resolution image captured by the associated camera 12. From block 98, the process advances to block 100, where the associated camera 12 sends the location information and the two associated images to the control system 14 via the network 20 (collectively the composite information). - From
block 100, the process advances to block 102, where the composite image module 44 receives the composite information and generates a composite image based on the received composite information. The composite image module 44 may generate a composite image in any suitable manner. For example, according to various embodiments, the composite image module 44 generates the composite image by superimposing the first associated image on the second associated image at the location defined by the image resolution module 42. - According to other embodiments, the
composite image module 44 generates the composite image by deleting a portion of the second associated image corresponding to the location of the object of interest as determined by the image resolution module 42. The composite image module 44 then inserts the first associated image onto the remaining portion of the second associated image at the location previously occupied by the deleted portion of the second associated image. - According to yet other embodiments, the
composite image module 44 generates the composite image by deleting a portion of the second associated image corresponding to the location of the object of interest as determined by the image resolution module 42. The composite image module 44 then positions the remaining portion of the second associated image over the first associated image such that the location previously occupied by the deleted portion of the second associated image is aligned with the first associated image. - From
block 102, the process advances to block 104, where the control system 14 sends the composite image to the display device 16 via the display module 26. From block 104, the process advances to block 106, where the display device 16 receives the composite image, and displays the composite image for viewing by the person or other user. The composite image may occupy a larger area of the display device 16 than any of the individual low resolution images. - From block 106, the process returns to block 82 when the person changes his or her focus from the composite image to a different image (e.g., a low resolution image) which is displayed on the
display device 16. The process described at blocks 82-106 may be repeated any number of times. - The execution of the process described in blocks 82-106 results in a low resolution image on the
display device 16 being replaced with a composite image after the person is focused on the low resolution image for a predetermined period of time. Thus, only one camera 12 at a time is sending high resolution images (e.g., the object of interest portion of the high resolution image captured by the associated camera 12), thereby minimizing the bandwidth needed to effectively operate the system 40. For the four camera 12 example of the system 40 of FIG. 4, the utilization of the above-described method 80 lowers the needed bandwidth on the order of approximately 88% (from 38 Mbit/s to approximately 4.5 Mbit/s) when the high resolution image captured by the associated camera 12 includes the object of interest. - At
block 96, if the image resolution module 42 determines that the high resolution image does not include the particular object of interest, the process advances from block 96 to block 108, where the associated camera 12 sends the high resolution image to the control system 14. From block 108, the process advances to block 110, where the control system 14 sends the high resolution image to the display device 16 via the display module 26. From block 110, the process advances to block 112, where the display device 16 receives the high resolution image, and displays the high resolution image for viewing by the person or other user. As described hereinabove, the high resolution image may occupy a larger area of the display device 16 than any of the individual low resolution images. - From
block 112, the process returns to block 82 when the person changes his or her focus from the high resolution image to a different image (e.g., a low resolution image) which is displayed on the display device 16. The process described at blocks 82-96 and 108-112 may be repeated any number of times. - The execution of the process described in blocks 82-96 and 108-112 results in a low resolution image on the
display device 16 being replaced with a high resolution image after the person is focused on the low resolution image for a predetermined period of time. Thus, only one camera 12 at a time is sending high resolution images, thereby minimizing the bandwidth needed to effectively operate the system 40. For the four camera 12 example of the system 40 of FIG. 4, the utilization of the above-described method 80 lowers the needed bandwidth on the order of approximately 75% (from 38 Mbit/s to approximately 9.5 Mbit/s) when the high resolution image captured by the associated camera 12 does not include the object of interest. - Nothing in the above description is meant to limit the invention to any specific materials, geometry, or orientation of elements. Many part/orientation substitutions are contemplated within the scope of the invention and will be apparent to those skilled in the art. The embodiments described herein were presented by way of example only and should not be used to limit the scope of the invention. -
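The approximate bandwidth figures quoted for the four camera 12 examples can be checked with simple arithmetic. The per-stream rates below are illustrative assumptions chosen to be consistent with the numbers in the text: a full high resolution stream near 9.5 Mbit/s, an object-of-interest crop near 4.5 Mbit/s, and low resolution streams treated as negligible by comparison.

```python
# Back-of-the-envelope check of the quoted bandwidth savings.
# Per-stream rates are illustrative assumptions, not measured values.
NUM_CAMERAS = 4
HIGH_RES_MBIT_S = 9.5       # one full high resolution stream
ROI_HIGH_RES_MBIT_S = 4.5   # high resolution object-of-interest crop
LOW_RES_MBIT_S = 0.0        # low resolution streams treated as negligible

# All four cameras sending high resolution at once.
all_high = NUM_CAMERAS * HIGH_RES_MBIT_S                           # 38 Mbit/s

# Method 50: one full high resolution stream, the rest low resolution.
method_50 = HIGH_RES_MBIT_S + (NUM_CAMERAS - 1) * LOW_RES_MBIT_S   # ~9.5

# Method 80 (object of interest found): only the crop is high resolution.
method_80 = ROI_HIGH_RES_MBIT_S + (NUM_CAMERAS - 1) * LOW_RES_MBIT_S  # ~4.5

saving_50 = 1 - method_50 / all_high   # ~0.75, i.e. the quoted ~75%
saving_80 = 1 - method_80 / all_high   # ~0.88, i.e. the quoted ~88%

print(f"all high: {all_high} Mbit/s")
print(f"method 50: {method_50} Mbit/s ({saving_50:.0%} saved)")
print(f"method 80: {method_80} Mbit/s ({saving_80:.0%} saved)")
```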
- Although the invention has been described in terms of particular embodiments in this application, one of ordinary skill in the art, in light of the teachings herein, can generate additional embodiments and modifications without departing from the spirit of, or exceeding the scope of, the claimed invention. Accordingly, it is understood that the drawings and the descriptions herein are proffered only to facilitate comprehension of the invention and should not be construed to limit the scope thereof.
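As a closing illustration, the composite image generation described at blocks 98-102 (superimposing the first associated image on the second associated image at the location defined by the image resolution module 42) can be sketched in plain Python. The function name, the nearest-neighbour upscaling step, and the list-of-lists image representation are assumptions of this sketch, not details of the disclosed composite image module 44.

```python
def make_composite(low_res, hi_crop, top_left, full_size):
    """Superimpose a high resolution object-of-interest crop onto a
    low resolution full-view frame, in the spirit of the first
    composite-image embodiment described above.

    low_res   -- small full-view image (list of rows of pixel values)
    hi_crop   -- high resolution crop of the object of interest
    top_left  -- (row, col) of the crop in full resolution coordinates
    full_size -- (height, width) of the full resolution frame
    """
    height, width = full_size
    h_small, w_small = len(low_res), len(low_res[0])
    # Upscale the low resolution background to the full canvas with
    # nearest-neighbour sampling (an illustrative resampling choice).
    canvas = [[low_res[r * h_small // height][c * w_small // width]
               for c in range(width)] for r in range(height)]
    # Paste the high resolution crop at the reported location, so only
    # the object of interest region carries full detail.
    r0, c0 = top_left
    for dr, row in enumerate(hi_crop):
        for dc, pixel in enumerate(row):
            canvas[r0 + dr][c0 + dc] = pixel
    return canvas


# Example: a 4x4 low resolution frame, with a 2x2 high resolution crop
# pasted at (2, 2) of the 8x8 full resolution composite.
composite = make_composite(
    [[0] * 4 for _ in range(4)],   # dark low resolution frame
    [[255, 255], [255, 255]],      # bright object-of-interest crop
    (2, 2), (8, 8))
```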
Claims (19)
1. A system, comprising:
a computing device configured for communication with a plurality of multiple resolution cameras and with a display device, wherein the computing device comprises:
a camera resolution module configured for instructing at least one of the multiple resolution cameras to operate at:
a first resolution at a first period of time; and
a second resolution at a second period of time, wherein the first resolution is different than the second resolution.
2. The system of claim 1 , wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first position of a person's eye with a first location on the display device.
3. The system of claim 1 , wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first position of a person's eye with a first image on the display device.
4. The system of claim 1 , wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first position of a person's eye with a first one of the multiple resolution cameras.
5. The system of claim 1 , wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first location on the display device with a first image on the display device.
6. The system of claim 1 , wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first location on the display device with a first one of the multiple resolution cameras.
7. The system of claim 1 , wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first image on the display device with a first one of the multiple resolution cameras.
8. The system of claim 1 , wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the camera resolution module is further configured for instructing the at least one of the multiple resolution cameras based on information received from the eye tracking module.
9. The system of claim 1 , wherein the computing device further comprises a display module configured for sending a plurality of images to the display device, wherein the plurality of images comprise:
a first image at the first resolution; and
a second image at the second resolution.
10. A method, implemented at least in part by a computing device, the method comprising:
receiving a first image from a multiple resolution camera at a first resolution;
generating a change of resolution instruction;
sending the change of resolution instruction to the multiple resolution camera; and
receiving a second image from the multiple resolution camera at a second resolution, wherein the second resolution is different than the first resolution.
11. The method of claim 10 , wherein generating the change of resolution instruction comprises generating the change of resolution instruction based on a position of a person's eye.
12. The method of claim 10 , wherein generating the change of resolution instruction comprises generating the change of resolution instruction after a position of a person's eye has remained fixed for a predetermined period of time.
13. The method of claim 10 , further comprising associating a position of a person's eyes with a location on a display device.
14. The method of claim 10 , further comprising associating a position of a person's eyes with an image on a display device.
15. The method of claim 10 , further comprising associating a position of a person's eyes with a multiple resolution camera.
16. The method of claim 10 , further comprising associating a location on a display device with an image on the display device.
17. The method of claim 10 , further comprising associating a location on a display device with a multiple resolution camera.
18. The method of claim 10 , further comprising associating an image on a display device with a multiple resolution camera.
19. The method of claim 10 , further comprising:
generating a second change of resolution instruction;
sending the second change of resolution instruction to the multiple resolution camera; and
receiving a third image from the multiple resolution camera at the first resolution.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/669,685 US20100283843A1 (en) | 2007-07-17 | 2008-07-17 | Multiple resolution video network with eye tracking based control |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US95982107P | 2007-07-17 | 2007-07-17 | |
US95982007P | 2007-07-17 | 2007-07-17 | |
US12/669,685 US20100283843A1 (en) | 2007-07-17 | 2008-07-17 | Multiple resolution video network with eye tracking based control |
PCT/US2008/070375 WO2009012413A1 (en) | 2007-07-17 | 2008-07-17 | Multiple resolution video network with eye tracking based control |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100283843A1 true US20100283843A1 (en) | 2010-11-11 |
Family
ID=40260083
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/669,680 Active 2030-04-09 US9467647B2 (en) | 2007-07-17 | 2008-07-17 | Multiple resolution video network with context based control |
US12/669,685 Abandoned US20100283843A1 (en) | 2007-07-17 | 2008-07-17 | Multiple resolution video network with eye tracking based control |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/669,680 Active 2030-04-09 US9467647B2 (en) | 2007-07-17 | 2008-07-17 | Multiple resolution video network with context based control |
Country Status (2)
Country | Link |
---|---|
US (2) | US9467647B2 (en) |
WO (2) | WO2009012413A1 (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101778259A (en) * | 2009-12-25 | 2010-07-14 | 深圳中兴力维技术有限公司 | Network video monitoring system and method for improving video display efficiency thereof |
AU2011336647B2 (en) | 2010-12-02 | 2015-02-12 | Ultradent Products, Inc. | System and method of viewing and tracking stereoscopic video images |
US9843802B1 (en) | 2012-03-30 | 2017-12-12 | EMC IP Holding Company LLC | Method and system for dynamic compression module selection |
US9571698B1 (en) * | 2012-03-30 | 2017-02-14 | EMC IP Holding Company LLC | Method and system for dynamic compression module selection |
US9843702B1 (en) | 2012-03-30 | 2017-12-12 | EMC IP Holding Company LLC | Method and system for dynamic compression module selection |
WO2013180773A1 (en) | 2012-06-01 | 2013-12-05 | Ultradent Products, Inc. | Stereoscopic video imaging |
JP5979241B2 (en) * | 2012-10-18 | 2016-08-24 | 日本電気株式会社 | Camera system |
CN103248905A (en) * | 2013-03-22 | 2013-08-14 | 深圳市云立方信息科技有限公司 | Display device and visual display method for simulating 3D scene |
DE102013019604B4 (en) * | 2013-11-25 | 2018-06-14 | Smart Mobile Labs Gmbh | System consisting of a plurality of cameras and a central server, as well as procedures for operating the system |
EP3075146A4 (en) * | 2013-11-27 | 2017-07-19 | Ultradent Products, Inc. | Video interaction between physical locations |
KR102209066B1 (en) * | 2014-01-17 | 2021-01-28 | 삼성전자주식회사 | Method and apparatus for image composition using multiple focal length |
KR102104410B1 (en) * | 2014-01-20 | 2020-04-27 | 한화테크윈 주식회사 | Method of setting camera profile and apparatus of obtaining image |
US20160070460A1 (en) * | 2014-09-04 | 2016-03-10 | Adobe Systems Incorporated | In situ assignment of image asset attributes |
KR101782582B1 (en) * | 2015-12-04 | 2017-09-28 | 카페24 주식회사 | Method, Apparatus and System for Transmitting Video Based on Eye Tracking |
US20180077345A1 (en) * | 2016-09-12 | 2018-03-15 | Canon Kabushiki Kaisha | Predictive camera control system and method |
US11120675B2 (en) * | 2019-07-24 | 2021-09-14 | Pix Art Imaging Inc. | Smart motion detection device |
AU2022398348A1 (en) * | 2021-11-24 | 2024-06-06 | Phenix Real Time Solutions, Inc. | Eye gaze as a proxy of attention for video streaming services |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4513317A (en) * | 1982-09-28 | 1985-04-23 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Retinally stabilized differential resolution television display |
US20020049979A1 (en) * | 2000-05-18 | 2002-04-25 | Patrick White | Multiple camera video system which displays selected images |
US6781606B2 (en) * | 1999-05-20 | 2004-08-24 | Hewlett-Packard Development Company, L.P. | System and method for displaying images using foveal video |
US20040212677A1 (en) * | 2003-04-25 | 2004-10-28 | Uebbing John J. | Motion detecting camera system |
US20050078184A1 (en) * | 2003-10-10 | 2005-04-14 | Konica Minolta Holdings, Inc. | Monitoring system |
US7075567B2 (en) * | 2001-07-31 | 2006-07-11 | Hewlett-Packard Development Company, L.P. | Method and apparatus for controlling a plurality of image capture devices in a surveillance system |
US20060176951A1 (en) * | 2005-02-08 | 2006-08-10 | International Business Machines Corporation | System and method for selective image capture, transmission and reconstruction |
US20070076099A1 (en) * | 2005-10-03 | 2007-04-05 | Eyal Eshed | Device and method for hybrid resolution video frames |
US20070107029A1 (en) * | 2000-11-17 | 2007-05-10 | E-Watch Inc. | Multiple Video Display Configurations & Bandwidth Conservation Scheme for Transmitting Video Over a Network |
US8446509B2 (en) * | 2006-08-09 | 2013-05-21 | Tenebraex Corporation | Methods of creating a virtual window |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5852502A (en) * | 1996-05-31 | 1998-12-22 | American Digital Imaging, Inc. | Apparatus and method for digital camera and recorder having a high resolution color composite image output |
AU2003280516A1 (en) * | 2002-07-01 | 2004-01-19 | The Regents Of The University Of California | Digital processing of video images |
US7082572B2 (en) * | 2002-12-30 | 2006-07-25 | The Board Of Trustees Of The Leland Stanford Junior University | Methods and apparatus for interactive map-based analysis of digital video content |
FR2872661B1 (en) * | 2004-07-05 | 2006-09-22 | Eastman Kodak Co | MULTI-RESOLUTION VIEWING METHOD AND DEVICE |
EP2448247A1 (en) * | 2005-11-02 | 2012-05-02 | Olympus Corporation | An image processor for electronic camera |
US20070113242A1 (en) * | 2005-11-16 | 2007-05-17 | Fetkovich John E | Selective post-processing of compressed digital video |
-
2008
- 2008-07-17 US US12/669,680 patent/US9467647B2/en active Active
- 2008-07-17 US US12/669,685 patent/US20100283843A1/en not_active Abandoned
- 2008-07-17 WO PCT/US2008/070375 patent/WO2009012413A1/en active Application Filing
- 2008-07-17 WO PCT/US2008/070373 patent/WO2009012412A1/en active Application Filing
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9699360B2 (en) | 2008-07-07 | 2017-07-04 | Gopro, Inc. | Camera housing with integrated expansion module |
US10986253B2 (en) | 2008-07-07 | 2021-04-20 | Gopro, Inc. | Camera housing with expansion module |
US11025802B2 (en) | 2008-07-07 | 2021-06-01 | Gopro, Inc. | Camera housing with expansion module |
US10356291B2 (en) | 2008-07-07 | 2019-07-16 | Gopro, Inc. | Camera housing with integrated expansion module |
US9596388B2 (en) | 2008-07-07 | 2017-03-14 | Gopro, Inc. | Camera housing with integrated expansion module |
US20110037850A1 (en) * | 2009-08-12 | 2011-02-17 | Utechzone Co., Ltd. | Surveillance system for monitoring security personnel |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
WO2015057845A1 (en) * | 2013-10-18 | 2015-04-23 | Cornell University | Eye tracking system and methods for developing content |
CN103997630A (en) * | 2014-06-13 | 2014-08-20 | 中国移动通信集团广东有限公司 | Intelligent primary code stream and secondary code stream switching method and system based on TD-LTE network |
US11662651B2 (en) | 2018-08-07 | 2023-05-30 | Gopro, Inc. | Camera and camera mount |
US10928711B2 (en) | 2018-08-07 | 2021-02-23 | Gopro, Inc. | Camera and camera mount |
CN110857065A (en) * | 2018-08-23 | 2020-03-03 | 现代自动车株式会社 | Apparatus for controlling display of vehicle, system having the same, and method thereof |
USD905786S1 (en) | 2018-08-31 | 2020-12-22 | Gopro, Inc. | Camera mount |
USD894256S1 (en) | 2018-08-31 | 2020-08-25 | Gopro, Inc. | Camera mount |
USD989165S1 (en) | 2018-08-31 | 2023-06-13 | Gopro, Inc. | Camera mount |
USD1023115S1 (en) | 2018-08-31 | 2024-04-16 | Gopro, Inc. | Camera mount |
USD997232S1 (en) | 2019-09-17 | 2023-08-29 | Gopro, Inc. | Camera |
USD1024165S1 (en) | 2019-09-17 | 2024-04-23 | Gopro, Inc. | Camera |
USD991318S1 (en) | 2020-08-14 | 2023-07-04 | Gopro, Inc. | Camera |
USD1004676S1 (en) | 2020-08-14 | 2023-11-14 | Gopro, Inc. | Camera |
US12041326B2 (en) | 2021-05-26 | 2024-07-16 | Gopro, Inc. | Camera housing with expansion module |
WO2023177551A1 (en) * | 2022-03-18 | 2023-09-21 | Apple Inc. | Dynamic binning passthrough content |
Also Published As
Publication number | Publication date |
---|---|
WO2009012413A1 (en) | 2009-01-22 |
US20100231734A1 (en) | 2010-09-16 |
US9467647B2 (en) | 2016-10-11 |
WO2009012412A1 (en) | 2009-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9467647B2 (en) | Multiple resolution video network with context based control | |
US10972655B1 (en) | Advanced video conferencing systems and methods | |
US20010024233A1 (en) | Camera control system, camera server, camera client, control method, and storage medium | |
US10904446B1 (en) | Advanced video conferencing systems and methods | |
US20020135677A1 (en) | Image sensing control method and apparatus, image transmission control method, apparatus, and system, and storage means storing program that implements the method | |
JP2003111050A (en) | Video distribution server and video reception client system | |
KR20180001551A (en) | Personalized shopping mall system using virtual camera | |
JP2005117163A (en) | Camera server apparatus, control method thereof, computer program and computer-readable storage medium | |
WO2014094537A1 (en) | Immersion communication client and server, and method for obtaining content view | |
US20180338093A1 (en) | Eye-tracking-based image transmission method, device and system | |
US10951858B1 (en) | Advanced video conferencing systems and methods | |
US10805527B2 (en) | Image processing device and method | |
CN105721829A (en) | Surveillance Camera, Recording Apparatus For Surveillance, And Surveillance System | |
JP2005033570A (en) | Method and system for providing mobile body image | |
EP2721812A1 (en) | Method and system for encoding multi-view video content | |
CN112788233B (en) | Video shooting processing method and electronic equipment | |
US10965908B1 (en) | Advanced video conferencing systems and methods | |
US20030153351A1 (en) | System for use in a monitoring and management system based on the internet | |
WO2013102546A1 (en) | A method for video surveillance, a related system, a related surveillance server, and a related surveillance camera | |
Lee et al. | A real-time face tracking system based on a single PTZ camera | |
JP5072103B2 (en) | Angle of view control apparatus and angle of view control method | |
KR101193129B1 (en) | A real time omni-directional and remote surveillance system which is allowable simultaneous multi-user controls | |
JP2010263422A (en) | Information processing device, operating method for the same, and program | |
JP4230714B2 (en) | Information providing method, information processing apparatus, information collecting system, and content display program | |
WO2007035353A2 (en) | Audio visual communication system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCV | Information on status: appeal procedure |
Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER |
|
STCV | Information on status: appeal procedure |
Free format text: EXAMINER'S ANSWER TO APPEAL BRIEF MAILED |
|
STCV | Information on status: appeal procedure |
Free format text: ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS |
|
STCV | Information on status: appeal procedure |
Free format text: BOARD OF APPEALS DECISION RENDERED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |