US20130121527A1 - Systems and methods for analysis of video content, event notification, and video content provision - Google Patents
- Publication number
- US20130121527A1
- Authority
- US
- United States
- Prior art keywords
- video
- video data
- user
- interest
- segment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G06K9/3241—
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19613—Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Definitions
- the present application discloses the use of video analysis technology (such as that described in part in U.S. Pat. No. 6,940,998 (the “'998 patent”), the disclosure of which is incorporated herein by reference) to analyze video data streams from cameras in real time, determine the occurrence of a significant event, and send a notice that may include a selected segment of the video data associated with the significant event.
- the application also discloses the use of video analysis technology to determine whether video data includes content corresponding to a user preference for content, and providing at least a portion of the video data including the content of interest to be made accessible to the user.
- Video analytics technology, which is the adaptation of advanced computer vision techniques to characterize video content, has been limited to highly sophisticated, expensive implementations for industry and government.
- existing video analytics techniques require large amounts of processing capacity in order to analyze video at or near real time.
- Present technologies available to consumers and small business users do not provide sophisticated, adaptable and practical solutions.
- Existing systems cannot be made to operate effectively on off-the-shelf personal computers due to the limitations of processing capacity associated with such platforms.
- existing closed circuit television systems are limited to dedicated network configurations.
- This activity can be characterized by long periods of inactivity punctuated by rare but sudden episodes of highly significant activity requiring the application of focus, careful consideration and judgment.
- significant events will in all likelihood go unnoticed by the user.
- These situations are thought to contribute to the slow adoption of “nanny-cam” systems. They also limit the ability of online content providers to create convenient video distribution services for new classes of mobile phones and similar communication and display devices.
- existing systems need to be installed in a dedicated network, they do not have the flexibility to accommodate the dynamics of a rapidly-developing or transient situation.
- existing systems typically send a video representation of observed location to one end-point.
- a user will want to have the ability to change the recipient of video data from an observed location.
- Existing video data storage and retrieval systems only characterize stored material by information or metadata provided along with the video data itself. This metadata is typically entered manually, and only provides a single high level tag for an entire clip or recording, and does not actually describe the content of each scene or frame of video.
- Existing systems do not “look inside” a video to observe the characteristics of the video content in order to classify the videos. Classification therefore requires that a human must discover what content a video contains, usually through watching the video or excerpts therefrom, and provide tags or other descriptive information to associate with the video data. This process can be time- and energy-intensive as well as extremely inefficient when dealing with large amounts of video data, such as can be encountered in cases of multiple, real-time streams of video data.
- a novel method and system for remote event notification over a data network involve receiving and analyzing video data, optionally consulting a profile to select a segment of interest associated with a significant event from the analyzed video data, optionally sending the segment of interest to a storage server, optionally further consulting the profile to encode the segment of interest at the storage server and to send data associated with the segment of interest to one or more end devices via the wireless network, optionally triggering the end device to download the encoded segment of interest from the storage server, and optionally displaying the segment of interest at the end device.
- the method involves analyzing image data within a video stream in real time or near real time, optionally consulting a profile to select a segment or segments of interest in the video stream, and optionally sending the segment of interest or the entire video stream to users of the network whose profiles match the segment of interest.
- FIG. 1 is a schematic illustration of an environment in which video analysis technology can be used for event notification.
- FIG. 2 is a flow chart of a disclosed method.
- FIG. 3 is a schematic representation of a disclosed system.
- FIG. 4 is a function chart of an element of a user profile.
- FIG. 5 is a block representation of an event notice sent to a user by the system of FIG. 3.
- FIGS. 6 and 7 are flowcharts of disclosed methods.
- FIGS. 8A, 8B, 9A, 9B, 10A, and 10B are schematic representations of video analysis and handling of detected events.
- FIG. 11 is a schematic illustration of a system in which selected video is transmitted to a central security station.
- FIG. 12 is a schematic representation of an embodiment of a system for observing an infant's sleeping area.
- FIG. 13 is a schematic illustration of a system of a content-based, video subscription service using video analysis.
- FIG. 14 is a schematic illustration of a method for conducting targeted advertising using video analysis.
- One application can be used to analyze video data streams from cameras in real time, determine the occurrence of a significant event, and send a notice that may include a selected segment of the video data associated with the significant event.
- the application also discloses the use of video analysis technology to determine whether video data includes content corresponding to a user preference for content and making at least a portion of the video data, including the content of interest, accessible to a user or other designated recipient.
- FIG. 1 An environment in which video analysis technology can be used for event notification is illustrated schematically in FIG. 1 .
- a user 90 may wish to receive notice of an event of interest to the user if and when it occurs at an observed location 10 .
- a video camera 20 is positioned to view the observed location 10 and can provide video data to a video analysis functional block 30 .
- Sensors 45 may also be associated with observed location 10 and output from such sensors 45 can be provided to video analysis functional block 30 .
- User preferences 40 can include information about the kinds of events occurring at observed location 10 that are of interest to the user 90 .
- Video analysis functional block 30 refers to user preferences 40 in conjunction with analyzing video data from video camera 20 to determine whether an event of interest to user 90 has taken place at observed location 10 .
- the event notice may include a segment of video data, or some data representative of the segment of video data corresponding to the event of interest, and may be provided to a communications interface 60 .
- Communications interface 60 can be communicatively coupled to a public data communications network 70 , such as the Internet, and thus can send the event notice via the network 70 to a user device 80 .
- User device 80 can display an event notice to user 90 .
- the event notice can also be sent to other recipients 85 .
- In step 100, video data of observed location 10 is received from video camera 20 .
- In step 105, data from other sensors associated with the observed location, such as a motion sensor, may be received.
- In step 110, the video data is analyzed with reference to user preferences 40 and, optionally, the data received in step 105.
- In step 120, a determination is made as to whether an event of interest to user 90 has occurred at observed location 10 . If not, the observation simply continues (receiving more video data). If so, then at step 130 a portion of video data associated with the event of interest is selected, and at step 140 an event notice is generated which, as noted above, can include the selected portion of the video data.
- The event notice is then sent to the user device 80 at step 150.
- control instructions may be sent to controllable devices at the observed location such as for switching on a light or locking a door.
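The observation loop of FIG. 2 (steps 100 through 150) can be sketched as follows. This is a minimal sketch under stated assumptions: the function names, the "activity" score, and the threshold-based stub analysis are illustrative stand-ins for a real video analytics engine, not part of the disclosure.

```python
# Illustrative sketch of the FIG. 2 event-notification loop.
# All names are hypothetical; detect_event() is a stub standing in
# for actual video analytics.

def detect_event(frame, preferences):
    """Stub analysis: an event of interest is any frame whose
    'activity' score meets the user's threshold."""
    return frame.get("activity", 0) >= preferences["activity_threshold"]

def observe(frames, preferences):
    notices = []
    for frame in frames:                      # step 100: receive video data
        if detect_event(frame, preferences):  # steps 110-120: analyze, decide
            segment = frame["segment_id"]     # step 130: select video portion
            notices.append({                  # step 140: generate event notice
                "message": "event of interest detected",
                "video_segment": segment,
            })                                # step 150: would send to device
    return notices

frames = [{"activity": 1, "segment_id": "a"},
          {"activity": 9, "segment_id": "b"}]
result = observe(frames, {"activity_threshold": 5})
```

In a real deployment the stub predicate would be replaced by the analytics described in the '998 patent, and step 150 would transmit over the communications interface.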
- the observed location can be any setting that a user wishes to watch for events of interest.
- observed location 10 can be a user's home including any room, doorway, window or surrounding area such as an entrance area, sidewalk, driveway, adjacent roadway, postal box, yard, porch, patio, pool, garden, etc.
- Observed location 10 can also be a workplace, office, parking lot, reception area, loading dock, childcare center, school, storage facility, etc.
- the system can be programmed to detect when a person enters one of the aforementioned areas and to push associated video of the event of interest to a user device.
- the camera can be facing a door, for example a front or back door of a residence, in order to observe household traffic.
- Such an application can notify a user when a person opens an observed door, for example when a child comes home from school.
- a camera can face outward from a front or rear door for exterior detection.
- a user can be notified, for example, as soon as someone approaches within a specified distance of the observed door, and video of the event can be sent to the user.
- the video camera 20 can be any device capable of viewing an observed location 10 and providing video data representing a sequence of images of the observed location.
- Video camera 20 can be a device having a lens that focuses an image on an electronic tube (analog) or on a chip that converts light into electronic impulses.
- video camera 20 can output video data in digital form, but conversion of data from an analog camera into digital form can be performed in any suitable way as will be apparent to the artisan.
- the images may be in any part of the light spectrum including visible, infrared or ultraviolet.
- Video camera 20 can be a monochrome (black and white) or color camera.
- Video camera 20 can be set in a fixed position or placed on a pan-and-tilt device that allows the camera to be moved up, down, left, and right.
- the lens may be a fixed lens or a zoom lens.
- Video analysis functional block 30 can be implemented in a variety of ways, but may advantageously be implemented using the techniques disclosed in the '998 patent performed on a suitable hardware device, for example, video analytics software operating on a conventional personal computer. Video analysis functional block 30 can be integrated into another piece of hardware, such as a network device, the video camera 20 , the notice generator functional block 50 , or the communications interface 60 .
- User preferences 40 may reflect user-specified preferences for circumstances or events noteworthy to user 90, such as the presence or movement of a person, vehicle, or other object at the observed location 10 or a particular area of the observed location 10 .
- notice generator block 50 can be implemented on software running on a conventional personal computer. Notice generator functional block 50 can be integrated into another piece of hardware, such as a network device, the video camera 20 , the video analysis functional block 30 , or the communications interface 60 .
- Communications interface 60 may be any suitable device capable of communicating data including an event notice to user device 80 via a network 70 .
- communications interface 60 may be a cable modem, DSL modem or other broadband data device and any associated device such as a router.
- User device 80 can include any device capable of receiving, from network 70 , and displaying or rendering to a user an event notice, including a personal computer with display, cellular telephone, smart phone, PDA, etc.
- a user 90 can be an individual or a group of individuals.
- the video camera 20 can be implemented as portable, digital video camera 22 or IP camera 24 .
- a microcomputer 41 or personal computer can store the user preferences. The video analysis can be performed on its processor or on a notice generator.
- a router 62 coupled to the microcomputer 41 serves as communication interface 60 and communicates via the network 70 , such as the Internet or World Wide Web, with any of a variety of user devices 80 , including via a wireless communications network (such as a cellular network with a communications tower) to a laptop computer with a cellular modem 82 , a smart phone 84 , PDA 86 , or cellular phone 88 .
- the user devices can be any other device able to be coupled directly or indirectly to the communications network, such as an IP video monitor 83 , computer terminal 87 or a laptop computer coupled to a wide area network.
- user preferences 40 can include information about the kinds of events that are of interest to the user.
- the user-defined profile can be used to cause an event notice to be sent when an object is removed from or placed in a defined observed location or area.
- Such an embodiment can be used, for example, to watch fences, gates, stations, or other public or private areas for the presence of any designated behavior.
- a camera can watch over a home office, artwork, office safe, or other valuables.
- User preferences 40 can also include information about the form of event notice the user wishes to receive (resolution, encoding, compression, duration), video data to be included with a notice, the destination(s) of the notice, and actions that may be taken at the observed location (lock a door, switch on a light).
- user preferences 40 can govern the interaction of cameras, users, user devices, recording and analytics. These interactions can be driven by the results of analysis, such as occurs at step 110 in FIG. 2 .
- User preferences 40 can be maintained in a user profile.
- User preferences 40 can be associated with individual cameras, recorders, analytics, users, recipients, user devices and types of responses. Groups of such individuals can also be defined, to which particular user preferences can be applied. Each member of the group can inherit the properties that pertain to that group. Membership in multiple groups is possible. For example, a camera group may have all cameras that are from a single manufacturer and are set up for video at CIF resolution at 15 frames per second, using MPEG-4 Simple Profile compression over the RTP protocol.
- An analytics group may include a set of “outdoor, parking lot” events such as a loitering person and an illegally parked vehicle, with clutter rejection set for sunlit illumination under snowy conditions.
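The group-based inheritance described above can be sketched as a simple property merge, in which each member inherits the properties of every group it belongs to. The group names, property keys, and merge order below are illustrative assumptions.

```python
# Hypothetical sketch of group inheritance for user preferences 40:
# a camera (or user, device, etc.) inherits the merged properties of
# its groups; per-member overrides take precedence.

GROUPS = {
    "single_manufacturer_cams": {"resolution": "CIF", "fps": 15,
                                 "compression": "MPEG-4 SP",
                                 "transport": "RTP"},
    "outdoor_parking_lot": {"events": ["loitering_person",
                                       "illegally_parked_vehicle"],
                            "clutter_rejection": "sunlit_snow"},
}

def effective_properties(memberships, overrides=None):
    """Merge group properties in membership order; overrides win."""
    props = {}
    for group in memberships:
        props.update(GROUPS[group])
    props.update(overrides or {})
    return props

cam = effective_properties(["single_manufacturer_cams",
                            "outdoor_parking_lot"],
                           overrides={"fps": 10})
```

Membership in multiple groups is simply a longer membership list; later groups and explicit overrides shadow earlier values, one plausible way to resolve conflicting inherited properties.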
- User profile 400 can include video analytics parameters 410 , notice video data parameters 420 , response parameters 430 , device control parameters 440 , notice destination parameters 450 , camera property parameters 460 , and user property parameters 470 .
- Video analytics properties 410 govern video analysis for event detection and recognition. These properties enable or disable the detection of specific events. Multiple video event detection capabilities can be provided such as the detection of a single person, a stationary object or a moving vehicle. Additionally, they specify control parameters for each event, such as the size of a stationary object or the direction of movement of a vehicle.
- Video analytics parameters 410 can include parameters provided to the video analysis technology to identify what types of objects are of interest (object type parameters 412, for example a person, a vehicle, or a type of object such as a package), what characteristic of each object is relevant (object parameters 414, such as the size or shape of an object, whether a person is standing, sitting, lying, walking, or running, whether a vehicle is stationary or moving, etc.), to specify the handling of the image data from the camera (video data parameters 416, such as sensitivity, and clutter rejection to accommodate environmental effects such as rain, snow, or waves, or illumination effects such as shadow, glare, or reflections), and to identify aspects of the observed location (location parameters 418, such as whether there are different zones in the field of view of the camera to be handled differently, e.g., a yard, sidewalk, driveway, or street). For example, location parameters 418 for an application using an outdoor camera viewing both a door and driveway could be set to send an event notice upon the detection of a car in the driveway zone, and the presence of a package in the door zone.
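The zone handling of location parameters 418 can be sketched as follows. The rectangular zone shapes, coordinates, and event names are assumptions made for the example; real analytics could use arbitrary regions.

```python
# Illustrative sketch of location parameters 418: the camera's field
# of view is divided into named zones, each with its own objects of
# interest, as in the door-and-driveway example above.

ZONES = {
    "driveway": {"rect": (0, 0, 50, 100), "notify_on": {"vehicle"}},
    "door":     {"rect": (50, 0, 100, 40), "notify_on": {"package", "person"}},
}

def zone_of(x, y):
    """Return the name of the zone containing pixel (x, y), if any."""
    for name, zone in ZONES.items():
        x0, y0, x1, y1 = zone["rect"]
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def should_notify(detection):
    zone = zone_of(*detection["position"])
    return zone is not None and detection["type"] in ZONES[zone]["notify_on"]

car = {"type": "vehicle", "position": (10, 20)}
package = {"type": "package", "position": (60, 10)}
```

A car detected in the driveway zone triggers a notice, while the same detection type in the door zone would not.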
- the notice video data parameters 420 can include parameters provided to the video analysis technology specifying how video associated with an event of interest is recorded
- Recording parameters 422 can specify the bit rate of the recorded video data, what encoding standard is to be used and the resolution of the video.
- Scheduling parameters 424 specify a mapping between the date, time, and properties. Scheduling parameters 424 can specify how recording parameters are to change based on the time or date, such as how to vary resolution and compression based on time of day and day of the week, and what events are of interest during particular times. Camera properties such as resolution or compression quality may be modified based on the schedule. Similarly, the set of events detected and their properties can be changed on a schedule.
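The mapping that scheduling parameters 424 describe can be sketched as a lookup from a time window to recording properties and events of interest. The specific windows, settings, and wrap-around convention below are illustrative assumptions.

```python
# Hypothetical sketch of scheduling parameters 424: each entry maps an
# hour range to recording properties and the events of interest in
# that window; the second window wraps past midnight.

SCHEDULE = [
    # (start_hour, end_hour, properties)
    (8, 18, {"resolution": "CIF", "compression": "high",
             "events": ["person"]}),
    (18, 8, {"resolution": "4CIF", "compression": "low",
             "events": ["person", "vehicle"], "exposure": "long"}),
]

def properties_for_hour(hour):
    for start, end, props in SCHEDULE:
        in_window = (start <= hour < end) if start < end \
            else (hour >= start or hour < end)  # window wraps past midnight
        if in_window:
            return props
    raise ValueError("no schedule entry covers hour %d" % hour)

night = properties_for_hour(22)
day = properties_for_hour(10)
```

The nighttime entry illustrates the exposure-time adjustment mentioned below for capturing a non-blurred license plate after dark.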
- Detected event parameters 426 specify how video data is to be treated based on the detected event, such as the resolution, compression, frame rate, quality, bit rate and exposure time to apply in the case of different detected events such as fast-moving objects, very slow-moving objects, very small objects, illuminated objects, etc.
- Detected event parameters can be modified for the entire frame or for parts of the frame based on an event that is detected, as disclosed in US Patent Application Publication 2006/0165386. For example, if a video analytics block determines that a frame sequence contains a person, then the user profile 400 associated with the video analytics block might be programmed to specify that the subject video sequence be compressed according to a compression scheme that preserves quality, even at the expense of storage space.
- if no person is detected, the profile might be programmed to specify that the system record the video using a compression scheme that conserves a relatively large amount of storage space as compared to the raw video.
- the disclosed technology allows the properties of a camera to also be changed based both on a schedule, according to scheduling parameters 424 , and on events that are detected by a video analytics block, according to detected event parameters 426 .
- the exposure time for the camera can be adjusted during the nighttime hours to capture a non-blurred version of the license plate.
- the response parameters 430 can include parameters provided to the video analysis technology specifying actions to take when sending an event notice in response to the detection of an event of interest.
- Response parameters can include rules governing how notifications associated with detected events are disseminated to users and devices.
- dissemination rules 432 provide a mapping between an event or multiple events and actions resulting from these events. Actions can be any combination of electronic communication in the form of text, multimedia attachment, streaming video, or in the form of device control.
- Dissemination rules 432 can specify to whom and in what form a notice is to be sent.
- Response parameters 430 can be set, for example, to allow a friend or neighbor to observe a person's house when the person is out of town by setting the parameters to send notifications to the friend or neighbor as another recipient.
- Response parameters 430 can also include timeout parameters 434 specifying how long the system is to persist in notifying a user, request authorization parameters specifying when and from whom the system is to request authorization to send an event notice to a user or other recipient, etc.
- Timeout parameters 434 can specify mechanisms for clearing or resetting an event condition. A response may be as simple as a timeout after which all conditions are cleared or reset. Other examples of timeout parameters 434 include automated clearing of event conditions when any video is requested or viewed regardless of the user, or any device-initiated actions. Complex timeout parameters can require the user to interact with a local or remote device, or send electronic communication back to the system which would then authorize the action and clear the condition.
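Dissemination rules 432 and timeout parameters 434 together can be sketched as a table mapping events to recipients and actions, with a notice that is suppressed once acknowledged or once its timeout expires. The rule names, recipients, and timeout values are assumptions for the example.

```python
# Illustrative sketch of dissemination rules 432 and timeout
# parameters 434. Events map to recipients and actions; an event
# condition clears when acknowledged or when its timeout elapses.

RULES = {
    "person_at_door": {"recipients": ["owner", "neighbor"],
                       "action": "send_clip", "timeout_s": 600},
    "vehicle_in_driveway": {"recipients": ["owner"],
                            "action": "send_text", "timeout_s": 120},
}

def disseminate(event, now, acknowledged=False, raised_at=0):
    """Return (recipient, action) pairs still owed a notification."""
    rule = RULES[event]
    expired = (now - raised_at) > rule["timeout_s"]
    if acknowledged or expired:       # condition cleared or timed out
        return []
    return [(r, rule["action"]) for r in rule["recipients"]]

pending = disseminate("person_at_door", now=300)
```

Adding a friend or neighbor as another recipient, as in the out-of-town example above, is just another entry in the rule's recipient list.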
- Device control parameters 440 can include parameters provided to the video analysis technology specifying other actions to take, in addition to or in lieu of sending an alert, in response to the detection of an event of interest.
- the device control parameters 440 can specify, for example, whether a door gets locked, a light gets turned on or off, sirens or alarms sound, whether an alarm is to be reset, a radio signal or beacon gets transmitted, etc.
- An example interaction is to switch on an exterior light only when a person is detected, but not if a vehicle is detected, or to sound the doorbell when a person is detected within a certain distance from a door. Additionally, recording properties may be modified or further analysis may be triggered.
- the notice destination parameters 450 can include device parameters 452 provided to the video analysis technology for interacting with devices used to record, stage and view video and notifications.
- the notice destination parameters 450 can specify treatment of video for particular device requirements such as storage capacity, bandwidth, processing capacity, decoding capability, image display resolution, text display capabilities and protocols supported, such as email (POP, SMTP, IMAP), SMS (text messaging), RSS, web browser, media player, etc. These properties can be used to facilitate transmission to particular user devices, such as higher compression for transmission to low-bandwidth devices, such as wireless devices.
- the video analysis technology can refer to notice destination parameters 450 to implement scalable compression based on video analysis with an MPEG-4-like streaming framework for mobile content delivery.
- the notice destination parameters 450 can also include parameters provided to the video analysis technology specifying various notice priorities 454 , for example, based upon different notice conditions.
- the notice priorities 454 can have different associated levels of and modes of notice. For example, for a critical notice a user can specify that he or she be notified by a voice call to his or her cellular telephone. The call can contain a recorded message notifying the user of the associated notice condition. Another priority can be associated with a different level of notice, for example, an email or a text message.
- the user profile can specify that a selected segment of video associated with the notice condition be sent automatically to the user's mobile device. For example, a user can specify that in the case of an unexpected vehicle in the driveway, the system send a representation of the selected segment to the user's mobile device, for example, a cell phone, personal digital assistant, smart phone or a laptop computer.
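The priority levels and modes of notice described above can be sketched as a small mapping; the level names, modes, and video-attachment choices are illustrative assumptions.

```python
# Hypothetical sketch of notice priorities 454: each priority maps to
# a notification mode, e.g. a voice call for critical notices and
# email or text for lower priorities, per the examples above.

PRIORITY_MODES = {
    "critical": {"mode": "voice_call", "attach_video": True},
    "normal":   {"mode": "email", "attach_video": True},
    "low":      {"mode": "sms", "attach_video": False},
}

def notice_for(event, priority):
    mode = PRIORITY_MODES[priority]
    notice = {"event": event, "mode": mode["mode"]}
    if mode["attach_video"]:
        notice["video"] = "segment://" + event  # placeholder segment ref
    return notice

n = notice_for("unexpected_vehicle_in_driveway", "critical")
```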
- the camera property parameters 460 can include parameters provided to a camera to control camera capabilities 462 such as frame rate, quality, bit rate, colorspace, quantization, compression format, transport, and encryption. They can also specify protocols and mechanisms 464 for the control of the cameras, for example for pan-tilt-zoom control, and further including contrast, gain, exposure, white balance, and gamma settings.
- User property parameters 470 specify valid users for the system, their credentials, contact information and authorization mechanisms. User property parameters 470 also specify rights for viewing, camera control, administration (ability to modify profile properties), device control and dissemination control.
- the parameters in user profile 400 may be specified by user 90 .
- Default profiles may be defined from which a user may choose, and which a user may modify.
- a user may have a user profile 400 for each camera associated with an observed location and/or may have a different profile for different times of the day, days of the week, seasons of the year, etc.
- a user can save multiple profiles for multiple circumstances, for example, a vacation profile, a natural disaster profile, a normal profile, a guests profile, or others, to accommodate different circumstances.
- a user profile 400 may be stored locally with the device performing the video analysis, for example the video analysis functional block 30 of FIG. 1 .
- it may be stored in persistent storage on the same microcomputer on which video analysis software operates.
- user profile 400 may be stored remotely, provided that it is readily available as input for the video analysis.
- a suitable user interface may be provided to allow the user to define and modify a user profile 400 .
- FIG. 5 schematically illustrates an exemplary event notice 600 .
- the event notice 600 can include a message component 610 and a video data component 620 .
- Message component 610 can include text that conveys to a user relevant information about the event, such as “a vehicle has entered the driveway of your residence” or “a person has entered your backyard.” This textual information can be in any format suitable for the user device to which notice 600 is to be sent, such as an email, MMS, SMS, or page.
- Information can also be provided in another form, such as an audio message, which may be generated by text-to-speech conversion, to be conveyed by a call to a telephone (cellular, land line, voice over IP, etc.).
- Video data component 620 can include selected segments of video data associated with the event of interest. For example, in conjunction with the message “a person entered your backyard” contained in message component 610 , the video data component 620 could include video data for the time period starting with the person entering the backyard (or a field of view of the camera if it does not encompass the boundary of the backyard), and ending with the person leaving the backyard, or ending after some more limited period of time.
- the video data can be a modified version of the raw video data from a video camera, for example, at a lower frame rate, at lower resolution, compressed, and/or encoded. Again, the format of the video component is selected to be viewable on the user device to which the event notice 600 will be sent.
- the video data component 620 may alternatively be some other representation of the video data of potential interest to the user.
- the data may be in the form of one or more still images selected from among the video data frames to be representative of the video data. This may be appropriate, for example, where the user device 80 can render a photo but not a video clip.
- the video data component 620 may be in the form of a link or other pointer to a network location from which the user may pull the video data of interest.
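Assembling event notice 600 from its message component 610 and video data component 620 can be sketched as follows: the video component becomes a clip, a representative still, or a link depending on what the user device can render. The capability flag names and the URL scheme are assumptions for the example.

```python
# Illustrative sketch of event notice 600 assembly: message component
# 610 is always present; video data component 620 degrades from clip
# to still image to link based on device capabilities.

def build_notice(message, segment, device):
    notice = {"message": message}               # message component 610
    if device.get("plays_video"):
        notice["video"] = segment               # full selected segment
    elif device.get("shows_images"):
        notice["image"] = segment + "/frame0"   # representative still
    else:
        # pointer the user can follow to pull the video later
        notice["link"] = "https://example.invalid/clips/" + segment
    return notice

phone = {"plays_video": False, "shows_images": True}
n = build_notice("a person has entered your backyard", "clip42", phone)
```

Per user profile 400, the same event could thus yield a video clip on a smart phone, a still on a simpler handset, and a link on a text-only device.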
- the format of the event notice 600 can be determined by reference to the parameters in the user profile 400 and may depend on, for example, the capabilities of the user device(s) 80 to which event notice 600 is to be sent, the nature of the event, the portion of the observed location to which the event relates, etc.
- the destinations of the event notice 600 can be determined by reference to the parameters in the user profile 400 .
- user profile 400 may specify that an event notice 600 relating to a potential intrusion in the back yard of observed location 10 during a weekday should be sent to the user's PDA and to the user's computer at the user's workplace.
- FIG. 6 is a flowchart showing some analytical steps that can be included in step 110 of FIG. 2 .
- In step 111, the video can be subjected initially to a rough analysis to detect the presence of motion by non-trivially-sized objects. If there is such motion, then in step 112 it is determined whether the motion takes place during a particular time window, such as in the evening or in the morning. If so, then in step 113 the video data segment associated with the movement can be subjected to further analysis, for example, facial recognition analysis. If no motion is detected, or if motion is detected but not in the relevant time window, then no further analysis is conducted and new video data is analyzed.
- These optional analyses can be specified in the user profile 400 , and can reduce the possibility of false or undesired event notices.
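The cascaded analysis of FIG. 6 can be sketched as a chain of cheap checks before an expensive one: motion first, then the time window, then deeper analysis. The predicates below are stubs standing in for real analytics; the default time window is an assumption.

```python
# A minimal sketch of the FIG. 6 cascade: rough motion detection
# (step 111), a time-window check (step 112), then deeper analysis
# such as facial recognition (step 113). Stub logic only.

def analyze(segment, window=(18, 23), recognize=lambda s: "unknown"):
    if not segment.get("motion"):              # step 111: rough motion check
        return None
    hour = segment["hour"]
    if not (window[0] <= hour <= window[1]):   # step 112: time window
        return None
    return recognize(segment)                  # step 113: deeper analysis

evening = analyze({"motion": True, "hour": 20})
```

Because most segments fail the cheap early checks, the costly recognition step runs rarely, which is what reduces false or undesired event notices.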
- FIG. 7 is a flowchart showing some analytical steps that can be included in step 110 of FIG. 2 based on video data received in step 100 and data received from other sensors in step 105 . If data is received from step 105 , such as output from a motion sensor indicating that motion of some object was detected, then in step 111 the video data can be initially subjected to a rough analysis to detect motion. If motion is detected, then in step 114 further analysis of the video data can be conducted to classify the moving object, e.g., a person, animal, pet, or vehicle. In step 115 a determination is made whether the object is a person.
- if so, then in step 116 further analysis is performed to determine the identity of the person, e.g., by comparison to image data for known persons. If not, observation of the video data continues.
- in step 117 a determination is made whether the person is unauthorized to be present at the observed location (or the portion viewed by the video camera), e.g., because the person is unknown, or because the person is known but does not have explicit authorization. If the person is authorized, observation of the video data continues.
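The FIG. 7 chain of steps 114 through 117 might be sketched as follows, with stubbed classification and identification stages. All names and data shapes are invented for illustration, and the sketch assumes an unknown or unauthorized person is what triggers a notice.

```python
# Hypothetical stand-ins for the FIG. 7 stages; a real system would use a
# trained classifier (step 114) and a face matcher (step 116).
def classify_object(segment):
    """Step 114: classify the moving object (person, animal, pet, vehicle...)."""
    return segment["label"]

def identify_person(segment, known_persons):
    """Step 116: match against image data for known persons (stubbed by id)."""
    return known_persons.get(segment.get("face_id"))

def handle_motion(segment, known_persons, authorized):
    """FIG. 7 decision chain: classify, identify, then check authorization."""
    if classify_object(segment) != "person":              # step 115
        return "continue_observing"
    identity = identify_person(segment, known_persons)    # step 116
    if identity is not None and identity in authorized:   # step 117
        return "continue_observing"
    return "generate_event_notice"    # unknown or unauthorized person

known = {"face-7": "Alice"}
print(handle_motion({"label": "person", "face_id": "face-9"}, known, {"Alice"}))
# -> generate_event_notice
```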
- FIGS. 8A and 8B illustrate how a user can be notified in a residential setting.
- the camera is trained on a monitored location, in this case the road fronting the residential property.
- a user can receive an event notice, for example, when a vehicle enters a camera's field of view.
- in FIG. 8A , when the monitored location is clear, the system sends no event notification.
- the car enters the field of view of the camera, as shown in FIG. 8B .
- the video analysis functional block, which in this embodiment is integrated into the video camera, refers to user preferences in conjunction with analyzing the video data from the video camera to determine whether an event of interest to the user has taken place at the monitored location.
- the car approaching in front of the residence constitutes an event of interest according to the user profile; therefore, that determination is provided to a notice generator functional block, which generates an event notice that is provided to the user.
- the event notice includes a segment of video data corresponding to the event of interest, i.e., video of the car. This video of the car is displayed at the user device to the user.
- the user has specified in the user profile to send a segment of interest of the video data, rather than a mere image, in the case of the detection of a passing car due to the nature of the event of interest—the user is interested in information about the behavior of the car as well as its presence.
- a user profile can direct the system to notify a user when a new, stationary object is introduced into the field of view.
- FIGS. 9A and 9B illustrate how a user can be notified when a delivery has been made to a monitored location.
- the video camera is trained at a delivery drop-off point, in this case the front porch (the monitored location), to monitor for delivery of a package that is expected to be left at the drop-off point, and that will need to be recovered.
- in FIG. 9A , the system sends no event notice to the user.
- the package is placed in the field of view of the camera, as illustrated in FIG. 9B .
- the video analysis functional block refers to user preferences in conjunction with analyzing the video data from the video camera to determine whether an event of interest to the user has taken place at the monitored location.
- the package sitting on the front porch constitutes an event of interest according to the user profile; therefore, that determination is provided to a notice generator functional block, which generates an event notice that is provided to the user.
- the event notice includes a frame from a segment of video data corresponding to the event of interest. This frame of the package is displayed at the user device to the user.
- the user has specified in the user profile that only a frame of the video showing the package be displayed at the user device, due to the lack of additional informational content associated with video of the package—merely the presence of the package is what is interesting to the user.
- such an application can be implemented to help conduct a shipping business efficiently, for example, so that personnel inside a warehouse can become aware of an approaching delivery or pick-up, and make preparations in order to expedite the process.
- FIGS. 10A and 10B illustrate how an “invisible fence” can be drawn around a monitored location, without the use of traditional motion sensors and/or door or window switches. This enables the perimeter of the fence to be controlled merely by adjusting the camera perspective, and therefore it can be placed anywhere.
- the invisible fence is used to monitor the front yard of a residence and to send an event notice to the user when a child has left the yard.
- in FIG. 10A , the video data reflects the child's presence, and the video analysis functional block does not determine the existence of an event of interest based on the user profile.
- in FIG. 10B the child has left the yard, and video data of the monitored location reflecting this event is received from the video camera (step 100 of FIG. 2 ).
- the video data is analyzed with reference to user preferences (step 110 of FIG. 2 ), and the determination is made that an event of interest to the user has occurred at the monitored location (step 120 of FIG. 2 ). Therefore, a portion of the video data associated with the event of interest is selected (step 130 , FIG. 2 ) and then the event notice (“ALERT!”) is generated (step 140 , FIG. 2 ) and sent to the user device (step 150 , FIG. 2 ).
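The recapped steps 100 through 150 of FIG. 2 amount to an observe-analyze-notify loop, which can be sketched as below. The function names and data shapes are placeholders, not the patent's API.

```python
# Sketch of the FIG. 2 loop: receive video (step 100), analyze it against user
# preferences (step 110), decide whether an event of interest occurred (step
# 120), select the associated video portion (step 130), generate the notice
# (step 140), and send it (step 150). All names are illustrative placeholders.
def observe(frames, preferences, analyze, send):
    for frame in frames:                      # step 100: receive video data
        event = analyze(frame, preferences)   # steps 110/120: detect event
        if event is None:
            continue                          # no event: keep observing
        notice = {"alert": event["kind"],     # step 140: generate notice...
                  "video": event["segment"]}  # ...with selected video (step 130)
        send(notice)                          # step 150: deliver to user device

sent = []
observe(
    frames=[{"motion": False}, {"motion": True}],
    preferences={},
    analyze=lambda f, p: {"kind": "left_yard", "segment": f} if f["motion"] else None,
    send=sent.append,
)
assert sent == [{"alert": "left_yard", "video": {"motion": True}}]
```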
- a user can use an invisible fence application of the invention to find out if a vehicle stops in front of a house or drives by slowly; to keep an eye on neighbors or strangers who park close to home; or to know when small children, elderly family members, or pets enter “off-limits” areas or leave the house.
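At its core, an invisible fence of this kind is a containment test on tracked object positions in image coordinates. The rectangle, coordinates, and helper names below are invented for illustration.

```python
# The "fence" here is just a rectangle in image coordinates; the numbers and
# names are invented for this sketch.
FENCE = {"x0": 100, "y0": 50, "x1": 540, "y1": 430}  # front-yard region, pixels

def inside_fence(cx, cy, fence=FENCE):
    return fence["x0"] <= cx <= fence["x1"] and fence["y0"] <= cy <= fence["y1"]

def check_tracked_object(prev_center, curr_center):
    """Alert when a tracked object crosses from inside the fence to outside,
    as when the child leaves the yard in FIG. 10B."""
    if inside_fence(*prev_center) and not inside_fence(*curr_center):
        return "ALERT"
    return None

assert check_tracked_object((300, 200), (600, 200)) == "ALERT"  # left the yard
assert check_tracked_object((300, 200), (320, 210)) is None     # still inside
```

Adjusting the camera perspective simply redefines the rectangle; detecting entry into an off-limits area is the same test with the inside/outside roles swapped.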
- the user profile can be specifically tailored to fit a wide range of situations. For example, if a user has a “night owl” or a late-morning snoozer in the family, the user profile may be adapted to these specific household patterns.
- Another application of the inventive technology is transitory monitoring. This can enable fewer people to monitor a boundary or border for activity more effectively.
- such an application can be deployed along a national border, at a toll station to watch for toll violators, or at a turnstile to watch for turnstile violators.
- such an application can be used to detect a person walking the wrong way in an exit area.
- This application can also be used to accurately detect wrong way motion in circumstances of heavy traffic and crowding that would confuse or disable existing solutions.
- Such an application can also allow for the erection of an invisible (video) “fence” to establish transitory protection and monitoring zones around objects/areas of interest, for example for temporary applications.
- This application can be much more expedient to erect than an actual physical impediment such as a fence, and can be transparent to people in the area.
- This application can be employed in situations where the erection of a physical obstacle is undesirable or impractical, such as at a memorial or other attraction, the enjoyment of which would be degraded by a physical impediment.
- Such an application could also be used to create transitory boundaries over water or unsteady or unstable ground, such as swamp land, where the erection of a physical boundary is impractical or impossible.
- Such an application can be useful in areas, for example, such as wildlife preserves or reserves where construction is not allowed or would interfere with the ecology.
- Such an application can also be used, according to a user defined profile, to track the migration of wildlife, without influencing or interfering with such migrations. This can also be useful to determine populations of wildlife.
- This application can also be used underwater in conjunction with underwater cameras to track specific fish, or other sea life such as specific whales or dolphins.
- Such an application can be field-expedient in that it can be erected anywhere a wireless broadband or other data link can be established. Such an application can easily be moved to adapt to dynamic monitoring situations.
- FIG. 11 illustrates an embodiment that transmits video to a central station of a security service provider to aid in possible later identification of an intruder.
- various sensors, corresponding to the sensors 45 of FIG. 1 , are incorporated into the monitored location.
- FIG. 11 also illustrates an alarm unit with status screen (labeled S) at the monitored location.
- video data of the monitored location is received from video cameras 20 (step 100 , FIG. 2 ) along with data from the other sensors associated with the monitored location (step 105 , FIG. 2 ).
- the video data is analyzed with reference to user preferences and the data from the other sensors (step 110 , FIG. 2 ).
- a determination is made as to whether an event of interest to the user has occurred at the monitored location, and an event notice is generated which is then sent to the central station (corresponding to the other recipients 85 of FIG. 1 ).
- Portions of video corresponding to times immediately before and after an alarm trigger can also be sent.
- a user, such as a private homeowner, can have a compressed excerpt with an event notice sent directly to his or her wireless personal user device.
- the mode of the event notice can be controlled by the programming of the user defined profile. For example, a user can specify that he or she receive an email, voice message, SMS, or compressed video clip directed to his or her personal media-capable user device, depending on the time of day and day of the week.
- Related applications may also be used to send selected segments of incoming video data corresponding to a time period after the event of interest at the monitored location has been detected. Video segments can be sent from the central station to recipients in encoded, unencoded, compressed, uncompressed or another format, depending on the capabilities of the recipient and the need for quality in the transmitted video.
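A user profile that selects the notice mode by time of day and day of the week, as described above, reduces to a small lookup. The schema, field names, and destinations below are invented for illustration and are not the patent's actual profile format.

```python
# Hypothetical profile fragment mapping (day type, time period) to a notice
# mode and destination; the schema and values are illustrative only.
PROFILE_MODES = {
    ("weekday", "work_hours"): ("email", "user@work.example"),
    ("weekday", "evening"):    ("sms", "+1-555-0100"),
    ("weekend", "any"):        ("video_clip", "smartphone"),
}

def pick_mode(day_type, period, profile=PROFILE_MODES):
    for (day, when), destination in profile.items():
        if day == day_type and when in (period, "any"):
            return destination
    return ("email", "user@home.example")  # assumed default destination

assert pick_mode("weekday", "evening") == ("sms", "+1-555-0100")
assert pick_mode("weekend", "morning") == ("video_clip", "smartphone")
```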
- a video analytics functional block can be deployed on a mobile processing platform, such as a notebook computer.
- the mobile processing platform can have an integrated wireless data connection, such as a cellular modem, and be connected to one or more video cameras.
- Such an application can be used, for example, by a small reconnaissance team, which may like to maintain an inconspicuous or undetected presence.
- such an application can allow a small reconnaissance team to overwatch a much larger area than would otherwise be possible, even with multiple cameras, in the absence of the video analytics.
- This application can reduce “vigilance fatigue,” and thereby extend the useful operational window of the reconnaissance team.
- Such an application, like the other applications disclosed herein, can also be used with cameras having special capabilities such as night-vision cameras, thermal imaging devices, and infrared cameras.
- FIG. 12 illustrates how a wireless digital video camera can be used to watch an infant in a crib or other sleeping area, who may be left at home in the care of someone other than a parent.
- This embodiment can thus serve as a “nanny cam.”
- this embodiment can provide the parents peace of mind by analyzing video of the sleeping infant and providing an event notice to the parents' smart phone if it is determined that the infant is in distress.
- because of the wireless connections to both the camera and the smart phone, it would be impractical if not impossible to continuously stream video data to the phone. In this embodiment, only relevant portions of video are sent with the event notice.
- a further embodiment, illustrated in FIG. 13 , is a content-based video subscription system that involves providing a user-definable profile describing the types of video content the user would like to have forwarded to him or her from a server running video analytics interoperable with the user's profile.
- the server can send the user information about the video (e.g. time, source, content) based on its content.
- the user can then choose whether to download the video from the server or not.
- the user can specify different user profiles associated with various transmission modes to accommodate the bandwidth and processing limitations of different receiving user devices.
- the user can specify that short, compressed clips of video containing news or coverage of selected sports teams be sent to a mobile device such as a PDA, smart phone, media-capable cellular phone, or other portable user device.
- the user device can be provided with the software necessary to play the compressed excerpts at an acceptable quality.
- the user's profile can specify which video sources the video analytics should monitor.
- the analytics can be directed to crawl the web for content, run periodic searches for content, watch video content source sites such as YouTube, Google Video, or others, and/or monitor specified blog postings, classes of blogs, advertising, classifieds, or auction services.
- a user can build a profile to monitor political blogs and news outlets for video featuring a particular candidate or particular issues.
- a user's profile can additionally specify, for example, that selections of video containing scenes from a particular movie producer, actor, writer, director, YouTube broadcaster, organization, or having a further or another association trigger the generation and transmission of an event notice, such as an email to the user's email account.
- the email can contain instructions on how to download and view the video, such as a hyperlink to the relevant video or selected segments of the video.
- the video or segments selected according to the user's profile can be cached at the central server.
- a user can define and maintain one or more profiles simultaneously.
- a user can maintain profiles in different statuses, such as active or inactive.
- a user can have more than one active profile simultaneously.
- a user profile can also be programmed to monitor a web cam or several web cams.
- a further application of the inventive technology can use analytics to push relevant video content and information related to and/or describing the video content to an online interface, such as a “dashboard,” that can allow a user to review, play, store and manage clips.
- an application can consult a user defined profile to analyze video from various sources including online subscription services, personal archives, clips and/or streams sent from friends or family, clips and/or content located at links sent to a designated email inbox, web cam content, YouTube or other video site content, search results, blogs, and other sources for desired content.
- Video content can then, depending on system limitations and the user profile, either be pushed to an online server equipped with a management architecture, or selected portions can be compressed and saved, or links can be assembled and provided for review.
- a secure token is created.
- This secure token is used to authorize recipients to interact with, and become a user on, the dashboard system based on privilege levels defined in the user profile. For instance, there can be multiple levels of authorization that allow the following access: a) the ability to view the event message alone, but not view video, b) the ability to view the message and play a short video clip within a pre- and post-event interval, the interval being defined in the profile, c) the ability to view event video and optionally live video from the camera responsible for the event, d) the ability to view event video and corresponding recorded video from spatially-nearby cameras for the duration of the event (past video), e) the ability to view live and recorded video from the event triggering camera and nearby cameras, and f) the ability to control an event camera capable of pan-tilt-zoom in order to view live video to examine the scene.
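The six privilege levels (a) through (f) can be modeled as an ordered enumeration, assuming, as a simplification of this sketch, that each level subsumes the ones below it. The names and the subsumption rule are this sketch's assumptions, not the patent's.

```python
from enum import IntEnum

class Privilege(IntEnum):
    """The six dashboard access levels (a)-(f) described above."""
    MESSAGE_ONLY = 1   # (a) view the event message alone, no video
    EVENT_CLIP   = 2   # (b) message plus a short pre/post-event clip
    LIVE_VIEW    = 3   # (c) event video, optionally live camera video
    NEARBY_PAST  = 4   # (d) event video plus nearby-camera recordings
    NEARBY_LIVE  = 5   # (e) live and recorded video, event and nearby cameras
    PTZ_CONTROL  = 6   # (f) control a pan-tilt-zoom event camera

def can(token_level, required):
    # Assumption of this sketch: each level subsumes the ones below it.
    return token_level >= required

token = Privilege.EVENT_CLIP  # level granted when the secure token was created
assert can(token, Privilege.MESSAGE_ONLY)
assert not can(token, Privilege.PTZ_CONTROL)
```

In a deployed system the token would of course be a signed credential carrying this level, checked server-side before any video is released.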
- This application of the disclosed technology involves embedded advertising.
- This application can send advertising content along with video to a subscriber based on scene content.
- the advertising can be associated with the content of the video in order to be more effective through “targeting.”
- Another application targets “live” advertising.
- This application uses video analytics to analyze video taken from the relevant target location to characterize the potential shoppers and vary the advertising message accordingly.
- This application employs one or more cameras, and optionally other sensors such as audio and/or ground-mounted pressure sensors.
- This application can also employ traditional motion sensors to gather additional traffic data.
- this application uses a computer processor, such as a personal computer, configured to run video analytics software interoperable with a user profile.
- This application can also use a control module to direct messaging at one or more active advertising devices such as marquees, billboards, and flat-screen displays.
- storefront advertising is tailored according to the people outside a storefront or near a billboard. For example, as illustrated in FIG. 14 , advertising can be tailored based on the number of people present in the advertising zone, how long they have been there, and/or whether they are children or adults, or predominantly men or women.
- the same approach can be used after business hours to observe the storefront or other advertising zone. For example, the displayed “advertising message” can be adjusted to show a “lurker” that he or she is being observed, for example via some type of notice on a screen that is used to display ads.
- Messaging criteria for this application can be controlled through specifications in a user profile.
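As a sketch of how such profile-controlled messaging criteria might look, the rules below map hypothetical audience attributes (count, dwell time, composition) and an after-hours flag to a displayed message. Every attribute name, threshold, and message is invented for illustration.

```python
# Illustrative message-selection rules driven by scene analysis; the audience
# attributes, thresholds, and messages are all invented for this sketch.
def choose_message(audience, after_hours=False):
    if after_hours and audience["count"] > 0:
        return "You are being observed"       # after-hours "lurker" notice
    if audience["count"] == 0:
        return "General promotion"
    if audience.get("children_ratio", 0) > 0.5:
        return "Family promotion"             # audience is mostly children
    if audience.get("dwell_seconds", 0) > 30:
        return "Detailed product message"     # audience has lingered
    return "Short attention-grabbing message"

assert choose_message({"count": 3, "children_ratio": 0.7}) == "Family promotion"
assert choose_message({"count": 1}, after_hours=True) == "You are being observed"
```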
- This application can target messages at a mall or other commercial area.
- a refinement of this application can analyze video to determine the general reaction of targeted audiences to a particular message and make adjustments accordingly. For example, if the analytics notices a positive reaction to a particular message, through physical manifestations such as smiles, laughter, or thoughtful consideration, the message can be sustained, continued, or otherwise pursued. If the analytics notices only quick glances, lack of attentiveness, or disinterest, the message can be changed.
- This application of video analysis is a channel-switcher that switches content channels (broadcast television, Internet-based content channels, etc.) based on content analysis and a user profile.
- Analytics can continuously scan channels for desired content and either switch automatically when that content is found or display an event notice providing the viewer the option to switch to the channel with the found content.
- This application can also be used to record desired content automatically, for example in a DVR application.
- This application can also be used to switch away from a channel when certain content, such as objectionable content, is found. For example, this application can be used to determine when violence is present and then tune away from the offending station, and additionally to lock the station out for a pre-selected period of time, such as 15 or 30 minutes. This application can therefore be desirable for use in ensuring appropriate content for younger viewers.
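A minimal sketch of such a channel-switcher, assuming the analytics expose a per-channel content label. The labels, the fixed lockout period, and the data shapes are invented for illustration.

```python
# Channel-switcher sketch; assumes the analytics produce a content label per
# scanned channel. Labels and the lockout policy are illustrative only.
LOCKOUT_MINUTES = 15

def scan(current, channels, desired, objectionable, locked):
    """One scan pass over `channels` (a channel -> content-label mapping)."""
    if channels.get(current) in objectionable:
        locked[current] = LOCKOUT_MINUTES   # tune away and lock the station out
        current = None
    for channel, content in channels.items():
        if content in desired and channel not in locked:
            return channel, locked          # switch to the found content
    return current, locked                  # nothing better found

channel, locked = scan(5, {5: "violence", 7: "sports"}, {"sports"}, {"violence"}, {})
assert channel == 7 and 5 in locked
```

The same loop could record instead of switch for the DVR variant, or present the found channel as an event notice rather than switching automatically.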
- Some embodiments include a processor and a related processor-readable medium having instructions or computer code thereon for performing various processor-implemented operations.
- processors can be implemented as hardware modules such as embedded microprocessors, microprocessors as part of a computer system, Application-Specific Integrated Circuits (“ASICs”), and Programmable Logic Devices (“PLDs”).
- Such processors can also be implemented as one or more software modules in programming languages such as Java, C++, C, assembly, a hardware description language, or any other suitable programming language.
- a processor includes media and computer code (also can be referred to as code) specially designed and constructed for the specific purpose or purposes.
- processor-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (“CD/DVDs”), Compact Disc-Read Only Memories (“CD-ROMs”), and holographic devices; magneto-optical storage media such as optical disks, and read-only memory (“ROM”) and random-access memory (“RAM”) devices.
- Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, and files containing higher-level instructions that are executed by a computer using an interpreter.
- an embodiment of the invention can be implemented using Java, C++, or other object oriented programming language and development tools.
- Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
Description
- This application is a continuation of U.S. application Ser. No. 12/277,996, filed Nov. 25, 2008, entitled “Systems and Methods for Analysis of Video Content, Event Notification, and Video Content Provision,” which is a nonprovisional of U.S. Provisional Application Ser. No. 60/990,983, filed Nov. 29, 2007, entitled “Systems and Methods for Analysis of Video Content, Event Notification, and Video Content Provision,” each of which is incorporated herein by reference in its entirety.
- 1. Field of Technology
- The present application discloses the use of video analysis technology (such as that described in part in U.S. Pat. No. 6,940,998 (the “'998 patent”), the disclosure of which is incorporated herein by reference) to analyze video data streams from cameras in real time, determine the occurrence of a significant event, and send a notice that may include a selected segment of the video data associated with the significant event. The application also discloses the use of video analysis technology to determine whether video data includes content corresponding to a user preference for content and making at least a portion of the video data including the content of interest accessible to the user.
- 2. Background of the Invention
- While the means to capture, transport, store, retrieve and display video in large-scale networks have advanced significantly in recent years, technologies available and practical for characterizing their content as one does for other data types have not kept pace. The video equivalent of the search function in a word processor has not been offered. Video analytics technology, which is the adaptation of advanced computer vision techniques to characterize video content, has been limited to highly sophisticated, expensive implementations for industry and government. Furthermore, existing video analytics techniques require large amounts of processing capacity in order to analyze video at or near real time. Present technologies available to consumers and small business users do not provide sophisticated, adaptable and practical solutions. Existing systems cannot be made to operate effectively on off-the-shelf personal computers due to the limitations of processing capacity associated with such platforms. Additionally, existing closed circuit television systems are limited to dedicated network configurations. This is due to the high bandwidth requirements associated with streaming live video. The requirement for a dedicated network inhibits distribution of collected video beyond a location close to the video imager or camera. Existing technologies for video transport require too much bandwidth to effectively be employed across readily-available networks having low bandwidth capacity, such as most wireless networks.
- Additionally, current systems for viewing live or recorded video require that the user know the location of, or the path to, the desired video stream on the network or within the closed circuit system and actively “pull” the video in order to view it. In the case of large, loosely organized libraries of live or recorded video, this task may be extraordinarily onerous, usually requiring viewing many scenes containing nothing of interest to the user. One recent advance has been to use the output of electronic sensors to trigger the transmission of video from a nearby camera. Some video systems even incorporate “video motion detection,” a technique that senses gross image changes, to initiate this action. These systems offer no way to determine the relevance of content or to distinguish between non-activity and events of interest. The distinction between what is of interest and what is not must be performed by a human. This activity can be characterized by long periods of inactivity punctuated by rare but sudden episodes of highly significant activity requiring the application of focus, careful consideration and judgment. In the case of real-time observation systems, significant events will in all likelihood go unnoticed by the user. These situations are thought to contribute to the slow adoption of “nanny-cam” systems. They also limit the ability of online content providers to create convenient video distribution services for new classes of mobile phones and similar communication and display devices.
- Because existing systems need to be installed in a dedicated network, they do not have the flexibility to accommodate the dynamics of a rapidly-developing or transient situation. In addition, existing systems typically send a video representation of an observed location to one end-point. In some cases, a user will want to have the ability to change the recipient of video data from an observed location.
- Traditional closed-circuit TV systems require that a person sit at a display screen connected to a network in order to observe a location. If a user wants to be able to see what happened in his or her absence, he or she must watch the video of the period of his or her absence. This can be inconvenient, time consuming, and boring. To mitigate these effects, a user may choose to view the recordings at an increased play speed. This can increase the chances that something of significance will be missed. This situation limits the ability of the user, such as a homeowner or small business owner, to have peace of mind when the user must be away from the video display.
- Existing video data storage and retrieval systems only characterize stored material by information or metadata provided along with the video data itself. This metadata is typically entered manually, and only provides a single high level tag for an entire clip or recording, and does not actually describe the content of each scene or frame of video. Existing systems do not “look inside” a video to observe the characteristics of the video content in order to classify the videos. Classification therefore requires that a human must discover what content a video contains, usually through watching the video or excerpts therefrom, and provide tags or other descriptive information to associate with the video data. This process can be time- and energy-intensive as well as extremely inefficient when dealing with large amounts of video data, such as can be encountered in cases of multiple, real-time streams of video data.
- What is needed, then, is a video content description technology that enables distributed observation of user-defined video content across existing networks, such as the Internet and wireless communication infrastructure, and observation across multiple geographically-distributed sites. What is also needed is a system that automatically forwards video to interested personnel in response to the existence of noteworthy events and that allows flexibility to specify and change the recipient of video data. What is further needed is a system that can send notifications and information to users wherever they are.
- A novel method and system for remote event notification over a data network are disclosed. The method involves receiving and analyzing video data, optionally consulting a profile to select a segment of interest associated with a significant event from the analyzed video data, optionally sending the segment of interest to a storage server, optionally further consulting the profile to encode the segment of interest at the storage server and to send data associated with the segment of interest to one or more end devices via the wireless network, optionally triggering the end device to download the encoded segment of interest from the storage server, and optionally displaying the segment of interest at the end device.
- Also disclosed is a novel method for delivering personalized video content to users on a network. The method involves analyzing image data within a video stream in real time or near real time, optionally consulting a profile to select a segment or segments of interest in the video stream, and optionally sending the segment of interest or the entire video stream to users of the network whose profiles match the segment of interest.
- FIG. 1 is a schematic illustration of an environment in which video analysis technology can be used for event notification.
- FIG. 2 is a flow chart of a disclosed method.
- FIG. 3 is a schematic representation of a disclosed system.
- FIG. 4 is a function chart of an element of a user profile.
- FIG. 5 is a block representation of an event notice sent to a user by the system of FIG. 3 .
- FIGS. 6 and 7 are flowcharts of disclosed methods.
- FIGS. 8A , 8B, 9A, 9B, 10A, and 10B are schematic representations of video analysis and handling of detected events.
- FIG. 11 is a schematic illustration of a system in which selected video is transmitted to a central security station.
- FIG. 12 is a schematic representation of an embodiment of a system for observing an infant's sleeping area.
- FIG. 13 is a schematic illustration of a system of a content-based video subscription service using video analysis.
- FIG. 14 is a schematic illustration of a method for conducting targeted advertising using video analysis.
- Several applications of video analysis technology are disclosed below. One application can be used to analyze video data streams from cameras in real time, determine the occurrence of a significant event, and send a notice that may include a selected segment of the video data associated with the significant event. The application also discloses the use of video analysis technology to determine whether video data includes content corresponding to a user preference for content and making at least a portion of the video data, including the content of interest, accessible to a user or other designated recipient.
- An environment in which video analysis technology can be used for event notification is illustrated schematically in
FIG. 1 . Auser 90 may wish to receive notice of an event of interest to the user if and when it occurs at an observedlocation 10. Avideo camera 20 is positioned to view the observedlocation 10 and can provide video data to a video analysisfunctional block 30.Sensors 45 may also be associated with observedlocation 10 and output fromsuch sensors 45 can be provided to video analysisfunctional block 30.User preferences 40 can include information about the kinds of events occurring atobserved location 10 that are of interest to theuser 90. Video analysisfunctional block 30 refers touser preferences 40 in conjunction with analyzing video data fromvideo camera 20 to determine whether an event of interest touser 90 has taken place atobserved location 10. - If it is determined in video analysis
functional block 30 that an event of interest has occurred, that determination can be provided to a notice generatorfunctional block 50 which can generate an event notice to be provided to theuser 90. The event notice may include a segment of video data or some data representative of the segment of video data corresponding to the event of interest, and be provided to acommunications interface 60. Communications interface 60 can be communicatively coupled to a publicdata communications network 70, such as the Internet, and thus can send the event notice via thenetwork 70 to auser device 80.User device 80 can display an event notice touser 90. Optionally, the event notice can also be sent toother recipients 85. - A process for observing the observed
location 10 and sending event notices to user 90 is illustrated in FIG. 2. In step 100, video data of observed location 10 is received from video camera 20. Optionally, in step 105, data from other sensors associated with the observed location, such as a motion sensor, may be received. At step 110 the video data is analyzed with reference to user preferences 40 and, optionally, the data received in step 105. In step 120 a determination is made as to whether an event of interest to user 90 has occurred at observed location 10. If not, then the observation simply continues (receiving more video data). If so, then at step 130 a portion of video data associated with the event of interest is selected, and then at step 140 an event notice is generated which, as noted above, can include the selected portion of the video data. The event notice is then sent to the user device 80 at step 150. Optionally, at step 160, control instructions may be sent to controllable devices at the observed location, such as for switching on a light or locking a door. - In the environment illustrated in
FIG. 1, the observed location can be any setting that a user wishes to watch for events of interest. For example, observed location 10 can be a user's home, including any room, doorway, window or surrounding area such as an entrance area, sidewalk, driveway, adjacent roadway, postal box, yard, porch, patio, pool, garden, etc. Observed location 10 can also be a workplace, office, parking lot, reception area, loading dock, childcare center, school, storage facility, etc. For example, the system can be programmed to detect when a person enters one of the aforementioned areas and to push associated video of the event of interest to a user device. In one such application, the camera can be facing a door, for example a front or back door of a residence, in order to observe household traffic. Such an application can notify a user when a person opens an observed door, for example when a child comes home from school. In another application, a camera can face outward from a front or rear door for exterior detection. In this application, a user can be notified, for example, as soon as someone approaches within a specified distance of the observed door, and video of the event can be sent to the user. - The
video camera 20 can be any device capable of viewing an observed location 10 and providing video data representing a sequence of images of the observed location. Video camera 20 can be a device having a lens that focuses an image on an electronic tube (analog) or on a chip that converts light into electronic impulses. Preferably, video camera 20 can output video data in digital form, but conversion of data from an analog camera into digital form can be performed in any suitable way, as will be apparent to the artisan. The images may be in any part of the light spectrum, including visible, infrared or ultraviolet. Video camera 20 can be a monochrome (black and white) or color camera. Video camera 20 can be set in a fixed position or placed on a pan-and-tilt device that allows the camera to be moved up, down, left and right. The lens may be a fixed lens or a zoom lens. - Video analysis
functional block 30 can be implemented in a variety of ways, but may advantageously be implemented using the techniques disclosed in the '998 patent performed on a suitable hardware device, for example, video analytics software operating on a conventional personal computer. Video analysis functional block 30 can be integrated into another piece of hardware, such as a network device, the video camera 20, the notice generator functional block 50, or the communications interface 60. -
User preferences 40 may reflect any user-specified circumstances or events noteworthy to user 90, such as the presence or movement of a person, vehicle or other object at the observed location 10 or a particular area of the observed location 10. Similarly, notice generator functional block 50 can be implemented in software running on a conventional personal computer. Notice generator functional block 50 can be integrated into another piece of hardware, such as a network device, the video camera 20, the video analysis functional block 30, or the communications interface 60. - Communications interface 60 may be any suitable device capable of communicating data including an event notice to
user device 80 via a network 70. For example, communications interface 60 may be a cable modem, DSL modem or other broadband data device, and any associated device such as a router. -
User device 80 can include any device capable of receiving, from network 70, and displaying or rendering to a user an event notice, including a personal computer with display, cellular telephone, smart phone, PDA, etc. A user 90 can be an individual or a group of individuals. - Some of the possible implementations of the elements described above are illustrated in
FIG. 3. The video camera 20 can be implemented as a portable, digital video camera 22 or IP camera 24. A microcomputer 41 or personal computer can store the user preferences. The video analysis can be performed on its processor or on a notice generator. A router 62 coupled to the microcomputer 41 serves as communications interface 60 and communicates via the network 70, such as the Internet or World Wide Web, with any of a variety of user devices 80, including via a wireless communications network (such as a cellular network with a communications tower) to a laptop computer with a cellular modem 82, a smart phone 84, PDA 86 or cellular phone 88. The user devices can be any other device able to be coupled directly or indirectly to the communications network, such as an IP video monitor 83, computer terminal 87 or a laptop computer coupled to a wide area network. - As described above,
user preferences 40 can include information about the kinds of events that are of interest to the user. For example, the user-defined profile can be used to cause an event notice to be sent when an object is removed from or placed in a defined observed location or area. Such an embodiment can be used, for example, to watch fences, gates, stations, or other public or private areas for the presence of any designated behavior. In one application, a camera can watch over a home office, artwork, an office safe, or other valuables. User preferences 40 can also include information about the form of event notice the user wishes to receive (resolution, encoding, compression, duration), video data to be included with a notice, the destination(s) of the notice, and actions that may be taken at the observed location (lock a door, switch on a light). Additionally, user preferences 40 can govern the interaction of cameras, users, user devices, recording and analytics. These interactions can be driven by the results of analysis, such as occurs at step 110 in FIG. 2. -
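As a sketch, the preference-driven loop of FIG. 2 (receive video, analyze against user preferences, select a segment, generate a notice) might look like the following. All names, and the use of simple frame labels in place of real image analysis, are assumptions made for illustration, not the patent's implementation.

```python
def monitor(frames, preferences):
    """Toy version of the FIG. 2 loop; frames are stand-in labels."""
    notices = []
    for i, frame in enumerate(frames):
        if frame not in preferences["events_of_interest"]:   # steps 110/120
            continue                                         # keep observing
        window = preferences.get("window", 1)                # step 130: select
        segment = frames[max(0, i - window): i + window + 1]
        notices.append({"message": f"{frame} detected",      # step 140: notice
                        "video": segment})
    return notices                                           # step 150: send

prefs = {"events_of_interest": {"person"}, "window": 1}
notices = monitor(["empty", "person", "empty"], prefs)
```

A real implementation would replace the label comparison with the video analytics described above; the loop structure, however, mirrors the steps of FIG. 2 directly.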
User preferences 40 can be maintained in a user profile. User preferences 40 can be associated with individual cameras, recorders, analytics, users, recipients, user devices and types of responses. Groups of such individuals can also be defined, to which particular user preferences can be applied. Each member of a group can inherit the properties that pertain to that group. Membership in multiple groups is possible. For example, a camera group may include all cameras that are from a single manufacturer and are set up for video at CIF resolution at 15 frames per second, using MPEG-4 Simple Profile compression over the RTP protocol. An analytics group may include a set of "outdoor, parking lot" events, such as a loitering person and an illegally parked vehicle, with clutter rejection set for sunlit illumination under snowy conditions. - An exemplary embodiment of a user profile is illustrated schematically in
FIG. 4. User profile 400 can include video analytics parameters 410, notice video data parameters 420, response parameters 430, device control parameters 440, notice destination parameters 450, camera property parameters 460, and user property parameters 470. -
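One hypothetical way to organize the parameter groups of user profile 400 is shown below. The field names mirror the reference numerals of FIG. 4 but are assumptions for illustration, not the patent's own data model.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    video_analytics: dict = field(default_factory=dict)      # parameters 410
    notice_video_data: dict = field(default_factory=dict)    # parameters 420
    response: dict = field(default_factory=dict)             # parameters 430
    device_control: dict = field(default_factory=dict)       # parameters 440
    notice_destination: dict = field(default_factory=dict)   # parameters 450
    camera_properties: dict = field(default_factory=dict)    # parameters 460
    user_properties: dict = field(default_factory=dict)      # parameters 470

# Example profile: only the groups a user cares about need be populated.
profile = UserProfile(
    video_analytics={"events": ["person", "vehicle"], "sensitivity": 0.7},
    notice_destination={"critical": "voice_call", "normal": "email"},
)
```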
Video analytics parameters 410 govern video analysis for event detection and recognition. These parameters enable or disable the detection of specific events. Multiple video event detection capabilities can be provided, such as the detection of a single person, a stationary object or a moving vehicle. Additionally, they specify control parameters for each event, such as the size of a stationary object or the direction of movement of a vehicle. Video analytics parameters 410 can include parameters provided to the video analysis technology to identify what types of objects are of interest (object type parameters 412, for example a person, a vehicle or a type of object such as a package), what characteristic of each object is relevant (object parameters 414, such as the size or shape of an object, whether a person is standing, sitting, lying, walking or running, whether a vehicle is stationary or moving, etc.), to specify the handling of the image data from the camera (video data parameters 416, such as sensitivity, and clutter rejection to accommodate environmental effects such as rain, snow or waves, or illumination effects such as shadow, glare or reflections), and to identify aspects of the observed location (location parameters 418, such as whether there are different zones in the field of view of the camera to be handled differently, e.g., a yard, sidewalk, driveway or street). For example, location parameters 418 for an application using an outdoor camera viewing both a door and driveway could be set to send an event notice upon the detection of a car in the driveway zone, and the presence of a package near the door. - The notice
video data parameters 420 can include parameters provided to the video analysis technology specifying how video associated with an event of interest is recorded. Recording parameters 422 can specify the bit rate of the recorded video data, what encoding standard is to be used and the resolution of the video. Scheduling parameters 424 specify a mapping between the date, time and properties. Scheduling parameters 424 can specify how recording parameters are to change based on the time or date, such as how to vary resolution and compression based on time of day and day of the week, and what events are of interest during particular times. Camera properties such as resolution or compression quality may be modified based on the schedule. Similarly, the set of events detected and their properties can be changed on a schedule. - Detected
event parameters 426 specify how video data is to be treated based on the detected event, such as the resolution, compression, frame rate, quality, bit rate and exposure time to apply in the case of different detected events such as fast-moving objects, very slow-moving objects, very small objects, illuminated objects, etc. Detected event parameters can be modified for the entire frame or for parts of the frame based on an event that is detected, as disclosed in US Patent Application Publication 2006/0165386. For example, if a video analytics block determines that a frame sequence contains a person, then the user profile 400 associated with the video analytics block might be programmed to specify that the subject video sequence be compressed according to a compression scheme that preserves quality, even at the expense of storage space. In contrast, if the same system determines that a video sequence contains a neighborhood cat, which is not of interest to the user, the profile might be programmed to specify that the system record the video using a compression scheme that conserves a relatively large amount of storage space as compared to the raw video. - The disclosed technology allows the properties of a camera to also be changed based both on a schedule, according to
scheduling parameters 424, and on events that are detected by a video analytics block, according to detected event parameters 426. For instance, in order to enable optimal conditions for capture of a license plate of a vehicle based on the detection of a vehicle and an estimate of its speed, the exposure time for the camera can be adjusted during the nighttime hours to capture a non-blurred version of the license plate. - The
response parameters 430 can include parameters provided to the video analysis technology specifying actions to take when sending an event notice in response to the detection of an event of interest. Response parameters can include rules governing how notifications associated with detected events are disseminated to users and devices. For example, dissemination rules 432 provide a mapping between an event or multiple events and actions resulting from those events. Actions can be any combination of electronic communication in the form of text, multimedia attachment, streaming video, or in the form of device control. Dissemination rules 432 can specify to whom and in what form a notice is to be sent. Response parameters 430 can be set, for example, to allow a friend or neighbor to observe a person's house when the person is out of town by setting the parameters to send notifications to the friend or neighbor as another recipient. -
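One possible shape for dissemination rules 432 is a mapping from a detected event to the recipients and notice forms that result. The event names, recipients and forms below are illustrative assumptions, not values taken from the disclosure.

```python
# Hypothetical rule table: event -> list of (recipient, notice form).
DISSEMINATION_RULES = {
    "intruder": [("user", "sms"),
                 ("neighbor", "email"),
                 ("central_station", "streaming_video")],
    "package_delivered": [("user", "email")],
}

def actions_for(event):
    """Return the (recipient, form) pairs a detected event triggers."""
    return DISSEMINATION_RULES.get(event, [])
```

Adding the neighbor as a recipient, as in the out-of-town example above, then amounts to appending one more entry to the relevant rule.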
Response parameters 430 can also include timeout parameters 434 specifying how long the system is to persist in notifying a user, request authorization parameters specifying when and from whom the system is to request authorization to send an event notice to a user or other recipient, etc. Timeout parameters 434 can specify mechanisms for clearing or resetting an event condition. A response may be as simple as a timeout after which all conditions are cleared or reset. Other examples of timeout parameters 434 include automated clearing of event conditions when any video is requested or viewed, regardless of the user, or any device-initiated actions. Complex timeout parameters can require the user to interact with a local or remote device, or to send electronic communication back to the system, which would then authorize the action and clear the condition. -
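A minimal sketch of timeout parameters 434 follows: an event condition that clears either after a fixed timeout or when the associated video is viewed. The elapsed time is passed in explicitly to keep the example deterministic; the class and its values are assumptions for illustration.

```python
class EventCondition:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.cleared = False

    def tick(self, elapsed_s):
        # Simple timeout after which the condition is cleared or reset.
        if elapsed_s >= self.timeout_s:
            self.cleared = True

    def on_video_viewed(self):
        # Viewing the associated video also clears the condition.
        self.cleared = True

cond = EventCondition(timeout_s=300)
cond.tick(120)                      # timeout not yet reached
active_after_tick = not cond.cleared
cond.on_video_viewed()              # the clip was viewed; condition clears
```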
Device control parameters 440 can include parameters provided to the video analysis technology specifying other actions to take, in addition to or in lieu of sending an alert, in response to the detection of an event of interest. The device control parameters 440 can specify, for example, whether a door gets locked, a light gets turned on or off, sirens or alarms sound, whether an alarm is to be reset, a radio signal or beacon gets transmitted, etc. An example interaction is to switch on an exterior light only when a person is detected, but not if a vehicle is detected, or to sound the doorbell when a person is detected within a certain distance from a door. Additionally, recording properties may be modified or further analysis may be triggered. - The
notice destination parameters 450 can include device parameters 452 provided to the video analysis technology for interacting with devices used to record, stage and view video and notifications. The notice destination parameters 450 can specify treatment of video for particular device requirements such as storage capacity, bandwidth, processing capacity, decoding capability, image display resolution, text display capabilities and protocols supported, such as email (POP, SMTP, IMAP), SMS (text messaging), RSS, web browser, media player, etc. These properties can be used to facilitate transmission to particular user devices, such as higher compression for transmission to low-bandwidth devices, such as wireless devices. The video analysis technology can refer to notice destination parameters 450 to implement scalable compression based on video analysis with an MPEG-4-like streaming framework for mobile content delivery. - The
notice destination parameters 450 can also include parameters provided to the video analysis technology specifying various notice priorities 454, for example, based upon different notice conditions. The notice priorities 454 can have different associated levels and modes of notice. For example, for a critical notice a user can specify that he or she be notified by a voice call to his or her cellular telephone. The call can contain a recorded message notifying the user of the associated notice condition. Another priority can be associated with a different level of notice, for example, an email or a text message. Additionally, the user profile can specify that a selected segment of video associated with the notice condition be sent automatically to the user's mobile device. For example, a user can specify that in the case of an unexpected vehicle in the driveway, the system send a representation of the selected segment to the user's mobile device, for example, a cell phone, personal digital assistant, smart phone or a laptop computer. - The
camera property parameters 460 can include parameters provided to a camera to control camera capabilities 462 such as frame rate, quality, bit rate, colorspace, quantization, compression format, transport, and encryption. They can also specify protocols and mechanisms 464 for the control of the cameras, for example for pan-tilt-zoom control, and further including contrast, gain, exposure, white balance, and gamma settings. -
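The interplay of camera capabilities 462 with scheduling parameters 424 and detected event parameters 426 described above can be sketched as layered overrides: effective settings start from the camera's defaults, then the schedule and the detected event each override what they name. The numeric values (e.g., a short exposure to freeze a license plate at night) are assumptions for illustration.

```python
# Hypothetical camera defaults and override tables.
DEFAULTS = {"fps": 15, "exposure_ms": 33, "compression": "mjpeg"}

SCHEDULE = {"night": {"exposure_ms": 66}}                      # cf. 424
EVENT_OVERRIDES = {"vehicle": {"exposure_ms": 4, "fps": 30}}   # cf. 426

def effective_settings(period, detected=None):
    """Merge defaults, schedule, and event-driven overrides, in that order."""
    settings = dict(DEFAULTS)
    settings.update(SCHEDULE.get(period, {}))
    if detected:
        settings.update(EVENT_OVERRIDES.get(detected, {}))
    return settings
```

Under this scheme, a vehicle detected at night overrides the long nighttime exposure with a short one suited to capturing a non-blurred plate.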
User property parameters 470 specify valid users for the system, their credentials, contact information and authorization mechanisms. User property parameters 470 also specify rights for viewing, camera control, administration (the ability to modify profile properties), device control and dissemination control. - The parameters in
user profile 400 may be specified by user 90. Default profiles may be defined from which a user may choose, and which a user may modify. A user may have a user profile 400 for each camera associated with an observed location and/or may have a different profile for different times of the day, days of the week, seasons of the year, etc. A user can save multiple profiles to accommodate different circumstances, for example, a vacation profile, a natural disaster profile, a normal profile, a guest profile, or others. - A
user profile 400 may be stored locally with the device performing the video analysis, for example the video analysis functional block 30 of FIG. 1. For example, it may be stored in persistent storage on the same microcomputer on which the video analysis software operates. Alternatively, user profile 400 may be stored remotely, provided that it is readily available as input for the video analysis. A suitable user interface may be provided to allow the user to define and modify a user profile 400. - As discussed above, if analysis of the video data with reference to the parameters in the user profile determines that an event of interest has occurred at the observed location, an event notice can be generated and sent to the user.
FIG. 5 schematically illustrates an exemplary event notice 600. The event notice 600 can include a message component 610 and a video data component 620. Message component 610 can include text that conveys to a user relevant information about the event, such as "a vehicle has entered the driveway of your residence" or "a person has entered your backyard." This textual information can be in any format suitable for the user device to which notice 600 is to be sent, such as an email, MMS, SMS, or page. Information can also be provided in another form, such as an audio message, which may be generated by text-to-speech conversion, to be conveyed by a call to a telephone (cellular, land line, voice over IP, etc.). -
Video data component 620 can include selected segments of video data associated with the event of interest. For example, in conjunction with the message "a person entered your backyard" contained in message component 610, the video data component 620 could include video data for the time period starting with the person entering the backyard (or the field of view of the camera, if it does not encompass the boundary of the backyard), and ending with the person leaving the backyard, or ending after some more limited period of time. The video data can be a modified version of the raw video data from a video camera, for example, at a lower frame rate, lower resolution, compressed and/or encoded. Again, the format of the video component is selected to be viewable on the user device to which the event notice 600 will be sent. The video data component 620 may alternatively be some other representation of the video data of potential interest to the user. For example, the data may be in the form of one or more still images selected from among the video data frames to be representative of the video data. This may be appropriate, for example, where the user device 80 can render a photo but not a video clip. Alternatively, the video data component 620 may be in the form of a link or other pointer to a network location from which the user may pull the video data of interest. - As discussed above, the format of the
event notice 600, including the format of message component 610 and the video data component 620, can be determined by reference to the parameters in the user profile 400 and may depend on, for example, the capabilities of the user device(s) 80 to which event notice 600 is to be sent, the nature of the event, the portion of the observed location to which the event relates, etc. Similarly, the destinations of the event notice 600 can be determined by reference to the parameters in the user profile 400. For example, user profile 400 may specify that an event notice 600 relating to a potential intrusion in the backyard of observed location 10 during a weekday should be sent to the user's PDA and to the user's computer at the user's workplace. - The analysis of video data can include many different analytical steps.
FIG. 6 is a flowchart showing some analytical steps that can be included in step 110 of FIG. 2. In step 111, the video can be subjected initially to a rough analysis to detect the presence of motion by non-trivially-sized objects. If there is such motion, then in step 112 it is determined whether the motion takes place during a particular time window, such as in the evening or in the morning. If so, then in step 113 the video data segment associated with the movement can be subjected to further analysis, for example, facial recognition analysis. If no motion is detected, or if motion is detected but not in the relevant time window, then no further analysis is conducted and new video data is analyzed. These optional analyses can be specified in the user profile 400, and can reduce the possibility of false or undesired event notices. - The particular video analysis to be performed can also be based upon other data input, such as input from other sensors as in
step 105 of FIG. 2. FIG. 7 is a flowchart showing some analytical steps that can be included in step 110 of FIG. 2 based on video data received in step 100 and data received from other sensors in step 105. If data is received from step 105, such as output from a motion sensor indicating that motion of some object was detected, then in step 111 the video data can be initially subjected to a rough analysis to detect motion. If motion is detected, then in step 114 further analysis of the video data can be conducted to classify the moving object, e.g., as a person, animal, pet, or vehicle. In step 115 a determination is made whether the object is a person. If so, then in step 116 further analysis is performed to determine the identity of the person, e.g., by comparison to image data for known persons. If not, observation of the video data continues. In step 117 a determination is made whether the person is authorized to be present at the observed location (or the portion viewed by the video camera). If the person is not authorized (e.g., because the person is unknown, or because the person is known but does not have explicit authorization), an event of interest is indicated; otherwise, observation of the video data continues. - The following examples illustrate various ways in which the capabilities and functionality described above can be put to use.
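The decision chain of FIG. 7 (motion gating, object classification, identification, and authorization check) can be sketched as below, assuming, as the surrounding disclosure suggests, that an unauthorized person is what constitutes the event of interest. The known-person table and the pre-classified inputs are stand-ins for the real analytics, not the patent's algorithms.

```python
# Hypothetical table: identity -> whether that person is authorized.
# A known person may still lack explicit authorization (e.g., a mail carrier).
KNOWN_PERSONS = {"resident": True, "mail_carrier": False}

def assess(motion_detected, object_class, identity=None):
    if not motion_detected:                          # step 111: rough motion check
        return "keep_observing"
    if object_class != "person":                     # steps 114/115: classify
        return "keep_observing"
    authorized = KNOWN_PERSONS.get(identity, False)  # steps 116/117
    return "keep_observing" if authorized else "event_of_interest"

verdicts = [assess(False, "person", "resident"),
            assess(True, "vehicle"),
            assess(True, "person", "resident"),
            assess(True, "person", "stranger")]
```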
- Monitoring Vehicles By Location
- Applications of the inventive technology can be used to detect whether cars/trucks approach or park in front of a residence, or in a small business setting, for example to monitor when a vehicle approaches a loading dock.
FIGS. 8A and 8B illustrate how a user can be notified in a residential setting. In this embodiment, the camera is trained on a monitored location, in this case the road fronting the residential property. In this setting, a user can receive an event notice, for example, when a vehicle enters the camera's field of view. As illustrated in FIG. 8A, when the monitored location is clear, the system sends no event notification. When the car enters the field of view of the camera, as shown in FIG. 8B, the video analysis functional block, which in this embodiment is integrated into the video camera, refers to user preferences in conjunction with analyzing the video data from the video camera to determine whether an event of interest to the user has taken place at the monitored location. In this embodiment, the car approaching in front of the residence constitutes an event of interest according to the user profile; therefore, that determination is provided to a notice generator functional block, which generates an event notice that is provided to the user. In this embodiment, the event notice includes a segment of video data corresponding to the event of interest, i.e., video of the car. This video of the car is displayed at the user device to the user. The user has specified in the user profile to send a segment of the video data, rather than a mere image, in the case of the detection of a passing car due to the nature of the event of interest: the user is interested in information about the behavior of the car as well as its presence. - Monitoring Deliveries
- In another application, a user profile can direct the system to notify a user when a new, stationary object is introduced into the field of view.
FIGS. 9A and 9B illustrate how a user can be notified when a delivery has been made to a monitored location. As illustrated, the video camera is trained on a delivery drop-off point, in this case the front porch (the monitored location), to monitor for delivery of a package that is expected to be left at the drop-off point, and that will need to be recovered. As illustrated in FIG. 9A, when no package is present at the monitored location (no event of interest), the system sends no event notice to the user. When the package is placed in the field of view of the camera, as illustrated in FIG. 9B, the video analysis functional block refers to user preferences in conjunction with analyzing the video data from the video camera to determine whether an event of interest to the user has taken place at the monitored location. In this embodiment, the package sitting on the front porch constitutes an event of interest according to the user profile; therefore, that determination is provided to a notice generator functional block, which generates an event notice that is provided to the user. In this embodiment, the event notice includes a frame from a segment of video data corresponding to the event of interest. This frame of the package is displayed at the user device to the user. In this embodiment the user has specified in the user profile that only a frame of the video showing the package be displayed at the user device, due to the lack of additional informational content associated with video of the package: merely the presence of the package is what interests the user. In another embodiment, such an application can be implemented to help conduct a shipping business efficiently, for example, so that personnel inside a warehouse can become aware of an approaching delivery or pick-up, and make preparations in order to expedite the process. - Invisible Fence
-
FIGS. 10A and 10B illustrate how an "invisible fence" can be drawn around a monitored location, without the use of traditional motion sensors and/or door or window switches. This enables the perimeter of the fence to be controlled merely by adjusting the camera perspective, and therefore it can be placed anywhere. In this embodiment, the invisible fence is used to monitor the front yard of a residence and to send an event notice to the user when a child has left the yard. As illustrated in FIG. 10A, when the child is present in the yard, the video data reflects the child's presence and the video analysis functional block does not determine the existence of an event of interest based on the user profile. In FIG. 10B, the child has left the yard, and video data of the monitored location reflecting this event is received from the video camera (step 100 of FIG. 2). Next, the video data is analyzed with reference to user preferences (step 110 of FIG. 2), and the determination is made that an event of interest to the user has occurred at the monitored location (step 120 of FIG. 2). Therefore, a portion of the video data associated with the event of interest is selected (step 130, FIG. 2) and then the event notice ("ALERT!") is generated (step 140, FIG. 2) and sent to the user device (step 150, FIG. 2). In other embodiments, a user can use an invisible fence application of the invention to find out if a vehicle stops in front of a house or drives by slowly, keep an eye on neighbors or strangers who park close to home, or know when small children, elderly family or pets enter "off-limits" areas or leave the house. Additionally, the user profile can be specifically tailored to fit a wide range of situations. For example, if a user has a "night owl" in the family or a late-morning snoozer, the user profile may be adapted to these specific household patterns. - Transitory Monitoring
- Another application of the inventive technology is for transitory monitoring. This can enable fewer people to monitor a boundary or border for activity more effectively. For example, such an application can be deployed along a national border, at a toll station to watch for toll violators, or at a turnstile to watch for turnstile violators. For example, such an application can be used to detect a person walking the wrong way in an exit area. This application can also be used to accurately detect wrong-way motion in circumstances of heavy traffic and crowding that would confuse or disable existing solutions. Such an application can also allow for the erection of an invisible (video) "fence" to establish transitory protection and monitoring zones around objects/areas of interest, for example for temporary applications. This application can be much more expedient to erect than an actual physical impediment such as a fence, and can be transparent to people in the area. This application can be employed in situations where the erection of a physical obstacle is undesirable or impractical, such as at a memorial or other attraction, the enjoyment of which would be degraded by a physical impediment. Such an application could also be used to create transitory boundaries over water or unsteady or unstable ground, such as swamp land, where the erection of a physical boundary is impractical or impossible. Such an application can be useful in areas such as wildlife preserves or reserves, where construction is not allowed or would interfere with the ecology. Such an application can also be used, according to a user-defined profile, to track the migration of wildlife, without influencing or interfering with such migrations. This can also be useful to determine populations of wildlife. This application can also be used underwater in conjunction with underwater cameras to track specific fish, or other sea life such as specific whales or dolphins.
Such an application can be field-expedient in that it can be erected anywhere a wireless broadband or other data link can be established. Such an application can easily be moved to adapt to dynamic monitoring situations.
- Notify Additional Recipients
- As noted above, applications of the inventive technology may be used to provide an event notice, with or without selected segments of video, to other recipients in addition to or instead of the user, depending on the circumstances: for example, to law enforcement, neighbors, or relatives. An event notice with an uncompressed video excerpt may, for instance, be sent to the local police department.
FIG. 11 illustrates an embodiment that transmits video to a central station of a security service provider to aid in possible later identification of an intruder. In this embodiment, various sensors, corresponding to the sensors 45 of FIG. 1, are incorporated into the monitored location. This embodiment shows how at least the following sensors can be used: a bistatic beam sensor (labeled TX, C, M); a glass breakage sensor (labeled C, S); a simple electrical current contact sensor (labeled CT); and a proximity sensor (no label). FIG. 11 also illustrates an alarm unit with status screen (labeled S) at the monitored location. In this embodiment video data of the monitored location is received from the video cameras 20 (step 100, FIG. 2) along with data from the other sensors associated with the monitored location (step 105, FIG. 2). The video data is analyzed with reference to user preferences and the data from the other sensors (step 110, FIG. 2). A determination is made as to whether an event of interest to the user has occurred at the monitored location, and an event notice is generated which is then sent to the central station (corresponding to the other recipients 85 of FIG. 1).
- Portions of video corresponding to times immediately before and after an alarm trigger can also be sent. A user, such as a private homeowner, can have a compressed excerpt with event notice sent directly to his or her wireless personal user device. The mode of the event notice can be controlled by the programming of the user defined profile. For example, a user can specify that he or she receive an email, voice message, SMS, or compressed video clip directed to his or her personal media-capable user device, depending on the time of day and day of the week. Related applications may also be used to send selected segments of incoming video data corresponding to a time period after the event of interest at the monitored location has been detected.
Video segments can be sent from the central station to recipients in encoded, unencoded, compressed, uncompressed or another format, depending on the capabilities of the recipient and the need for quality in the transmitted video.
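The profile-controlled notification mode described above (email, voice message, SMS, or compressed clip depending on time of day and day of the week) might be modeled as a simple rule table. The day sets, hours, modes, recipient lists, and the `select_notification` helper below are hypothetical examples, not values from the patent:

```python
from datetime import datetime

# Illustrative user-defined profile: each rule is
# (days, start_hour, end_hour, mode, recipients).
PROFILE = [
    ({"Mon", "Tue", "Wed", "Thu", "Fri"}, 9, 17, "sms", ["user"]),
    ({"Mon", "Tue", "Wed", "Thu", "Fri"}, 17, 24, "compressed_clip", ["user"]),
    ({"Sat", "Sun"}, 0, 24, "email", ["user", "neighbor"]),
]

def select_notification(when):
    """Pick the notice mode and recipients for an event at time `when`."""
    day = when.strftime("%a")
    for days, start, end, mode, recipients in PROFILE:
        if day in days and start <= when.hour < end:
            return mode, recipients
    return "email", ["user"]  # fallback when no rule matches

# A weekday evening event would go out as a compressed clip:
mode, recipients = select_notification(datetime(2013, 5, 13, 18, 30))
```

The same lookup could select among the transmission formats mentioned below (encoded, compressed, etc.) based on recipient capabilities.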
- “Stake Outs”
- The inventive technology can also be used for portable and/or temporary “stake outs.” In such an application, a video analytics functional block can be deployed on a mobile processing platform, such as a notebook computer. The mobile processing platform can have an integrated wireless data connection, such as a cellular modem, and be connected to one or more video cameras. Such an application can be used, for example, by a small reconnaissance team, which may like to maintain an inconspicuous or undetected presence. Such an application can allow such a small team to overwatch a much larger area than would otherwise be possible even with multiple cameras, in the absence of the video analytics. This application can reduce “vigilance fatigue,” and thereby extend the useful operational window of the reconnaissance team. Such an application, like the other applications disclosed herein, can also be used with cameras having special capabilities such as night-vision cameras, thermal imaging devices, and infrared cameras.
- Infant Minder
-
FIG. 12 illustrates how a wireless digital video camera can be used to watch an infant in a crib or other sleeping area who may be left at home in the care of someone other than a parent. This embodiment can thus serve as a “nanny cam.” When the infant's parents are out of the house, this embodiment can provide them peace of mind by analyzing video of the sleeping infant and providing an event notice to the parents' smart phone if it is determined that the infant is in distress. Because of the limitations of the wireless connections to both the camera and the smart phone, it would be impractical if not impossible to continuously stream video data to the phone. In this embodiment, only relevant portions of video are sent with the event notice.
- Video Analysis for Selecting Content Matching User Preferences
- Several other applications of video analysis technology are described below. These applications involve analysis of video to select content matching user preferences.
- Consumer Video Subscription Service
- A further embodiment, illustrated in
FIG. 13, is a content-based video subscription system that involves providing a user-definable profile describing the types of video content the user would like to have forwarded to him or her from a server running video analytics interoperable with the user's profile. Alternatively or additionally, the server can send the user information about the video (e.g., time, source, content) based on its content. In the embodiment in which the user receives information about videos based on content, the user can then choose whether or not to download the video from the server.
- In the content-based video subscription embodiment, the user can specify different user profiles associated with various transmission modes to accommodate the bandwidth and processing limitations of different receiving user devices. For example, the user can specify that short, compressed clips of video containing news or coverage of selected sports teams be sent to a mobile device such as a PDA, smart phone, media-capable cellular phone, or other portable user device. The user device can be provided with the software necessary to play the compressed excerpts at an acceptable quality. The user's profile can specify which video sources the video analytics should monitor. For example, the analytics can be directed to crawl the web for content, run periodic searches for content, watch video content source sites such as YouTube, Google Video, or others, and/or monitor specified blog postings, classes of blogs, advertising, classifieds, or auction services. For example, during a political election, a user can build a profile to monitor political blogs and news outlets for video featuring a particular candidate or particular issues.
A user's profile can additionally specify, for example, that selections of video containing scenes from a particular movie producer, actor, writer, director, YouTube broadcaster, or organization, or having some other association, trigger the generation and transmission of an event notice, such as an email to the user's email account. The email can contain instructions on how to download and view the video, such as a hyperlink to the relevant video or selected segments of the video. The video or segments selected according to the user's profile can be cached at the central server. A user can define and maintain one or more profiles simultaneously. A user can maintain profiles in different statuses, such as active or inactive. A user can have more than one active profile simultaneously. A user profile can also be programmed to monitor a web cam or several web cams.
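The profile features just described (monitored sources, content triggers, and active/inactive statuses) could be represented as a small data structure that incoming video metadata is matched against. This is a hypothetical sketch; the field names, sample profiles, and sample metadata are invented for illustration:

```python
# Invented example profiles: only active ones are evaluated.
profiles = [
    {"name": "election", "status": "active",
     "sources": {"news_blogs", "youtube"},
     "keywords": {"candidate-x", "ballot-measure-7"}},
    {"name": "sports", "status": "inactive",
     "sources": {"youtube"},
     "keywords": {"home-team"}},
]

def matching_profiles(video_meta):
    """Return the names of active profiles a video's metadata satisfies."""
    hits = []
    for p in profiles:
        if p["status"] != "active":
            continue  # inactive profiles are retained but not evaluated
        if (video_meta["source"] in p["sources"]
                and p["keywords"] & video_meta["tags"]):  # set intersection
            hits.append(p["name"])
    return hits

meta = {"source": "youtube", "tags": {"candidate-x", "debate"}}
print(matching_profiles(meta))  # ['election']
```

Each hit would then drive an event notice (e.g., an email containing a hyperlink to the cached video) as described above.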
- Web-Hosted, User-Defined Content “Dashboard”
- A further application of the inventive technology can use analytics to push relevant video content and information related to and/or describing the video content to an online interface, such as a “dashboard,” that can allow a user to review, play, store and manage clips. Such an application can consult a user defined profile to analyze video from various sources including online subscription services, personal archives, clips and/or streams sent from friends or family, clips and/or content located at links sent to a designated email inbox, web cam content, YouTube or other video site content, search results, blogs, and other sources for desired content. Video content can then, depending on system limitations and the user profile, either be pushed to an online server equipped with a management architecture, or selected portions can be compressed and saved, or links can be assembled and provided for review.
- In one embodiment of the web-hosted, user-defined content dashboard application, a secure token is created. This secure token is used to authorize recipients to interact with, and become a user on, the dashboard system based on privilege levels defined in the user profile. For instance, there can be multiple levels of authorization that allow the following access: a) the ability to view the event message alone, but not view video, b) the ability to view the message and play a short video clip within a pre- and post-event interval, the interval being defined in the profile, c) the ability to view event video and optionally live video from the camera responsible for the event, d) the ability to view event video and corresponding recorded video from spatially-nearby cameras for the duration of the event (past video), e) the ability to view live and recorded video from the event triggering camera and nearby cameras, and f) the ability to control an event camera capable of pan-tilt-zoom in order to view live video to examine the scene.
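The privilege levels (a) through (f) above can be captured as a capability table consulted when a token holder requests an action. The capability names are paraphrased from the levels in the text; the token layout and `authorize` helper are hypothetical:

```python
# Capability sets for privilege levels (a)-(f), paraphrasing the text.
CAPABILITIES = {
    "a": {"view_message"},
    "b": {"view_message", "play_event_clip"},
    "c": {"view_message", "play_event_clip", "view_live_event_camera"},
    "d": {"view_message", "play_event_clip", "view_nearby_recorded"},
    "e": {"view_message", "play_event_clip", "view_live_event_camera",
          "view_nearby_recorded", "view_nearby_live"},
    "f": {"view_message", "play_event_clip", "view_live_event_camera",
          "view_nearby_recorded", "view_nearby_live", "control_ptz"},
}

def authorize(token, action):
    """Check a recipient's secure token against the requested action.
    The privilege level would be set from the user profile when the
    token is created; unknown tokens fall back to the lowest level."""
    level = token.get("privilege", "a")
    return action in CAPABILITIES.get(level, set())

token = {"recipient": "neighbor", "privilege": "b"}
assert authorize(token, "play_event_clip")
assert not authorize(token, "control_ptz")  # PTZ control requires level f
```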
- Targeted Advertising
- This application of the disclosed technology involves embedded advertising. This application can send advertising content along with video to a subscriber based on scene content. The advertising can be associated with the content of the video in order to be more effective through “targeting.” Another application targets “live” advertising. This application uses video analytics to analyze video taken from the relevant target location to characterize the potential shoppers and vary the advertising message accordingly. This application employs one or more cameras, and optionally other sensors such as audio and/or ground-mounted pressure sensors. This application can also employ traditional motion sensors to gather additional traffic data. Additionally, this application uses a computer processor such as a personal computer, configured to run video analytics software interoperable with a user profile. This application can also use a control module to direct messaging at one or more active advertising devices such as marquees, billboards, and flat-screen displays. In one application, storefront advertising is tailored according to the people outside a storefront or near a billboard. For example, as illustrated in
FIG. 14, advertising can be tailored based on the number of people present in the advertising zone, how long they have been there, and/or whether they are children or adults, or predominantly men or women. In another, related application, the same approach can be used after business hours to observe the storefront or other advertising zone. For example, the displayed “advertising message” can be adjusted to show a “lurker” that he or she is being observed, for example via some type of notice on a screen that is used to display ads. Messaging criteria for this application can be controlled through specifications in a user profile. This application can target messages at a mall or other commercial area. A refinement of this application can analyze video to determine the general reaction of targeted audiences to a particular message and to make adjustments accordingly. For example, if the analytics notices a positive reaction to a particular message, evidenced through physical manifestations such as smiles, laughter, or thoughtful consideration, the message can be sustained, continued, or otherwise pursued. If the analytics notices only quick glances, lack of attentiveness, or disinterest, the message can be changed.
- Channel Controller
- This application of video analysis is a channel-switcher that switches content channels (broadcast television, Internet-based content channels, etc.) based on content analysis and a user profile. Analytics can continuously scan channels for desired content and either switch automatically when that content is found or display an event notice providing the viewer the option to switch to the channel with the found content. This application can also be used to record desired content automatically, for example in a DVR application. This application can also be used to switch away from a channel when certain content, such as objectionable content, is found. For example, this application can be used to determine when violence is present and then tune away from the offending station, and additionally to lock the station out for a pre-selected period of time, such as 15 or 30 minutes. This application can therefore be desirable for use in ensuring appropriate content for younger viewers.
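The lock-out behavior described above (tune away from objectionable content and block the channel for a pre-selected period) can be sketched as follows. The 15-minute figure comes from the text; the function names and in-memory state are hypothetical:

```python
import time

LOCKOUT_SECONDS = 15 * 60     # one of the pre-selected periods in the text
_locked = {}                  # channel -> timestamp when it unlocks

def flag_objectionable(channel, now=None):
    """Called when the analytics finds objectionable content: the
    channel becomes unavailable until the lockout period elapses."""
    now = time.time() if now is None else now
    _locked[channel] = now + LOCKOUT_SECONDS

def can_tune(channel, now=None):
    """True if the channel is not currently locked out."""
    now = time.time() if now is None else now
    return now >= _locked.get(channel, 0.0)

flag_objectionable("ch7", now=1000.0)
assert not can_tune("ch7", now=1000.0 + 60)           # still locked
assert can_tune("ch7", now=1000.0 + LOCKOUT_SECONDS)  # lockout expired
```

The same state could back the automatic channel-switching and DVR-recording behaviors: a scan loop would simply skip any channel for which `can_tune` is false.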
- Some embodiments include a processor and a related processor-readable medium having instructions or computer code thereon for performing various processor-implemented operations. Such processors can be implemented as hardware modules such as embedded microprocessors, microprocessors as part of a computer system, Application-Specific Integrated Circuits (“ASICs”), and Programmable Logic Devices (“PLDs”). Such processors can also be implemented as one or more software modules in programming languages such as Java, C++, C, assembly, a hardware description language, or any other suitable programming language.
- A processor according to some embodiments includes media and computer code (also can be referred to as code) specially designed and constructed for the specific purpose or purposes. Examples of processor-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (“CD/DVDs”), Compact Disc-Read Only Memories (“CD-ROMs”), and holographic devices; magneto-optical storage media such as optical disks, and read-only memory (“ROM”) and random-access memory (“RAM”) devices. Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, an embodiment of the invention can be implemented using Java, C++, or other object oriented programming language and development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
Claims (17)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/524,571 US20130121527A1 (en) | 2007-11-29 | 2012-06-15 | Systems and methods for analysis of video content, event notification, and video content provision |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US99098307P | 2007-11-29 | 2007-11-29 | |
US12/277,996 US8204273B2 (en) | 2007-11-29 | 2008-11-25 | Systems and methods for analysis of video content, event notification, and video content provision |
US13/524,571 US20130121527A1 (en) | 2007-11-29 | 2012-06-15 | Systems and methods for analysis of video content, event notification, and video content provision |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/277,996 Continuation US8204273B2 (en) | 2007-11-29 | 2008-11-25 | Systems and methods for analysis of video content, event notification, and video content provision |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130121527A1 true US20130121527A1 (en) | 2013-05-16 |
Family
ID=40675756
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/277,996 Active 2030-12-05 US8204273B2 (en) | 2007-11-29 | 2008-11-25 | Systems and methods for analysis of video content, event notification, and video content provision |
US13/524,571 Abandoned US20130121527A1 (en) | 2007-11-29 | 2012-06-15 | Systems and methods for analysis of video content, event notification, and video content provision |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/277,996 Active 2030-12-05 US8204273B2 (en) | 2007-11-29 | 2008-11-25 | Systems and methods for analysis of video content, event notification, and video content provision |
Country Status (1)
Country | Link |
---|---|
US (2) | US8204273B2 (en) |
Families Citing this family (194)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6940998B2 (en) * | 2000-02-04 | 2005-09-06 | Cernium, Inc. | System for automated screening of security cameras |
JP5213105B2 (en) * | 2008-01-17 | 2013-06-19 | 株式会社日立製作所 | Video network system and video data management method |
AU2008200926B2 (en) * | 2008-02-28 | 2011-09-29 | Canon Kabushiki Kaisha | On-camera summarisation of object relationships |
US8427552B2 (en) * | 2008-03-03 | 2013-04-23 | Videoiq, Inc. | Extending the operational lifetime of a hard-disk drive used in video data storage applications |
US9325951B2 (en) | 2008-03-03 | 2016-04-26 | Avigilon Patent Holding 2 Corporation | Content-aware computer networking devices with video analytics for reducing video storage and video communication bandwidth requirements of a video surveillance network camera system |
US20110292181A1 (en) * | 2008-04-16 | 2011-12-01 | Canesta, Inc. | Methods and systems using three-dimensional sensing for user interaction with applications |
US9141862B2 (en) * | 2008-09-26 | 2015-09-22 | Harris Corporation | Unattended surveillance device and associated methods |
US8633984B2 (en) * | 2008-12-18 | 2014-01-21 | Honeywell International, Inc. | Process of sequentially dubbing a camera for investigation and review |
US20100201815A1 (en) * | 2009-02-09 | 2010-08-12 | Vitamin D, Inc. | Systems and methods for video monitoring |
KR20110128322A (en) | 2009-03-03 | 2011-11-29 | 디지맥 코포레이션 | Narrowcasting from public displays, and related arrangements |
WO2010124062A1 (en) | 2009-04-22 | 2010-10-28 | Cernium Corporation | System and method for motion detection in a surveillance video |
EP2452489B1 (en) * | 2009-07-08 | 2020-06-17 | Honeywell International Inc. | Systems and methods for managing video data |
US20110234829A1 (en) * | 2009-10-06 | 2011-09-29 | Nikhil Gagvani | Methods, systems and apparatus to configure an imaging device |
CA2776909A1 (en) | 2009-10-07 | 2011-04-14 | Telewatch Inc. | Video analytics method and system |
WO2011041903A1 (en) * | 2009-10-07 | 2011-04-14 | Telewatch Inc. | Video analytics with pre-processing at the source end |
US20110115931A1 (en) * | 2009-11-17 | 2011-05-19 | Kulinets Joseph M | Image management system and method of controlling an image capturing device using a mobile communication device |
US20110115930A1 (en) * | 2009-11-17 | 2011-05-19 | Kulinets Joseph M | Image management system and method of selecting at least one of a plurality of cameras |
US20110133930A1 (en) * | 2009-12-09 | 2011-06-09 | Honeywell International Inc. | Filtering video events in a secured area using loose coupling within a security system |
US9143739B2 (en) | 2010-05-07 | 2015-09-22 | Iwatchlife, Inc. | Video analytics with burst-like transmission of video data |
US20110285863A1 (en) * | 2010-05-23 | 2011-11-24 | James Burke | Live television broadcasting system for the internet |
US8688806B2 (en) | 2010-06-11 | 2014-04-01 | Tellabs Operations, Inc. | Procedure, apparatus, system, and computer program for collecting data used for analytics |
US9906838B2 (en) | 2010-07-12 | 2018-02-27 | Time Warner Cable Enterprises Llc | Apparatus and methods for content delivery and message exchange across multiple content delivery networks |
CA2748060A1 (en) | 2010-08-04 | 2012-02-04 | Iwatchlife Inc. | Method and system for making video calls |
CA2748059A1 (en) | 2010-08-04 | 2012-02-04 | Iwatchlife Inc. | Method and system for initiating communication via a communication network |
US8780162B2 (en) | 2010-08-04 | 2014-07-15 | Iwatchlife Inc. | Method and system for locating an individual |
US8909617B2 (en) * | 2011-01-26 | 2014-12-09 | Hulu, LLC | Semantic matching by content analysis |
KR101482025B1 (en) * | 2011-02-25 | 2015-01-13 | 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 | Augmented reality presentations |
CN104765801A (en) | 2011-03-07 | 2015-07-08 | 科宝2股份有限公司 | Systems and methods for analytic data gathering from image providers at event or geographic location |
JP5769468B2 (en) * | 2011-03-30 | 2015-08-26 | キヤノン株式会社 | Object detection system and object detection method |
CN103502980B (en) * | 2011-04-11 | 2016-12-07 | 英特尔公司 | There is content transfer and the Next Generation Television machine of interactive selection ability |
US20120265616A1 (en) * | 2011-04-13 | 2012-10-18 | Empire Technology Development Llc | Dynamic advertising content selection |
WO2013020165A2 (en) | 2011-08-05 | 2013-02-14 | HONEYWELL INTERNATIONAL INC. Attn: Patent Services | Systems and methods for managing video data |
US9087363B2 (en) * | 2011-08-30 | 2015-07-21 | Genband Us Llc | Methods, systems, and computer readable media for managing multiple personas within end user applications |
US8560933B2 (en) | 2011-10-20 | 2013-10-15 | Microsoft Corporation | Merging and fragmenting graphical objects |
US9071740B1 (en) | 2011-10-28 | 2015-06-30 | Google Inc. | Modular camera system |
US9197686B1 (en) | 2012-01-06 | 2015-11-24 | Google Inc. | Backfill of video stream |
US9537968B1 (en) | 2012-01-06 | 2017-01-03 | Google Inc. | Communication of socket protocol based data over a storage protocol based interface |
US20130188044A1 (en) * | 2012-01-19 | 2013-07-25 | Utechzone Co., Ltd. | Intelligent monitoring system with automatic notification and intelligent monitoring device thereof |
US20130278715A1 (en) * | 2012-03-16 | 2013-10-24 | Mark Nutsch | System and method for discreetly collecting 3d immersive/panoramic imagery |
RU2484529C1 (en) * | 2012-03-21 | 2013-06-10 | Общество с ограниченной ответственностью "Синезис" | Method of ranking video data |
US20150106738A1 (en) * | 2012-04-17 | 2015-04-16 | Iwatchlife Inc. | System and method for processing image or audio data |
US9380197B2 (en) | 2012-07-13 | 2016-06-28 | Intel Corporation | Techniques for video analytics of captured video content |
CA2822217A1 (en) | 2012-08-02 | 2014-02-02 | Iwatchlife Inc. | Method and system for anonymous video analytics processing |
US10117309B1 (en) * | 2012-08-17 | 2018-10-30 | Kuna Systems Corporation | Internet protocol security camera with behavior detection |
US9213781B1 (en) | 2012-09-19 | 2015-12-15 | Placemeter LLC | System and method for processing image data |
US9197861B2 (en) | 2012-11-15 | 2015-11-24 | Avo Usa Holding 2 Corporation | Multi-dimensional virtual beam detection for video analytics |
US9565226B2 (en) * | 2013-02-13 | 2017-02-07 | Guy Ravine | Message capturing and seamless message sharing and navigation |
US11039108B2 (en) | 2013-03-15 | 2021-06-15 | James Carey | Video identification and analytical recognition system |
US11743431B2 (en) * | 2013-03-15 | 2023-08-29 | James Carey | Video identification and analytical recognition system |
US9294712B2 (en) | 2013-03-20 | 2016-03-22 | Google Inc. | Interpolated video tagging |
US9472090B2 (en) * | 2013-04-23 | 2016-10-18 | Canary Connect, Inc. | Designation and notifying backup users for location-based monitoring |
US20140354820A1 (en) * | 2013-05-03 | 2014-12-04 | Daniel Danialian | System and method for live surveillance property monitoring |
US9264474B2 (en) | 2013-05-07 | 2016-02-16 | KBA2 Inc. | System and method of portraying the shifting level of interest in an object or location |
US8953040B1 (en) | 2013-07-26 | 2015-02-10 | SkyBell Technologies, Inc. | Doorbell communication and electrical systems |
US11004312B2 (en) | 2015-06-23 | 2021-05-11 | Skybell Technologies Ip, Llc | Doorbell communities |
US10672238B2 (en) | 2015-06-23 | 2020-06-02 | SkyBell Technologies, Inc. | Doorbell communities |
US9172920B1 (en) | 2014-09-01 | 2015-10-27 | SkyBell Technologies, Inc. | Doorbell diagnostics |
US9196133B2 (en) | 2013-07-26 | 2015-11-24 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9769435B2 (en) | 2014-08-11 | 2017-09-19 | SkyBell Technologies, Inc. | Monitoring systems and methods |
US9230424B1 (en) | 2013-12-06 | 2016-01-05 | SkyBell Technologies, Inc. | Doorbell communities |
US9179108B1 (en) | 2013-07-26 | 2015-11-03 | SkyBell Technologies, Inc. | Doorbell chime systems and methods |
US9342936B2 (en) | 2013-07-26 | 2016-05-17 | SkyBell Technologies, Inc. | Smart lock systems and methods |
US10204467B2 (en) | 2013-07-26 | 2019-02-12 | SkyBell Technologies, Inc. | Smart lock systems and methods |
US9113052B1 (en) | 2013-07-26 | 2015-08-18 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9094584B2 (en) | 2013-07-26 | 2015-07-28 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US10708404B2 (en) | 2014-09-01 | 2020-07-07 | Skybell Technologies Ip, Llc | Doorbell communication and electrical systems |
US8937659B1 (en) | 2013-07-26 | 2015-01-20 | SkyBell Technologies, Inc. | Doorbell communication and electrical methods |
US20180343141A1 (en) | 2015-09-22 | 2018-11-29 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US10044519B2 (en) | 2015-01-05 | 2018-08-07 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9165444B2 (en) | 2013-07-26 | 2015-10-20 | SkyBell Technologies, Inc. | Light socket cameras |
US11909549B2 (en) | 2013-07-26 | 2024-02-20 | Skybell Technologies Ip, Llc | Doorbell communication systems and methods |
US9179107B1 (en) | 2013-07-26 | 2015-11-03 | SkyBell Technologies, Inc. | Doorbell chime systems and methods |
US9113051B1 (en) | 2013-07-26 | 2015-08-18 | SkyBell Technologies, Inc. | Power outlet cameras |
US11651665B2 (en) | 2013-07-26 | 2023-05-16 | Skybell Technologies Ip, Llc | Doorbell communities |
US9118819B1 (en) | 2013-07-26 | 2015-08-25 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US11764990B2 (en) | 2013-07-26 | 2023-09-19 | Skybell Technologies Ip, Llc | Doorbell communications systems and methods |
US9237318B2 (en) | 2013-07-26 | 2016-01-12 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US10733823B2 (en) | 2013-07-26 | 2020-08-04 | Skybell Technologies Ip, Llc | Garage door communication systems and methods |
US10440165B2 (en) | 2013-07-26 | 2019-10-08 | SkyBell Technologies, Inc. | Doorbell communication and electrical systems |
US8872915B1 (en) | 2013-07-26 | 2014-10-28 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9247219B2 (en) | 2013-07-26 | 2016-01-26 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9058738B1 (en) | 2013-07-26 | 2015-06-16 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9172921B1 (en) | 2013-12-06 | 2015-10-27 | SkyBell Technologies, Inc. | Doorbell antenna |
US9142214B2 (en) | 2013-07-26 | 2015-09-22 | SkyBell Technologies, Inc. | Light socket cameras |
US9235943B2 (en) | 2013-07-26 | 2016-01-12 | Joseph Frank Scalisi | Remote identity verification of lodging guests |
US9060103B2 (en) | 2013-07-26 | 2015-06-16 | SkyBell Technologies, Inc. | Doorbell security and safety |
US20170263067A1 (en) | 2014-08-27 | 2017-09-14 | SkyBell Technologies, Inc. | Smart lock systems and methods |
US9049352B2 (en) | 2013-07-26 | 2015-06-02 | SkyBell Technologies, Inc. | Pool monitor systems and methods |
US9013575B2 (en) | 2013-07-26 | 2015-04-21 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9197867B1 (en) | 2013-12-06 | 2015-11-24 | SkyBell Technologies, Inc. | Identity verification using a social network |
US9179109B1 (en) | 2013-12-06 | 2015-11-03 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9172922B1 (en) | 2013-12-06 | 2015-10-27 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US8941736B1 (en) | 2013-07-26 | 2015-01-27 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9065987B2 (en) | 2013-07-26 | 2015-06-23 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US8947530B1 (en) | 2013-07-26 | 2015-02-03 | Joseph Frank Scalisi | Smart lock systems and methods |
US8823795B1 (en) * | 2013-07-26 | 2014-09-02 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9053622B2 (en) | 2013-07-26 | 2015-06-09 | Joseph Frank Scalisi | Light socket cameras |
US9736284B2 (en) | 2013-07-26 | 2017-08-15 | SkyBell Technologies, Inc. | Doorbell communication and electrical systems |
US11889009B2 (en) | 2013-07-26 | 2024-01-30 | Skybell Technologies Ip, Llc | Doorbell communication and electrical systems |
US9160987B1 (en) | 2013-07-26 | 2015-10-13 | SkyBell Technologies, Inc. | Doorbell chime systems and methods |
US9060104B2 (en) | 2013-07-26 | 2015-06-16 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US20150055832A1 (en) * | 2013-08-25 | 2015-02-26 | Nikolay Vadimovich PTITSYN | Method for video data ranking |
US10523903B2 (en) | 2013-10-30 | 2019-12-31 | Honeywell International Inc. | Computer implemented systems frameworks and methods configured for enabling review of incident data |
US9251416B2 (en) * | 2013-11-19 | 2016-02-02 | Xerox Corporation | Time scale adaptive motion detection |
US9799183B2 (en) | 2013-12-06 | 2017-10-24 | SkyBell Technologies, Inc. | Doorbell package detection systems and methods |
US9253455B1 (en) | 2014-06-25 | 2016-02-02 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9743049B2 (en) | 2013-12-06 | 2017-08-22 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9786133B2 (en) | 2013-12-06 | 2017-10-10 | SkyBell Technologies, Inc. | Doorbell chime systems and methods |
US10643271B1 (en) * | 2014-01-17 | 2020-05-05 | Glenn Joseph Bronson | Retrofitting legacy surveillance systems for traffic profiling and monetization |
US9237315B2 (en) * | 2014-03-03 | 2016-01-12 | Vsk Electronics Nv | Intrusion detection with directional sensing |
US9533413B2 (en) | 2014-03-13 | 2017-01-03 | Brain Corporation | Trainable modular robotic apparatus and methods |
US9987743B2 (en) | 2014-03-13 | 2018-06-05 | Brain Corporation | Trainable modular robotic apparatus and methods |
JP2017525064A (en) | 2014-05-30 | 2017-08-31 | Placemeter Inc. | System and method for activity monitoring using video data |
US20170244937A1 (en) * | 2014-06-03 | 2017-08-24 | Gopro, Inc. | Apparatus and methods for aerial video acquisition |
US20160042621A1 (en) * | 2014-06-13 | 2016-02-11 | William Daylesford Hogg | Video Motion Detection Method and Alert Management |
US10687029B2 (en) | 2015-09-22 | 2020-06-16 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US9888216B2 (en) | 2015-09-22 | 2018-02-06 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US20170085843A1 (en) | 2015-09-22 | 2017-03-23 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US11184589B2 (en) | 2014-06-23 | 2021-11-23 | Skybell Technologies Ip, Llc | Doorbell communication systems and methods |
CN104079881B (en) * | 2014-07-01 | 2017-09-12 | 中磊电子(苏州)有限公司 | Monitoring device and related monitoring method |
US10140827B2 (en) | 2014-07-07 | 2018-11-27 | Google Llc | Method and system for processing motion event notifications |
US9501915B1 (en) | 2014-07-07 | 2016-11-22 | Google Inc. | Systems and methods for analyzing a video stream |
US9420331B2 (en) | 2014-07-07 | 2016-08-16 | Google Inc. | Method and system for categorizing detected motion events |
US9997036B2 (en) | 2015-02-17 | 2018-06-12 | SkyBell Technologies, Inc. | Power outlet cameras |
USD782495S1 (en) | 2014-10-07 | 2017-03-28 | Google Inc. | Display screen or portion thereof with graphical user interface |
JP5866539B1 (en) * | 2014-11-21 | 2016-02-17 | パナソニックIpマネジメント株式会社 | Communication system and sound source reproduction method in communication system |
US10133935B2 (en) * | 2015-01-13 | 2018-11-20 | Vivint, Inc. | Doorbell camera early detection |
US10586114B2 (en) * | 2015-01-13 | 2020-03-10 | Vivint, Inc. | Enhanced doorbell camera interactions |
US10635907B2 (en) * | 2015-01-13 | 2020-04-28 | Vivint, Inc. | Enhanced doorbell camera interactions |
US10742938B2 (en) | 2015-03-07 | 2020-08-11 | Skybell Technologies Ip, Llc | Garage door communication systems and methods |
US20200082679A1 (en) | 2015-03-20 | 2020-03-12 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US11575537B2 (en) | 2015-03-27 | 2023-02-07 | Skybell Technologies Ip, Llc | Doorbell communication systems and methods |
US11381686B2 (en) | 2015-04-13 | 2022-07-05 | Skybell Technologies Ip, Llc | Power outlet cameras |
US10043078B2 (en) * | 2015-04-21 | 2018-08-07 | Placemeter LLC | Virtual turnstile system and method |
US11334751B2 (en) | 2015-04-21 | 2022-05-17 | Placemeter Inc. | Systems and methods for processing video data for activity monitoring |
US11641452B2 (en) | 2015-05-08 | 2023-05-02 | Skybell Technologies Ip, Llc | Doorbell communication systems and methods |
US9544485B2 (en) | 2015-05-27 | 2017-01-10 | Google Inc. | Multi-mode LED illumination system |
US10380431B2 (en) | 2015-06-01 | 2019-08-13 | Placemeter LLC | Systems and methods for processing video streams |
US9554063B2 (en) | 2015-06-12 | 2017-01-24 | Google Inc. | Using infrared images of a monitored scene to identify windows |
US9454820B1 (en) | 2015-06-12 | 2016-09-27 | Google Inc. | Using a scene illuminating infrared emitter array in a video monitoring camera for depth determination |
US9626849B2 (en) | 2015-06-12 | 2017-04-18 | Google Inc. | Using scene information from a security camera to reduce false security alerts |
US9235899B1 (en) | 2015-06-12 | 2016-01-12 | Google Inc. | Simulating an infrared emitter array in a video monitoring camera to construct a lookup table for depth determination |
US9489745B1 (en) | 2015-06-12 | 2016-11-08 | Google Inc. | Using depth maps of a scene to identify movement of a video camera |
US9386230B1 (en) | 2015-06-12 | 2016-07-05 | Google Inc. | Day and night detection based on one or more of illuminant detection, lux level detection, and tiling |
US9886620B2 (en) | 2015-06-12 | 2018-02-06 | Google Llc | Using a scene illuminating infrared emitter array in a video monitoring camera to estimate the position of the camera |
US9613423B2 (en) | 2015-06-12 | 2017-04-04 | Google Inc. | Using a depth map of a monitored scene to identify floors, walls, and ceilings |
US9361011B1 (en) | 2015-06-14 | 2016-06-07 | Google Inc. | Methods and systems for presenting multiple live video feeds in a user interface |
US20180047269A1 (en) | 2015-06-23 | 2018-02-15 | SkyBell Technologies, Inc. | Doorbell communities |
US9840003B2 (en) | 2015-06-24 | 2017-12-12 | Brain Corporation | Apparatus and methods for safe navigation of robotic devices |
US10706702B2 (en) | 2015-07-30 | 2020-07-07 | Skybell Technologies Ip, Llc | Doorbell package detection systems and methods |
US9805567B2 (en) | 2015-09-14 | 2017-10-31 | Logitech Europe S.A. | Temporal video streaming and summaries |
US10299017B2 (en) | 2015-09-14 | 2019-05-21 | Logitech Europe S.A. | Video searching for filtered and tagged motion |
US20170076156A1 (en) * | 2015-09-14 | 2017-03-16 | Logitech Europe S.A. | Automatically determining camera location and determining type of scene |
WO2017046704A1 (en) * | 2015-09-14 | 2017-03-23 | Logitech Europe S.A. | User interface for video summaries |
WO2017120375A1 (en) * | 2016-01-05 | 2017-07-13 | Wizr Llc | Video event detection and notification |
JP6663229B2 (en) * | 2016-01-20 | 2020-03-11 | キヤノン株式会社 | Information processing apparatus, information processing method, and program |
KR102586962B1 (en) * | 2016-04-07 | 2023-10-10 | 한화비전 주식회사 | Surveillance system and controlling method thereof |
US10506237B1 (en) | 2016-05-27 | 2019-12-10 | Google Llc | Methods and devices for dynamic adaptation of encoding bitrate for video streaming |
US10043332B2 (en) | 2016-05-27 | 2018-08-07 | SkyBell Technologies, Inc. | Doorbell package detection systems and methods |
US10489016B1 (en) * | 2016-06-20 | 2019-11-26 | Amazon Technologies, Inc. | Identifying and recommending events of interest in real-time media content |
US10957171B2 (en) * | 2016-07-11 | 2021-03-23 | Google Llc | Methods and systems for providing event alerts |
US10380429B2 (en) | 2016-07-11 | 2019-08-13 | Google Llc | Methods and systems for person detection in a video feed |
US10192415B2 (en) | 2016-07-11 | 2019-01-29 | Google Llc | Methods and systems for providing intelligent alerts for events |
US10180615B2 (en) | 2016-10-31 | 2019-01-15 | Google Llc | Electrochromic filtering in a camera |
JP2018163460A (en) * | 2017-03-24 | 2018-10-18 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
US10938687B2 (en) * | 2017-03-29 | 2021-03-02 | Accenture Global Solutions Limited | Enabling device under test conferencing via a collaboration platform |
US11228549B2 (en) | 2017-04-14 | 2022-01-18 | International Business Machines Corporation | Mobile device sending format translation based on message receiver's environment |
US10599950B2 (en) | 2017-05-30 | 2020-03-24 | Google Llc | Systems and methods for person recognition data management |
US11783010B2 (en) | 2017-05-30 | 2023-10-10 | Google Llc | Systems and methods of person recognition in video streams |
US10909825B2 (en) | 2017-09-18 | 2021-02-02 | Skybell Technologies Ip, Llc | Outdoor security systems and methods |
US10664688B2 (en) | 2017-09-20 | 2020-05-26 | Google Llc | Systems and methods of detecting and responding to a visitor to a smart home environment |
US11134227B2 (en) | 2017-09-20 | 2021-09-28 | Google Llc | Systems and methods of presenting appropriate actions for responding to a visitor to a smart home environment |
JP2021119426A (en) * | 2018-04-24 | 2021-08-12 | ソニーグループ株式会社 | Information processing device, information processing method and program |
JP2019193089A (en) * | 2018-04-24 | 2019-10-31 | 東芝テック株式会社 | Video analysis device |
EP3830802B1 (en) * | 2018-07-30 | 2024-08-28 | Carrier Corporation | Method for activating an alert when an object is left proximate a room entryway |
WO2020132104A1 (en) * | 2018-12-18 | 2020-06-25 | Kenneth Liu | Systems and methods for crowdsourced incident data distribution |
US11599392B1 (en) * | 2019-08-14 | 2023-03-07 | Kuna Systems Corporation | Hybrid cloud/camera AI computer vision system |
US12056922B2 (en) | 2019-04-26 | 2024-08-06 | Samsara Inc. | Event notification system |
US11787413B2 (en) | 2019-04-26 | 2023-10-17 | Samsara Inc. | Baseline event detection system |
US11080568B2 (en) | 2019-04-26 | 2021-08-03 | Samsara Inc. | Object-model based event detection system |
US10999374B2 (en) | 2019-04-26 | 2021-05-04 | Samsara Inc. | Event detection system |
US11494921B2 (en) | 2019-04-26 | 2022-11-08 | Samsara Networks Inc. | Machine-learned model based event detection |
US11074790B2 (en) | 2019-08-24 | 2021-07-27 | Skybell Technologies Ip, Llc | Doorbell communication systems and methods |
US11893795B2 (en) | 2019-12-09 | 2024-02-06 | Google Llc | Interacting with visitors of a connected home environment |
US11675042B1 (en) | 2020-03-18 | 2023-06-13 | Samsara Inc. | Systems and methods of remote object tracking |
US10904446B1 (en) | 2020-03-30 | 2021-01-26 | Logitech Europe S.A. | Advanced video conferencing systems and methods |
US10951858B1 (en) | 2020-03-30 | 2021-03-16 | Logitech Europe S.A. | Advanced video conferencing systems and methods |
US10972655B1 (en) | 2020-03-30 | 2021-04-06 | Logitech Europe S.A. | Advanced video conferencing systems and methods |
US10965908B1 (en) | 2020-03-30 | 2021-03-30 | Logitech Europe S.A. | Advanced video conferencing systems and methods |
US11436906B1 (en) * | 2020-05-18 | 2022-09-06 | Sidhya V Peddinti | Visitor detection, facial recognition, and alert system and processes for assisting memory-challenged patients to recognize entryway visitors |
US11341786B1 (en) | 2020-11-13 | 2022-05-24 | Samsara Inc. | Dynamic delivery of vehicle event data |
EP4068178A1 (en) * | 2021-03-30 | 2022-10-05 | Sony Group Corporation | An electronic device and related methods for monitoring objects |
US20220335795A1 (en) * | 2021-04-16 | 2022-10-20 | Dice Corporation | Hyperlinked digital video alarm electronic document |
US12052530B2 (en) * | 2021-04-23 | 2024-07-30 | Arlo Technologies, Inc. | Electronic monitoring system using video notification |
Family Cites Families (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4857912A (en) * | 1988-07-27 | 1989-08-15 | The United States Of America As Represented By The Secretary Of The Navy | Intelligent security assessment system |
JPH04195397A (en) * | 1990-11-27 | 1992-07-15 | Matsushita Electric Ind Co Ltd | Road trouble monitor device |
EP0631683B1 (en) * | 1992-03-20 | 2001-08-01 | Commonwealth Scientific And Industrial Research Organisation | An object monitoring system |
US5519669A (en) * | 1993-08-19 | 1996-05-21 | At&T Corp. | Acoustically monitored site surveillance and security system for ATM machines and other facilities |
US5708423A (en) * | 1995-05-09 | 1998-01-13 | Sensormatic Electronics Corporation | Zone-Based asset tracking and control system |
US7386372B2 (en) * | 1995-06-07 | 2008-06-10 | Automotive Technologies International, Inc. | Apparatus and method for determining presence of objects in a vehicle |
US6181867B1 (en) * | 1995-06-07 | 2001-01-30 | Intervu, Inc. | Video storage and retrieval system |
US5801618A (en) * | 1996-02-08 | 1998-09-01 | Jenkins; Mark | Vehicle alarm and lot monitoring system |
US5875305A (en) * | 1996-10-31 | 1999-02-23 | Sensormatic Electronics Corporation | Video information management system which provides intelligent responses to video data content features |
US6625383B1 (en) * | 1997-07-11 | 2003-09-23 | Mitsubishi Denki Kabushiki Kaisha | Moving picture collection and event detection apparatus |
US6618074B1 (en) | 1997-08-01 | 2003-09-09 | Wells Fargo Alarm Systems, Inc. | Central alarm computer for video security system |
US6091771A (en) | 1997-08-01 | 2000-07-18 | Wells Fargo Alarm Services, Inc. | Workstation for video security system |
US6069655A (en) | 1997-08-01 | 2000-05-30 | Wells Fargo Alarm Services, Inc. | Advanced video security system |
US6154133A (en) * | 1998-01-22 | 2000-11-28 | Ross & Baruzzini, Inc. | Exit guard system |
US7015806B2 (en) | 1999-07-20 | 2006-03-21 | @Security Broadband Corporation | Distributed monitoring for a video security system |
JP2001103357A (en) | 1999-10-01 | 2001-04-13 | Matsushita Electric Ind Co Ltd | Electronic camera |
US6940998B2 (en) * | 2000-02-04 | 2005-09-06 | Cernium, Inc. | System for automated screening of security cameras |
US6975220B1 (en) | 2000-04-10 | 2005-12-13 | Radia Technologies Corporation | Internet based security, fire and emergency identification and communication system |
US6411209B1 (en) | 2000-12-06 | 2002-06-25 | Koninklijke Philips Electronics N.V. | Method and apparatus to select the best video frame to transmit to a remote station for CCTV based residential security monitoring |
US6625838B2 (en) * | 2001-01-12 | 2003-09-30 | O-Cedar Brands, Inc. | Mop with self-contained wringer sleeve |
GB0102355D0 (en) | 2001-01-30 | 2001-03-14 | Mygard Plc | Security system |
KR100404885B1 (en) | 2001-02-16 | 2003-11-10 | 삼성전자주식회사 | Apparatus for remote surveillance using mobile video phone |
US7113090B1 (en) | 2001-04-24 | 2006-09-26 | Alarm.Com Incorporated | System and method for connecting security systems to a wireless device |
US6400265B1 (en) | 2001-04-24 | 2002-06-04 | Microstrategy, Inc. | System and method for monitoring security systems by using video images |
US7203620B2 (en) * | 2001-07-03 | 2007-04-10 | Sharp Laboratories Of America, Inc. | Summarization of video content |
US7342489B1 (en) | 2001-09-06 | 2008-03-11 | Siemens Schweiz Ag | Surveillance system control unit |
US7085401B2 (en) | 2001-10-31 | 2006-08-01 | Infowrap Systems Ltd. | Automatic object extraction |
US20060165386A1 (en) | 2002-01-08 | 2006-07-27 | Cernium, Inc. | Object selective video recording |
US20040143602A1 (en) | 2002-10-18 | 2004-07-22 | Antonio Ruiz | Apparatus, system and method for automated and adaptive digital image/video surveillance for events and configurations using a rich multimedia relational database |
US7460148B1 (en) | 2003-02-19 | 2008-12-02 | Rockwell Collins, Inc. | Near real-time dissemination of surveillance video |
KR100463777B1 (en) | 2003-03-11 | 2004-12-29 | 삼성전자주식회사 | Bar type portable wireless terminal and rotary type hinge device thereof |
US20040183668A1 (en) | 2003-03-20 | 2004-09-23 | Campbell Robert Colin | Interactive video monitoring (IVM) process |
US20050132424A1 (en) * | 2003-10-21 | 2005-06-16 | Envivo Pharmaceuticals, Inc. | Transgenic flies expressing Abeta42-Dutch |
US20050132414A1 (en) * | 2003-12-02 | 2005-06-16 | Connexed, Inc. | Networked video surveillance system |
US7106193B2 (en) * | 2003-12-23 | 2006-09-12 | Honeywell International, Inc. | Integrated alarm detection and verification device |
US7697026B2 (en) * | 2004-03-16 | 2010-04-13 | 3Vr Security, Inc. | Pipeline architecture for analyzing multiple video streams |
US7486183B2 (en) | 2004-05-24 | 2009-02-03 | Eaton Corporation | Home system and method for sending and displaying digital images |
US7209035B2 (en) | 2004-07-06 | 2007-04-24 | Catcher, Inc. | Portable handheld security device |
US7612666B2 (en) | 2004-07-27 | 2009-11-03 | Wael Badawy | Video based monitoring system |
US7801642B2 (en) | 2004-08-18 | 2010-09-21 | Walgreen Co. | System and method for checking the accuracy of a prescription fill |
JP4182936B2 (en) | 2004-08-31 | 2008-11-19 | ソニー株式会社 | Playback apparatus and display method |
US7944469B2 (en) * | 2005-02-14 | 2011-05-17 | Vigilos, Llc | System and method for using self-learning rules to enable adaptive security monitoring |
US7403116B2 (en) | 2005-02-28 | 2008-07-22 | Westec Intelligent Surveillance, Inc. | Central monitoring/managed surveillance system and method |
WO2006101477A1 (en) | 2005-03-15 | 2006-09-28 | Chubb International Holdings Limited | Nuisance alarm filter |
US7437755B2 (en) * | 2005-10-26 | 2008-10-14 | Cisco Technology, Inc. | Unified network and physical premises access control server |
US7555146B2 (en) | 2005-12-28 | 2009-06-30 | Tsongjy Huang | Identification recognition system for area security |
US20070177800A1 (en) | 2006-02-02 | 2007-08-02 | International Business Machines Corporation | Method and apparatus for maintaining a background image model in a background subtraction system using accumulated motion |
TWI325124B (en) | 2006-05-10 | 2010-05-21 | Realtek Semiconductor Corp | Motion detection method and related apparatus |
US7956735B2 (en) * | 2006-05-15 | 2011-06-07 | Cernium Corporation | Automated, remotely-verified alarm system with intrusion and video surveillance and digital video recording |
JP4767833B2 (en) | 2006-12-15 | 2011-09-07 | 富士通株式会社 | Electronic equipment and camera module unit |
US7679507B2 (en) | 2007-05-16 | 2010-03-16 | Honeywell International Inc. | Video alarm verification |
EP2174310A4 (en) * | 2007-07-16 | 2013-08-21 | Cernium Corp | Apparatus and methods for video alarm verification |
WO2010124062A1 (en) | 2009-04-22 | 2010-10-28 | Cernium Corporation | System and method for motion detection in a surveillance video |
US20110234829A1 (en) | 2009-10-06 | 2011-09-29 | Nikhil Gagvani | Methods, systems and apparatus to configure an imaging device |
US8937658B2 (en) | 2009-10-15 | 2015-01-20 | At&T Intellectual Property I, L.P. | Methods, systems, and products for security services |
- 2008-11-25: US 12/277,996 — granted as US8204273B2 (en), active
- 2012-06-15: US 13/524,571 — published as US20130121527A1 (en), abandoned
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9208665B2 (en) | 2006-05-15 | 2015-12-08 | Checkvideo Llc | Automated, remotely-verified alarm system with intrusion and video surveillance and digital video recording |
US9600987B2 (en) | 2006-05-15 | 2017-03-21 | Checkvideo Llc | Automated, remotely-verified alarm system with intrusion and video surveillance and digital video recording |
US9208666B2 (en) | 2006-05-15 | 2015-12-08 | Checkvideo Llc | Automated, remotely-verified alarm system with intrusion and video surveillance and digital video recording |
US9922514B2 (en) | 2007-07-16 | 2018-03-20 | Checkvideo Llc | Apparatus and methods for alarm verification based on image analytics |
US9208667B2 (en) | 2007-07-16 | 2015-12-08 | Checkvideo Llc | Apparatus and methods for encoding an image with different levels of encoding |
US20090022362A1 (en) * | 2007-07-16 | 2009-01-22 | Nikhil Gagvani | Apparatus and methods for video alarm verification |
US8804997B2 (en) | 2007-07-16 | 2014-08-12 | Checkvideo Llc | Apparatus and methods for video alarm verification |
US20140136701A1 (en) * | 2012-11-13 | 2014-05-15 | International Business Machines Corporation | Distributed Control of a Heterogeneous Video Surveillance Network |
US20140132763A1 (en) * | 2012-11-13 | 2014-05-15 | International Business Machines Corporation | Distributed Control of a Heterogeneous Video Surveillance Network |
US9681103B2 (en) * | 2012-11-13 | 2017-06-13 | International Business Machines Corporation | Distributed control of a heterogeneous video surveillance network |
US9681104B2 (en) * | 2012-11-13 | 2017-06-13 | International Business Machines Corporation | Distributed control of a heterogeneous video surveillance network |
US20150304370A1 (en) * | 2012-12-19 | 2015-10-22 | Empire Technology Development Llc | Cloud voice over internet protocol communication substitute for channel radio based communication |
WO2015116914A1 (en) * | 2014-01-31 | 2015-08-06 | KeepTree, Inc. | System and method for delivery of a video content item in emergency situations |
WO2015164073A1 (en) * | 2014-04-24 | 2015-10-29 | Vivint, Inc. | Saving video clips on a storage of limited size based on priority |
US10999372B2 (en) | 2014-04-24 | 2021-05-04 | Vivint, Inc. | Saving video clips on a storage of limited size based on priority |
US10425479B2 (en) | 2014-04-24 | 2019-09-24 | Vivint, Inc. | Saving video clips on a storage of limited size based on priority |
US20160105731A1 (en) * | 2014-05-21 | 2016-04-14 | Iccode, Inc. | Systems and methods for identifying and acquiring information regarding remotely displayed video content |
US20160014175A1 (en) * | 2014-07-08 | 2016-01-14 | Microsoft Corporation | Stream processing utilizing virtual processing agents |
US10554709B2 (en) * | 2014-07-08 | 2020-02-04 | Microsoft Technology Licensing, Llc | Stream processing utilizing virtual processing agents |
US9886633B2 (en) | 2015-02-23 | 2018-02-06 | Vivint, Inc. | Techniques for identifying and indexing distinguishing features in a video feed |
US10963701B2 (en) | 2015-02-23 | 2021-03-30 | Vivint, Inc. | Techniques for identifying and indexing distinguishing features in a video feed |
WO2016137635A1 (en) * | 2015-02-23 | 2016-09-01 | Vivint, Inc. | Techniques for identifying and indexing distinguishing features in a video feed |
US9781349B2 (en) | 2016-01-05 | 2017-10-03 | 360fly, Inc. | Dynamic field of view adjustment for panoramic video content |
WO2017120305A1 (en) * | 2016-01-05 | 2017-07-13 | 360fly, Inc. | Dynamic field of view adjustment for panoramic video content |
WO2017120224A1 (en) * | 2016-01-05 | 2017-07-13 | 360fly, Inc. | Automated processing of panoramic video content |
WO2018022507A1 (en) * | 2016-07-25 | 2018-02-01 | Facebook, Inc. | Presentation of content items synchronized with media display |
US10643264B2 (en) | 2016-07-25 | 2020-05-05 | Facebook, Inc. | Method and computer readable medium for presentation of content items synchronized with media display |
CN111279098A (en) * | 2017-09-28 | 2020-06-12 | 密歇根大学董事会 | Multi-mode power-split hybrid transmission with two planetary gear mechanisms |
Also Published As
Publication number | Publication date |
---|---|
US8204273B2 (en) | 2012-06-19 |
US20090141939A1 (en) | 2009-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8204273B2 (en) | Systems and methods for analysis of video content, event notification, and video content provision | |
US9208667B2 (en) | Apparatus and methods for encoding an image with different levels of encoding | |
US9058738B1 (en) | Doorbell communication systems and methods | |
US9094584B2 (en) | Doorbell communication systems and methods | |
US20180027260A1 (en) | Video analytics with pre-processing at the source end | |
US8872915B1 (en) | Doorbell communication systems and methods | |
US9065987B2 (en) | Doorbell communication systems and methods | |
US10514837B1 (en) | Systems and methods for security data analysis and display | |
US20110261202A1 (en) | Method and System for an Integrated Safe City Environment including E-City Support | |
US11102027B2 (en) | Doorbell communication systems and methods | |
US20140071273A1 (en) | Recognition Based Security | |
US20120194676A1 (en) | Video analytics method and system | |
CA2716705A1 (en) | Broker mediated video analytics method and system | |
US11741825B2 (en) | Digital video alarm temporal monitoring computer system | |
US20190370559A1 (en) | Auto-segmentation with rule assignment | |
US11153637B2 (en) | Sharing video footage from audio/video recording and communication devices to smart TV devices | |
US20220368556A1 (en) | Doorbell communication systems and methods | |
US20090153660A1 (en) | Surveillance system and method including active alert function | |
EP2229777A1 (en) | Systems and methods for analysis of video content, event notification, and video content provision | |
US11765324B1 (en) | Security light-cam with cloud-based video management system | |
US20230419801A1 (en) | Event detection, event notification, data retrieval, and associated devices, systems, and methods | |
Lee et al. | Reuse Your Old Smartphone: Automatic Surveillance Camera Application | |
Akoma et al. | Intelligent video surveillance system | |
Ashade | Accessing the Application of Mobile Video Surveillance Systems: Via Network Closed Circuit Television (CCTV) Cameras | |
FR2850830A3 (en) | Audio/video interactive system for e.g. civil security, has central server to wirelessly transmit information through Internet, and assure visual and audio signal transmissions by centralized zone |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CERNIUM CORPORATION, VIRGINIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHAMBERS, CRAIG A.;GAGVANI, NIKHIL;ROBERTSON, PHILIP;AND OTHERS;REEL/FRAME:028436/0734 Effective date: 20081125 |
|
AS | Assignment |
Owner name: CHECKVIDEO LLC, VIRGINIA Free format text: BILL OF SALE, ASSIGNMENT AND ASSUMPTION AGREEMENT;ASSIGNOR:CERNIUM CORPORATION;REEL/FRAME:030378/0597 Effective date: 20130415 |
|
AS | Assignment |
Owner name: CAPITALSOURCE BANK, MARYLAND Free format text: SECURITY AGREEMENT;ASSIGNORS:KASTLE SYSTEMS INTERNATIONAL LLC;CHECKVIDEO LLC;REEL/FRAME:030743/0501 Effective date: 20130618 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: CHECKVIDEO LLC, VIRGINIA Free format text: CONFIRMATORY ASSIGNMENT;ASSIGNOR:CERNIUM CORPORATION;REEL/FRAME:033793/0471 Effective date: 20130618 |