WO2001041428A1 - Personality-based intelligent camera system - Google Patents

Personality-based intelligent camera system

Info

Publication number
WO2001041428A1
WO2001041428A1 PCT/EP2000/011296 EP0011296W WO0141428A1 WO 2001041428 A1 WO2001041428 A1 WO 2001041428A1 EP 0011296 W EP0011296 W EP 0011296W WO 0141428 A1 WO0141428 A1 WO 0141428A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
features
representation
camera system
controllable
Prior art date
Application number
PCT/EP2000/011296
Other languages
French (fr)
Inventor
Daniel Pelletier
Damian M. Lyons
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to EP00983139A priority Critical patent/EP1157545A1/en
Priority to JP2001541236A priority patent/JP2003516049A/en
Publication of WO2001041428A1 publication Critical patent/WO2001041428A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the present invention relates generally to the field of video signal processing, and more particularly to camera systems for use in video conferencing and other applications.
  • Pan-tilt-zoom (PTZ) cameras suitable for tracking a person or other object of interest are an important aspect of many video-camera-based systems such as video conferencing systems and video surveillance systems.
  • PTZ pan-tilt-zoom
  • video conferencing systems it is often desirable to frame the head and shoulders of a particular conference participant in the resultant output video signal
  • video surveillance system it may be desirable to frame the entire body of, e.g., a person entering or leaving a restricted area monitored by the system.
  • Intelligent camera systems that can be controlled by the content of the resulting video or audio signals are also known. A significant problem with conventional intelligent camera systems is that such systems generally do not provide an adequate user interface.
  • the invention provides a personality-based camera system that includes an anthropomorphic representation, e.g., a creature-like representation, having controllable facial features associated therewith.
  • an anthropomorphic representation, e.g., a creature-like representation, having controllable facial features associated therewith.
  • the features of the anthropomorphic representation are controlled to provide a user interface with a camera embedded in or otherwise associated with the representation.
  • the features are controlled such that particular configurations of the features are indicative of an operating state of the camera system.
  • the camera may be, e.g., a zoom camera embedded in an eye or other feature of the representation.
  • the representation includes a head and a body
  • the controllable features are facial features, e.g., a controllable pair of eyebrows, a controllable pair of eyelids, and a controllable mouth.
  • the controllable features may be implemented using, e.g., mechanical motor controls or screen-based animation controls.
  • Pan and tilt controls are also provided for the head of the anthropomorphic representation, such that the embedded camera has pan, tilt and zoom capability.
  • the controllable features of the anthropomorphic representation are adjusted in accordance with an interaction protocol which specifies particular configurations of the features for corresponding operating states of the camera system.
  • the camera associated with the anthropomorphic representation controls and negotiates the interaction with the user.
  • Additional cameras which cover activity in the corresponding room, e.g., for video presence applications, may be configured to operate independently of the anthropomorphic camera, thereby allowing the functions of user interface and content coverage to be separated.
  • the invention controls the features of the anthropomorphic representation in an intelligent manner so as to facilitate human interaction with the personality-based camera system.
  • the anthropomorphic representation provides a user interface that is natural and easy to understand.
  • FIG. 1 is a block diagram of a general implementation of a personality-based intelligent camera system in accordance with an illustrative embodiment of the invention.
  • FIG. 2 is a block diagram of a more specific implementation of the personality-based intelligent camera system of FIG. 1.
  • FIG. 3 is a block diagram of an exemplary alternative implementation of the personality-based camera system of FIG. 1.
  • FIG. 4 illustrates the outward appearance of an example of a personality-based intelligent camera system in accordance with the invention.
  • the present invention provides an improved intelligent camera system in which a camera is embedded in or otherwise associated with a creature-like anthropomorphic representation.
  • the system is configured to provide control of features of the creature-like device so as to facilitate user interaction with the device.
  • the camera system may include the ability to pan, tilt and zoom, and can be used in applications such as home-based video conferencing, video surveillance, input and output for control of appliances or other devices, etc.
  • System software executed by a computer or other processor-based device associated with the camera system is used to determine what actions the camera system takes, as appropriate for a given application.
  • the camera system may include conventional tracking capabilities in order to keep a speaker or other object of interest in the video frame, e.g., using multimodal input to attempt to keep the most interesting activity in the frame at all times.
  • the invention provides a significantly improved user interface that is able to guide users in a natural and intuitive way toward the best way of interacting with the system.
  • the camera system is less threatening to users, and such users are therefore more likely to speak to or otherwise interact with it.
  • a camera system may be given a dog-like personality in accordance with the invention, such that users will be more likely to speak and interact with it as if it were a dog.
  • on/off state may be indicated by eyes shut, camera zoom by eyes squinting or wide open, searching by eyes moving back and forth rapidly, confusion by a rotation of the eyes, comprehension by a head nod, etc.
  • Many other states of expression that map well from known emotions to internal states of the system can also be used.
  • the system may be configured to provide consistency in the sense that once a particular form of interaction is learned by the user, it should work essentially the same way each time. Appropriate feedback may be provided to ensure that the user understands that the system is indeed behaving consistently.
  • FIG. 1 shows a personality-based intelligent camera system 100 in accordance with an illustrative embodiment of the invention.
  • the system 100 includes a zoom camera 102 which is controlled by a camera controller 104.
  • the camera 102 is associated with a particular anthropomorphic representation, e.g., a creature-like representation having certain general features, e.g., a head with pan and tilt capability and having facial features such as brows, a mouth and eyelids.
  • the creature may be human, animal, alien, insect, etc.
  • the camera 102 may be embedded in, mounted on or otherwise affixed to the anthropomorphic representation, e.g., forming at least part of an eye or other element of the representation.
  • the invention allows such features to be controlled in an intelligent manner so as to facilitate human interaction with the personality-based camera system 100.
  • the anthropomorphic representation provides a user interface that is natural and easy to understand.
  • the camera 102 generates an analog video signal that is supplied as an input to an analog-to-digital (A/D) converter 106.
  • the A/D converter 106 converts the analog video signal from the camera 102 to a digital video signal that is suitable for processing by a set of system software 108.
  • the system software 108 generates camera control commands that are supplied to the camera controller 104 and used in generating appropriate camera adjustment signals for application to the camera 102.
  • the system software 108 also generates motor control commands that are applied to a motor controller 110.
  • the motor controller 110 generates motor movement signals for head control motor(s) 112 and facial feature motor(s) 114. These motors 112 and 114 control physical movement of the head and facial features, respectively, of the above-noted anthropomorphic representation associated with the camera 102.
  • FIG. 2 shows a personality-based intelligent camera system 200 that represents a more specific implementation of the FIG. 1 system.
  • the system 200 includes a zoom camera 202, a camera controller 204, an A/D converter 206, system software 208 and a motor controller 210.
  • the camera 202 generates an analog video signal that is supplied to A/D converter 206 for conversion to a digital video signal.
  • the digital video signal is supplied to system software 208, which generates appropriate camera control commands for application to camera controller 204.
  • the system software 208 also generates motor control commands that are applied to motor controller 210.
  • the zoom camera 202 is embedded in a head 215 of a particular anthropomorphic representation, e.g., a human-like, animal-like or other creature-like representation.
  • the camera may be a part of an eye or other element of the anthropomorphic representation.
  • Associated with the head 215 are a number of facial features, including a pair of eyebrows 221, 222, a pair of eyelids 223, 224, and a mouth 225.
  • the head 215 pans and tilts in the directions shown.
  • the eyebrows 221, 222 tilt and move up or down in the direction of the arrows.
  • the eyelids 223, 224 open and close, and the mouth has a single degree of freedom, allowing the center of the lips to rise in a frown, or to drop in a smile.
  • the particular state of a given set of facial features is indicative of a particular state of the camera system. For example, eyelids shut could represent the system being in an off state, eyebrows up could represent a request for input from a user, etc. It should be emphasized that this particular arrangement of features is exemplary only, and numerous other arrangements could of course be used.
  • the motor controller 210 supplies motor movement signals to a set of six motors 230, 232, 234, 236, 238 and 240 to control the operation of the head 215 and its associated facial features.
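As a rough illustration of how a motor controller might map feature targets onto this set of six motors, consider the sketch below. The allocation of motors to features and all function names are assumptions; the patent does not specify which motor drives which feature.

```python
# Hypothetical allocation of the six motors of FIG. 2 to the head and
# facial features; the patent does not assign motors to features, so
# this mapping is illustrative only.
MOTOR_MAP = {
    "head_pan":   230,
    "head_tilt":  232,
    "brow_left":  234,
    "brow_right": 236,
    "eyelids":    238,
    "mouth":      240,
}

def motor_commands(feature_targets):
    """Translate per-feature targets (e.g. degrees) into per-motor commands."""
    return {MOTOR_MAP[name]: value for name, value in feature_targets.items()}

# e.g. pan the head and drop the center of the mouth into a smile
cmds = motor_commands({"head_pan": 15.0, "mouth": -10.0})
```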
  • FIG. 3 shows a personality-based intelligent camera system 300 that represents a possible alternative implementation of the FIG. 1 system.
  • the system 300 includes a zoom camera 302, a camera controller 304, an A/D converter 306, system software 308 and a motor controller 310.
  • the camera 302 generates an analog video signal that is supplied to A/D converter 306 for conversion to a digital video signal.
  • the digital video signal is supplied to system software 308, which generates appropriate camera control commands for application to camera controller 304.
  • the system software 308 also generates motor control commands that are applied to motor controller 310.
  • the motor controller 310 in this embodiment provides motor movement signals to head control motor(s) 312 that controls the physical movement, e.g., pan and tilt, of a head 316 of a particular anthropomorphic representation.
  • the camera 302 may be embedded in or otherwise associated with the head 316 or other suitable portion of this anthropomorphic representation.
  • the anthropomorphic representation includes a liquid crystal display (LCD) screen 318 for providing facial feature animation.
  • the screen 318 may be mounted on or otherwise associated with the head 316, and may be configured to provide feature animation similar to that provided by the mechanical motor-based controls described in conjunction with system 200 of FIG. 2.
  • the system software 308 provides screen graphics commands to an LCD screen driver 314, and the driver 314 generates corresponding LCD pixel control signals for providing the desired animation on the screen 318.
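A toy sketch of this screen-based path follows: the system software issues a high-level graphics command for the current state, and the driver expands it into per-feature draw calls. The command vocabulary and feature values here are assumptions, not taken from the patent.

```python
# Illustrative sketch of the FIG. 3 screen path. The system software
# (308) chooses a high-level graphics command, and the LCD driver (314)
# expands it into per-feature draw calls. All names/values are assumed.

def screen_graphics_command(state):
    """Map a system state to one high-level graphics command (0..1 values)."""
    commands = {
        "off":       {"eyelids": 0.0, "brows": 0.0, "mouth": 0.0},
        "attentive": {"eyelids": 1.0, "brows": 0.6, "mouth": 0.3},
    }
    return commands[state]

def lcd_driver_expand(cmd):
    """Expand a graphics command into an ordered list of (feature, value) draw calls."""
    return sorted(cmd.items())

calls = lcd_driver_expand(screen_graphics_command("attentive"))
```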
  • FIG. 4 shows an example of the outward appearance of a personality-based intelligent camera system 400 in accordance with an illustrative embodiment of the invention.
  • the outward appearance of the system 400 is an anthropomorphic representation of a human, animal or other creature having a head 415 and a body 417.
  • Associated with the head 415 are a number of facial features, including eyebrows 421, 422, eyelids 423, 424 and a mouth 425.
  • the head 415 and associated facial features may be controlled using, e.g., the motor-based control techniques described in conjunction with FIG. 2, the screen-based control techniques described in conjunction with FIG. 3, or a combination of these and other control techniques.
  • a zoom camera, such as camera 102, 202 or 302 in the previously-described embodiments, may be embedded, mounted or otherwise installed in any desired location on the anthropomorphic representation, e.g., as part of an eye or other feature of the head 415, on the body 417, or in any other location. For this example, it will be assumed that the zoom camera is embedded in one of the eyes of the head 415.
  • the system 400 also includes a head-mounted indicator 430, and a pair of side-mounted cameras 432, 434 with corresponding indicators 436, 438.
  • the side-mounted cameras 432, 434 are also referred to herein as "video presence" cameras.
  • Video presence expands the concept of videoconferencing.
  • Traditional videoconferencing generally suggests a communication-based interaction where one set of participants is trying to talk to another set of participants across an audio/video link.
  • Video presence expands that concept to include any activity that uses the audio/video link to try to create a sense of "being present" at a remote location. This can include such activities as a party where half the participants are at one end of the link and half at the other.
  • video presence concept as used herein can include any other type of activity, normally thought to be outside the definition of video conferencing, for which the audio/video link could be used.
  • each of the indicators 430, 436 and 438 may be, e.g., a suitable light source that is configured to provide indicator information to a user.
  • indicator lights may be illuminated or extinguished based on particular states of the system 400, e.g., indicators 430, 436 and 438 may be illuminated when the corresponding head-based and side-mounted cameras, respectively, are operating, and extinguished when these cameras are off.
  • the FIG. 4 system thus includes multiple cameras, with the creature-like anthropomorphic representation configured with an embedded camera in its head and two mounted cameras that appear to be "held" by the creature-like representation.
  • the embedded camera functions as the personality-based camera
  • the other cameras function as conventional tracking cameras for following speakers or other objects of interest.
  • scenes can be shot in a quasi-professional manner, e.g., one camera zooms in for a tight shot of the speaker, while another camera gets a wide, establishing shot of the room.
  • the close-up camera can be moved, allowing for sophisticated direction techniques that mimic standard live television direction models.
  • the system 400 may include appropriate audio, speech or other processing and generating circuitry, for providing audio, speech or other audibly-perceptible information to the user.
  • a conventional sound module may be incorporated into the camera system in order to allow for the generation of acknowledging chirps, squeaks and other noises.
  • the following is an exemplary interaction protocol that may be implemented in the above-described camera system 400 of FIG. 4 or other personality-based intelligent camera system in accordance with the invention. It should be noted that this protocol is designed for use in a video presence or videoconferencing-like application.
  • the protocol assumes that a single camera is embedded in an eye of the creature-like representation, and two video presence cameras are mounted on either side, as illustrated in FIG. 4.
  • the video presence cameras are assumed to include shutters which can open and close over the camera lenses.
  • the camera system interacts with a set-top box which includes a receiver and an LCD display, and a capability for establishing a network connection, e.g., a telephone or Internet connection, for sending and receiving communications with remote devices.
  • the set-top box receiver's LCD display may be used to show textual information that indicates whether video is being recorded, e.g., to a VCR or other storage device, or transmitted to a remote location.
  • Audio too low or gesture unrecognized: the system shows a confused look and the eyes narrow in a squint. 6. Dialing. When the system is dialing, it displays a waiting behavior: looking up left, then down center, then up right, blinking its eyes when down and center and raising its eyebrows. The movement is preferably minimal, but regular, like the ticking of a clock, and may be timed to occur every second on the second.
  • When the automatic tracking system has detected a pattern in a conversation that allows the system to predict where to turn the embedded camera next, the embedded camera turns in that direction before the video presence cameras. This allows for a person off camera to respond to one who is on camera. As soon as the first response occurs, the embedded camera turns to indicate acknowledgement of the response. If the response continues and the second speaker becomes the primary speaker, then the video presence cameras follow shortly thereafter. If the response was just a confirming utterance, the embedded camera returns to the main speaker.
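This turn-taking rule — the embedded camera acknowledges a responder immediately, while the video presence cameras follow only if the responder persists as the primary speaker — can be sketched as below. The two-second threshold and all names are assumptions, not from the patent.

```python
# Sketch of the turn-taking rule from the interaction protocol. The
# embedded (personality) camera turns toward a responder at once; the
# video presence cameras follow only once the response has persisted.
# FOLLOW_THRESHOLD_S is an assumed value, not specified in the patent.
FOLLOW_THRESHOLD_S = 2.0

def update_targets(current_speaker, responder, response_duration_s):
    """Return (embedded_camera_target, presence_camera_target)."""
    if responder is None:
        # No response: both camera types stay on the main speaker.
        return current_speaker, current_speaker
    # The embedded camera acknowledges the response immediately.
    embedded = responder
    if response_duration_s >= FOLLOW_THRESHOLD_S:
        # The responder has become the primary speaker.
        presence = responder
    else:
        # So far just a confirming utterance; presence cameras hold.
        presence = current_speaker
    return embedded, presence
```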
  • While the video presence cameras may be turned to one side of the room, the device may be receiving commands from a user on the other side of the room.
  • the embedded camera is turned to the user in control of the system, and this direction indicates who is in control of the system.
  • Overload. If, for some reason, the camera system is failing to understand activity in the room, e.g., too much movement, too much dispersed audio, etc., the system can indicate that the user should take control of the camera. In this case, the head "spins" (pans and tilts in coordination) and the eyebrows move up and down rapidly.
  • Manual Mode In manual mode, the system shows that it is paying close attention to user commands.
  • the embedded camera focus of attention is turned toward the controlling user.
  • the brows raise slightly, showing interest and attention, and a slight smile is generated.
  • a special concentrating noise is generated.
  • Pointing One of the capabilities of manual mode is to allow the user to point to something in the room and instruct the camera to follow the pointing.
  • the device turns on a bright, focused red light in the head and turns the head toward the suspected pointing location, showing the user controlling the system where the device thinks the camera should be aimed. If this is not correct, the user can then adjust the point, using the feedback to adjust the location.
  • the red light turns off and the device gives a slight head nod to show that this is the place the camera will remain.
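The pointing feedback loop above might be sketched as follows; the event names and the (pan, tilt) representation are assumptions made for illustration.

```python
# Sketch of the pointing-mode feedback loop: the device aims at its
# initial guess with the red light on, applies the user's corrective
# nudges, then switches the light off and nods to confirm. Event names
# and the (pan, tilt) aim representation are assumptions.

def pointing_session(initial_guess, adjustments):
    """Apply user (d_pan, d_tilt) adjustments to the initial aim.

    Returns the final (pan, tilt) aim and the ordered feedback events.
    """
    pan, tilt = initial_guess
    events = ["red_light_on"]          # show the user where the device thinks
    for d_pan, d_tilt in adjustments:  # user nudges the aim using the feedback
        pan += d_pan
        tilt += d_tilt
    events += ["red_light_off", "head_nod"]  # confirm the final aim
    return (pan, tilt), events

aim, events = pointing_session((30.0, 0.0), [(-5.0, 2.0), (1.0, 0.0)])
```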
  • the exemplary systems 100, 200, 300 and 400 described above can be used in a wide variety of other applications.
  • these systems can be used in video presence and video surveillance applications.
  • the invention can be used with image capture devices other than a zoom camera, including, e.g., pan-tilt-zoom (PTZ) cameras.
  • PTZ pan-tilt-zoom
  • the term "camera” as used herein is therefore intended to include any type of image capture device which can be used in conjunction with a personality-based system.
  • elements or groups of elements of the exemplary camera systems described above may represent corresponding elements of an otherwise conventional computer, set-top box, etc., as well as portions or combinations of these and other processing devices.
  • some or all of the functions of individual elements of the systems 100, 200 and 300 may be combined into a single device.
  • one or more of these system elements may be implemented as an application specific integrated circuit (ASIC) or circuit card to be incorporated into a computer, television, set-top box or other processing device.
  • ASIC application specific integrated circuit
  • the system software may be configured to be executed by a microprocessor, central processing unit, microcontroller or any other data processing element that may be implemented in the system.
  • the systems 100, 200 and 300 may be configured to include an electronic memory, an optical or magnetic disk-based memory, a tape-based memory, as well as combinations or portions of these and other types of storage devices, for use in storage and execution of the corresponding system software.
  • the computer or other processing device used for executing the system software may be co-located with the anthropomorphic representation and associated camera, or remotely located, with signals passed between the camera, motor controller, etc. and the processing device via conventional remote control arrangements.
  • the above-described embodiments of the invention are intended to be illustrative only.
  • the invention can be used with a wide variety of different creature-like representations with many different combinations of controlled features.
  • the invention is also applicable to systems with multiple cameras, systems with PTZ cameras, and systems with other types and arrangements of image capture devices.
  • the invention can utilize many different types of techniques to detect and track an object of interest, and to provide feedback to users.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Studio Devices (AREA)

Abstract

A camera system includes a camera embedded in or otherwise associated with a creature-like anthropomorphic representation having controllable features. The features are controlled such that particular configurations of the features are indicative of an operating state of the camera system. The camera may be, e.g., a zoom camera embedded in an eye or other feature of the representation. In an illustrative embodiment, the representation includes a head and a body, and the controllable features are facial features, e.g., a controllable pair of eyebrows, a controllable pair of eyelids, and a controllable mouth. The controllable features may be implemented using, e.g., mechanical motor controls or screen-based animation controls. Pan and tilt controls are also provided for the head of the anthropomorphic representation, such that the embedded camera has pan, tilt and zoom capability. The system may include additional cameras, e.g., video presence cameras, so as to provide a variety of different camera angles. The controllable features of the anthropomorphic representation are adjusted in accordance with an interaction protocol which specifies particular configurations of the features for corresponding operating states of the camera system, thereby providing a natural and easy-to-understand user interface.

Description

Personality-based intelligent camera system.
Field of the Invention
The present invention relates generally to the field of video signal processing, and more particularly to camera systems for use in video conferencing and other applications.
Background of the Invention
Camera systems that include tracking capabilities are well known. For example, pan-tilt-zoom (PTZ) cameras suitable for tracking a person or other object of interest are an important aspect of many video-camera-based systems such as video conferencing systems and video surveillance systems. For example, in a video conferencing system, it is often desirable to frame the head and shoulders of a particular conference participant in the resultant output video signal, while in a video surveillance system, it may be desirable to frame the entire body of, e.g., a person entering or leaving a restricted area monitored by the system. Intelligent camera systems that can be controlled by the content of the resulting video or audio signals are also known. A significant problem with conventional intelligent camera systems is that such systems generally do not provide an adequate user interface. For example, conventional systems generally do not encourage user interaction, nor do they mitigate the stress-inducing effect of a moving camera "eye" watching the user, nor do they provide a natural intuitive interface for suggesting various operating states of the system and appropriate user input. This problem is particularly apparent in home-based applications, such as home-based video conferencing, video surveillance, input and output for control of appliances or other devices, etc. Many users that come into contact with a camera system in such an environment are often not sufficiently familiar with its operation to interact with it in a meaningful manner. A need therefore exists for an improved camera system which incorporates a more natural and easy-to-understand user interface than that provided by conventional systems, so as to facilitate human interaction with the system in applications such as home-based video conferencing, video surveillance, control of devices, or any other application in which an intelligent camera system can be implemented.
Summary of the Invention
The invention provides a personality-based camera system that includes an anthropomorphic representation, e.g., a creature-like representation, having controllable facial features associated therewith. In accordance with the invention, the features of the anthropomorphic representation are controlled to provide a user interface with a camera embedded in or otherwise associated with the representation. The features are controlled such that particular configurations of the features are indicative of an operating state of the camera system. The camera may be, e.g., a zoom camera embedded in an eye or other feature of the representation. In an illustrative embodiment of the invention, the representation includes a head and a body, and the controllable features are facial features, e.g., a controllable pair of eyebrows, a controllable pair of eyelids, and a controllable mouth. The controllable features may be implemented using, e.g., mechanical motor controls or screen-based animation controls. Pan and tilt controls are also provided for the head of the anthropomorphic representation, such that the embedded camera has pan, tilt and zoom capability. The controllable features of the anthropomorphic representation are adjusted in accordance with an interaction protocol which specifies particular configurations of the features for corresponding operating states of the camera system.
In accordance with the invention, the camera associated with the anthropomorphic representation controls and negotiates the interaction with the user.
Additional cameras, which cover activity in the corresponding room, e.g., for video presence applications, may be configured to operate independently of the anthropomorphic camera, thereby allowing the functions of user interface and content coverage to be separated.
Advantageously, the invention controls the features of the anthropomorphic representation in an intelligent manner so as to facilitate human interaction with the personality-based camera system. The anthropomorphic representation provides a user interface that is natural and easy to understand. These and other features and advantages of the present invention will become more apparent from the accompanying drawings and the following detailed description.
Brief Description of the Drawings
FIG. 1 is a block diagram of a general implementation of a personality-based intelligent camera system in accordance with an illustrative embodiment of the invention. FIG. 2 is a block diagram of a more specific implementation of the personality-based intelligent camera system of FIG. 1.
FIG. 3 is a block diagram of an exemplary alternative implementation of the personality-based camera system of FIG. 1. FIG. 4 illustrates the outward appearance of an example of a personality-based intelligent camera system in accordance with the invention.
Detailed Description of the Invention
The present invention provides an improved intelligent camera system in which a camera is embedded in or otherwise associated with a creature-like anthropomorphic representation. The system is configured to provide control of features of the creature-like device so as to facilitate user interaction with the device. The camera system may include the ability to pan, tilt and zoom, and can be used in applications such as home-based video conferencing, video surveillance, input and output for control of appliances or other devices, etc. System software executed by a computer or other processor-based device associated with the camera system is used to determine what actions the camera system takes, as appropriate for a given application. The camera system may include conventional tracking capabilities in order to keep a speaker or other object of interest in the video frame, e.g., using multimodal input to attempt to keep the most interesting activity in the frame at all times.
By implementing a camera system with a creature-like personality, the invention provides a significantly improved user interface that is able to guide users in a natural and intuitive way toward the best way of interacting with the system. As a result, the camera system is less threatening to users, and such users are therefore more likely to speak to or otherwise interact with it. For example, a camera system may be given a dog-like personality in accordance with the invention, such that users will be more likely to speak and interact with it as if it were a dog. More specifically, they would generally be likely to use well-defined commands, separated from normal speech, to call it by name before expecting it to do something, to use broad, easily-recognizable gestures when interacting with it, to allow for mistakes and accommodate for system failings, and would enjoy it and understand its operation easily and with a sense of fun. By using simple speech or other utterances, e.g., "uh-huh," barking sounds, etc., the system could indicate acknowledgment of commands and instructions. This tends to even further reduce user stress in interacting with the camera system. In addition, by using common creature-like facial feature expressions to show internal states of the camera system, the system could intuitively express complicated user interface issues. For example, on/off state may be indicated by eyes shut, camera zoom by eyes squinting or wide open, searching by eyes moving back and forth rapidly, confusion by a rotation of the eyes, comprehension by a head nod, etc. Many other states of expression that map well from known emotions to internal states of the system can also be used. The system may be configured to provide consistency in the sense that once a particular form of interaction is learned by the user, it should work essentially the same way each time. 
Appropriate feedback may be provided to ensure that the user understands that the system is indeed behaving consistently.
FIG. 1 shows a personality-based intelligent camera system 100 in accordance with an illustrative embodiment of the invention. The system 100 includes a zoom camera 102 which is controlled by a camera controller 104. As will be described in greater detail in conjunction with FIGS. 2 and 4, the camera 102 is associated with a particular anthropomorphic representation, e.g., a creature-like representation having certain general features, e.g., a head with pan and tilt capability and having facial features such as brows, a mouth and eyelids. The creature may be human, animal, alien, insect, etc. More specifically, the camera 102 may be embedded in, mounted on or otherwise affixed to the anthropomorphic representation, e.g., forming at least part of an eye or other element of the representation.
Advantageously, the invention allows such features to be controlled in an intelligent manner so as to facilitate human interaction with the personality-based camera system 100. As noted previously, the anthropomorphic representation provides a user interface that is natural and easy to understand. The camera 102 generates an analog video signal that is supplied as an input to an analog-to-digital (A/D) converter 106. The A/D converter 106 converts the analog video signal from the camera 102 to a digital video signal that is suitable for processing by a set of system software 108. The system software 108 generates camera control commands that are supplied to the camera controller 104 and used in generating appropriate camera adjustment signals for application to the camera 102. The system software 108 also generates motor control commands that are applied to a motor controller 110. The motor controller 110 generates motor movement signals for head control motor(s) 112 and facial feature motor(s) 114. These motors 112 and 114 control physical movement of the head and facial features, respectively, of the above-noted anthropomorphic representation associated with the camera 102.
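The signal flow just described can be sketched as a simple control loop. The sketch below is illustrative only; the class, function and field names are hypothetical, and the frame analysis is a toy stand-in for whatever processing the system software performs.

```python
# Hypothetical sketch of the FIG. 1 signal flow: digitized video frames go to
# the system software, which derives camera control commands (for the camera
# controller) and motor control commands (for the motor controller).

class SystemSoftware:
    def process_frame(self, frame):
        """Analyze one digitized frame (a list of pixel rows) and derive commands."""
        # Toy analysis: pan the head toward the brightest column of the frame.
        width = len(frame[0])
        brightest = max(range(width), key=lambda c: sum(row[c] for row in frame))
        camera_command = {"zoom": 1.0}                        # -> camera adjustment
        motor_command = {"head_pan": brightest - width // 2}  # -> motor movement
        return camera_command, motor_command

def control_step(software, frame, apply_camera, apply_motor):
    """One pass of the loop: frame in, camera and motor adjustment signals out."""
    camera_command, motor_command = software.process_frame(frame)
    apply_camera(camera_command)
    apply_motor(motor_command)
    return camera_command, motor_command
```

In a real implementation the `apply_camera` and `apply_motor` callbacks would stand in for the camera controller 104 and motor controller 110, respectively.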
FIG. 2 shows a personality-based intelligent camera system 200 that represents a more specific implementation of the FIG. 1 system. The system 200 includes a zoom camera 202, a camera controller 204, an A/D converter 206, system software 208 and a motor controller 210. As in the FIG. 1 system, the camera 202 generates an analog video signal that is supplied to A/D converter 206 for conversion to a digital video signal. The digital video signal is supplied to system software 208, which generates appropriate camera control commands for application to camera controller 204. The system software 208 also generates motor control commands that are applied to motor controller 210.
In the system 200, the zoom camera 202 is embedded in a head 215 of a particular anthropomorphic representation, e.g., a human-like, animal-like or other creature-like representation. For example, the camera may be a part of an eye or other element of the anthropomorphic representation. As shown in FIG. 2, associated with the head 215 are a number of facial features including a pair of eyebrows 221, 222, a pair of eyelids 223, 224, and a mouth 225. The head 215 pans and tilts in the directions shown. The eyebrows 221, 222 tilt and move up or down in the direction of the arrows. The eyelids 223, 224 open and close, and the mouth has a single degree of freedom, allowing the center of the lips to rise in a frown, or to drop in a smile. Advantageously, the particular state of a given set of facial features is indicative of a particular state of the camera system. For example, eyelids shut could represent the system being in an off state, eyebrows up could represent a request for input from a user, etc. It should be emphasized that this particular arrangement of features is exemplary only, and numerous other arrangements could of course be used. The motor controller 210 supplies motor movement signals to a set of six motors 230, 232, 234, 236, 238 and 240 to control the operation of the head 215 and its associated facial features. More specifically, motor movement signals applied by motor controller 210 to head pan motor 230 and head tilt motor 232 control the pan and tilt, respectively, of the head 215. Motor movement signals applied by motor controller 210 to eyebrow tilt motor 234 and eyebrow height motor 236 control the tilt and up/down motion, respectively, of the pair of eyebrows 221, 222. Similarly, motor movement signals applied by motor controller 210 to eyelid up/down motor 238 control the opening and closing of the eyelids 223, 224, and motor movement signals applied by motor controller 210 to mouth motor 240 control the operation of the mouth 225.
FIG. 3 shows a personality-based intelligent camera system 300 that represents a possible alternative implementation of the FIG. 1 system. The system 300 includes a zoom camera 302, a camera controller 304, an A/D converter 306, system software 308 and a motor controller 310. As in the systems of FIGS. 1 and 2, the camera 302 generates an analog video signal that is supplied to A/D converter 306 for conversion to a digital video signal. The digital video signal is supplied to system software 308, which generates appropriate camera control commands for application to camera controller 304. The system software 308 also generates motor control commands that are applied to motor controller 310. The motor controller 310 in this embodiment provides motor movement signals to head control motor(s) 312 that control the physical movement, e.g., pan and tilt, of a head 316 of a particular anthropomorphic representation. The camera 302 may be embedded in or otherwise associated with the head 316 or other suitable portion of this anthropomorphic representation. In the system 300, the anthropomorphic representation includes a liquid crystal display (LCD) screen 318 for providing facial feature animation. For example, the screen 318 may be mounted on or otherwise associated with the head 316, and may be configured to provide feature animation similar to that provided by the mechanical motor-based controls described in conjunction with system 200 of FIG. 2. The system software 308 provides screen graphics commands to an LCD screen driver 314, and the driver 314 generates corresponding LCD pixel control signals for providing the desired animation on the screen 318.
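The notion that a configuration of facial features encodes an operating state can be sketched as a simple lookup table. Only the "off" (eyelids shut) and "request input" (eyebrows up) rows, and the 7/8-open neutral eyelid position, come from examples in the text; the other entries and all names are invented for illustration.

```python
# Hypothetical state-to-feature mapping. Positions are normalized: 0.0 is
# fully closed/down, 1.0 fully open/up; a "mouth" value above 0.5 reads as
# a smile, below 0.5 as a frown.

NEUTRAL = {"eyelids": 0.875, "eyebrow_height": 0.5, "mouth": 0.5}  # eyelids 7/8 open

STATE_FEATURES = {
    "off":           {"eyelids": 0.0,   "eyebrow_height": 0.5, "mouth": 0.5},
    "request_input": {"eyelids": 1.0,   "eyebrow_height": 1.0, "mouth": 0.5},
    "connected":     {"eyelids": 0.875, "eyebrow_height": 0.5, "mouth": 0.8},  # smile
}

def feature_targets(state):
    """Return motor targets for an operating state; neutral if the state is unknown."""
    return STATE_FEATURES.get(state, NEUTRAL)
```

The same table could equally drive the motor controller of FIG. 2 or the screen driver of FIG. 3, since both render the same abstract feature positions.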
FIG. 4 shows an example of the outward appearance of a personality-based intelligent camera system 400 in accordance with an illustrative embodiment of the invention. The outward appearance of the system 400 is an anthropomorphic representation of a human, animal or other creature having a head 415 and a body 417. Associated with the head 415 are a number of facial features, including eyebrows 421, 422, eyelids 423, 424 and a mouth 425. The head 415 and associated facial features may be controlled using, e.g., the motor-based control techniques described in conjunction with FIG. 2, the screen-based control techniques described in conjunction with FIG. 3, or a combination of these and other control techniques. A zoom camera, such as camera 102, 202 or 302 in the previously-described embodiments, may be embedded, mounted or otherwise installed in any desired location on the anthropomorphic representation, e.g., as part of an eye or other feature of the head 415, on the body 417, or in any other location. For this example, it will be assumed that the zoom camera is embedded in one of the eyes of the head 415. The system 400 also includes a head-mounted indicator 430, and a pair of side-mounted cameras 432, 434 with corresponding indicators 436, 438. The side-mounted cameras 432, 434 are also referred to herein as "video presence" cameras.
The term "video presence" as used herein expands the concept of videoconferencing. Traditional videoconferencing generally suggests a communication-based interaction where one set of participants is trying to talk to another set of participants across an audio/video link. Video presence expands that concept to include any activity that uses the audio/video link to try to create a sense of "being present" at a remote location. This can include such activities as:
1. A party where half the participants are at one end of the link and half at the other.
2. An evening together, where participants see and hear each other, but might not be actively engaged in conversation.
3. Situations in which communication is less speech-focused, e.g., showing a family member a new baby, a new piece of furniture, etc.
4. A shared experience, e.g., where one participant is bringing another "along" on a trip.
5. Shared television watching, where each participant gets to see and hear the reactions of other participants to a shared broadcast event.
It should be noted that these are examples only, and the "video presence" concept as used herein can include any other type of activity, normally thought to be outside the definition of video conferencing, for which the audio/video link could be used.
Each of the indicators 430, 436 and 438 may include, e.g., a suitable light source configured to provide indicator information to a user. For example, indicator lights may be illuminated or extinguished based on particular states of the system 400, e.g., indicators 430, 436 and 438 may be illuminated when the corresponding head-mounted and side-mounted cameras are operating, and extinguished when these cameras are off.
The FIG. 4 system thus includes multiple cameras, with the creature-like anthropomorphic representation configured with an embedded camera in its head and two mounted cameras that appear to be "held" by the creature-like representation. In this arrangement, the embedded camera functions as the personality-based camera, and the other cameras function as conventional tracking cameras for following speakers or other objects of interest. For example, by using two completely independent cameras, scenes can be shot in a quasi-professional manner, e.g., one camera zooms in for a tight shot of the speaker, while another camera gets a wide, establishing shot of the room. When the wide shot corresponds to a live feed, the close-up camera can be moved, allowing for sophisticated direction techniques that mimic those of a standard live television director.
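The two-camera direction technique described above can be sketched as a small decision rule: the live feed stays on one camera while the other repositions, and the cut happens only once the standby camera has settled on its shot. The class below is a hypothetical illustration, not part of the disclosure.

```python
# Hypothetical two-camera "director": cut to the standby camera only after it
# has stopped moving, so the transmitted feed never shows camera motion.

class TwoCameraDirector:
    def __init__(self):
        self.live = "wide"  # start on the wide, establishing shot

    def request_cut(self, target, settled):
        """Ask to cut to `target` ("wide" or "tight"). The request is honored
        only if that camera has settled on its shot. Returns the camera now live."""
        if target != self.live and settled:
            self.live = target
        return self.live
```

For example, while the wide shot is live the tight camera can be repositioned freely; a cut request succeeds only once it reports `settled=True`.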
The system 400, as well as the other systems described herein, may include appropriate audio, speech or other processing and generating circuitry for providing audio, speech or other audibly-perceptible information to the user. For example, a conventional sound module may be incorporated into the camera system in order to allow for the generation of acknowledging chirps, squeaks and other noises.
The following is an exemplary interaction protocol that may be implemented in the above-described camera system 400 of FIG. 4 or other personality-based intelligent camera system in accordance with the invention. It should be noted that this protocol is designed for use in a video presence or videoconferencing-like application. The protocol assumes that a single camera is embedded in an eye of the creature-like representation, and two video presence cameras are mounted on either side, as illustrated in FIG. 4. The video presence cameras are assumed to include shutters which can open and close over the camera lenses. It is also assumed for this example that the camera system interacts with a set-top box which includes a receiver and an LCD display, and a capability for establishing a network connection, e.g., a telephone or Internet connection, for sending and receiving communications with remote devices. These assumptions are for purposes of illustration only, and should not be construed as limiting the scope of the invention in any way.
1. Off Position. The system is completely powered down and clearly shows that no video is being transmitted or processed. The eyelids close over the eye camera and shutters close over the two video presence cameras. All indicator lights are off.
2. On/Wake Up. The eyelids open over the eye camera and the head turns to meet the user. The shutters over the two video presence cameras remain closed, showing that no video is being transmitted.
3. Sending Video to Receiver. When live video is being transmitted to the receiver in the set top box, the shutters over the video presence cameras open, and the indicator lights are illuminated. The set top box receiver LCD display may be used to show textual information that indicates whether video is being recorded, e.g., to a VCR or other storage device, or transmitted to a remote location.
4. Acknowledgement. If a command has been heard and understood, the system gives a quick nod, using the head tilt degree of freedom. A confirming chirp sound is generated.
5. Misunderstanding. If a command is misunderstood, the system tries to show the user how to help it understand better. The following are some examples of possible misunderstandings:
(a) Did not recognize command: the system displays a confused expression.
(b) Recognized command, but can't execute now, e.g., "turn right" when already fully right: the head shakes "no" using the pan degree of freedom and a negative beep sound is generated.
(c) Audio too low or gesture unrecognized: the system shows a confused look and the eyes narrow in a squint.
6. Dialing. When the system is dialing, it displays a waiting behavior, looking up left, then down center, then up right, blinking its eyes when down and center and raising its eyebrows. The movement is preferably minimal, but regular, like the ticking of a clock, and may be timed to occur every second on the second.
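The clock-like dialing behavior described above can be sketched as a short cycle of poses advanced once per second. The pose field names below are hypothetical labels for the motor targets; only the sequence itself comes from the text.

```python
# Hypothetical dialing animation: up left, down center (blink), up right,
# down center (blink), advancing one step each second like a ticking clock.

DIAL_CYCLE = [
    {"gaze": "up-left",     "blink": False, "brows": "raised"},
    {"gaze": "down-center", "blink": True,  "brows": "raised"},
    {"gaze": "up-right",    "blink": False, "brows": "raised"},
    {"gaze": "down-center", "blink": True,  "brows": "raised"},
]

def dialing_pose(elapsed_seconds):
    """Return the pose for the current whole second of the dialing animation."""
    return DIAL_CYCLE[int(elapsed_seconds) % len(DIAL_CYCLE)]
```

Driving the motors (or screen) from `dialing_pose(time_since_dialing_started)` yields the minimal, regular movement described, timed on the second.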
7. Connected. The system displays a smile and a successful chirp is issued. The LCD on the set top box now shows the transmitting status.
8. Failure to Connect/Retry/Waiting to Retry/Busy/Unanswered Ring. The system displays a frown and a confused look. The frown is subtle, so as not to appear too angry. The set top box LCD gives text information regarding the source of the problem.
9. Receiving a Call From Outside. An attention-getting sound is generated. While the system is trying to get the user's attention, its eyebrows are raised and its eyelids are wide open.
10. On and Connected. Several states exist when the device is on and connected and transmitting to the set top box. They indicate transitions between automatic and manual modes, focus of attention and others. These are detailed as follows:
11. Automatic Mode. In this state, the system is operating under its own automatic camera control, as opposed to responding to direct commands from the user. In this state the system displays a neutral expression, e.g., eyebrows neither raised nor lowered, eyelids at 7/8 open, mouth neither smiling nor frowning. A low, almost inaudible neutral automatic mode audio signal may be generated. When in automatic mode, the embedded camera is in general aimed in the same direction as the currently transmitting video presence cameras. However, when the automatic tracking system has detected a pattern in a conversation that allows the system to predict where to turn the embedded camera next, the embedded camera turns in that direction before the video presence cameras. This allows for a person off camera to respond to one who is on camera. As soon as the first response occurs, the embedded camera turns to indicate acknowledgement of the response. If the response continues and the second speaker becomes the primary speaker, then the video presence cameras follow shortly thereafter. If the response was just a confirming utterance, the embedded camera returns to the main speaker.
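The automatic-mode anticipation behavior described above can be sketched as a small decision rule. The function name, the direction labels and the three response categories are illustrative simplifications of the conversational-pattern tracking the text describes.

```python
# Hypothetical sketch of automatic-mode anticipation: the embedded
# (personality) camera turns toward a predicted responder first; the video
# presence cameras follow only if that responder becomes the primary speaker.

def update_aims(presence_aim, predicted, response):
    """
    presence_aim: direction the video presence cameras currently face.
    predicted:    direction a detected conversational pattern points to next.
    response:     None (nothing yet), "takeover" (responder becomes primary
                  speaker), or "utterance" (just a confirming remark).
    Returns (embedded_aim, new_presence_aim).
    """
    if response == "takeover":
        return predicted, predicted        # presence cameras follow shortly after
    if response == "utterance":
        return presence_aim, presence_aim  # embedded camera returns to main speaker
    return predicted, presence_aim         # embedded camera leads, anticipating
```

A fuller implementation would derive `predicted` from the tracking system and smooth the camera motions, but the asymmetry between the embedded camera and the presence cameras is the key point.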
12. Focus of Attention. Though the video presence cameras may be turned to one side of the room, the device may be receiving commands from a user on the other side of the room. In this case, the embedded camera is turned to the user in control of the system, and this direction indicates who is in control of the system.
13. Overload. If, for some reason, the camera system is failing to understand activity in the room, e.g., too much movement, too much dispersed audio, etc., the system can indicate that the user should take control of the camera. In this case, the head "spins" (pans and tilts in coordination) and the eyebrows move up and down rapidly.
14. Manual Mode. In manual mode, the system shows that it is paying close attention to user commands. The embedded camera focus of attention is turned toward the controlling user. The brows raise slightly, showing interest and attention, and a slight smile is generated. A special concentrating noise is generated.
15. Pointing. One of the capabilities of manual mode is to allow the user to point to something in the room and instruct the camera to follow the pointing. In this case, the device turns on a bright, focused red light in the head and turns the head toward the suspected pointing location, showing the user controlling the system where the device thinks the camera should be aimed. If this is not correct, the user can then adjust the point, using the feedback to adjust the location. When the system has been in the same place for a short time, e.g., 3-5 seconds, the red light turns off and the device gives a slight head nod to show that this is the place the camera will remain.
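The pointing feedback described above can be sketched as a dwell timer: the red light stays on while the user is still adjusting the aim point, and once the aim has been stable for a few seconds the light goes off and a confirming nod is given. The 4-second dwell below is an assumed value within the 3-5 second range given in the text; all names are illustrative.

```python
# Hypothetical pointing-confirmation behavior: red light while adjusting,
# light off plus a slight head nod once the aim point has dwelt long enough.

DWELL_SECONDS = 4  # assumed value within the 3-5 second range given above

def pointing_feedback(seconds_stable):
    """Return the feedback state given how long the aim point has been unchanged."""
    if seconds_stable < DWELL_SECONDS:
        return {"red_light": True, "nod": False}   # still showing the aim point
    return {"red_light": False, "nod": True}       # aim confirmed; camera stays here
```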
The above-described interaction protocol is exemplary only, and is directed to a particular implementation of the camera system. Similar protocols suitable for use with other embodiments of the invention can be configured in a straightforward manner.
Although particularly well suited for use in home-based video conferencing applications, the exemplary systems 100, 200, 300 and 400 described above can be used in a wide variety of other applications. For example, these systems can be used in video presence and video surveillance applications. It will also be apparent that the invention can be used with image capture devices other than a zoom camera, including, e.g., pan-tilt-zoom (PTZ) cameras. The term "camera" as used herein is therefore intended to include any type of image capture device which can be used in conjunction with a personality-based system.
It should be noted that elements or groups of elements of the exemplary camera systems described above may represent corresponding elements of an otherwise conventional computer, set-top box, etc., as well as portions or combinations of these and other processing devices. Moreover, some or all of the functions of individual elements of the systems 100, 200 and 300 may be combined into a single device. For example, one or more of these system elements may be implemented as an application-specific integrated circuit (ASIC) or circuit card to be incorporated into a computer, television, set-top box or other processing device.
The system software may be configured to be executed by a microprocessor, central processing unit, microcontroller or any other data processing element that may be implemented in the system. In addition, it should be noted that the systems 100, 200 and 300 may be configured to include an electronic memory, an optical or magnetic disk-based memory, a tape-based memory, as well as combinations or portions of these and other types of storage devices, for use in storage and execution of the corresponding system software. Furthermore, the computer or other processing device used for executing the system software may be co-located with the anthropomorphic representation and associated camera, or remotely located, with signals passed between the camera, motor controller, etc. and the processing device via conventional remote control arrangements.
It should again be emphasized that the above-described embodiments of the invention are intended to be illustrative only. For example, the invention can be used with a wide variety of different creature-like representations with many different combinations of controlled features. In addition, although illustrated using a system with a single zoom camera, the invention is also applicable to systems with multiple cameras, systems with PTZ cameras, and systems with other types and arrangements of image capture devices. Moreover, the invention can utilize many different types of techniques to detect and track an object of interest, and to provide feedback to users. These and numerous other embodiments within the scope of the following claims will be apparent to those skilled in the art.

Claims

CLAIMS:
1. A camera system (100, 200, 300) comprising: a camera (102, 202, 302) associated with an anthropomorphic representation having controllable features (116, 118, 215, 221-225, 316, 318); and a controller (108, 110, 208, 210, 308, 310) coupled to the camera for generating control signals for controlling the features of the anthropomorphic representation such that a particular configuration of the features is indicative of an operating state of the camera system.
2. The camera system of claim 1 wherein the camera comprises a zoom camera having adjustable zoom settings.
3. The camera system of claim 1 wherein the camera is embedded in a particular feature of the anthropomorphic representation.
4. The camera system of claim 1 wherein the anthropomorphic representation comprises a creature-like representation.
5. The camera system of claim 1 wherein the anthropomorphic representation comprises a head (116, 215, 316, 415) and a body (417), and the controllable features of the anthropomorphic representation comprise facial features associated with the head.
6. The camera system of claim 5 wherein the controllable facial features comprise at least one of a controllable pair of eyebrows (221, 222, 421, 422), a controllable pair of eyelids (223, 224, 423, 424), and a controllable mouth (225, 425).
7. The camera system of claim 5 wherein the camera is embedded in the head of the anthropomorphic representation, and the controller is further operative to generate control signals for controlling pan and tilt of the head.
8. The camera system of claim 1 further comprising a plurality of cameras, with at least one of the cameras being associated with the controllable features of the anthropomorphic representation, and at least one additional camera (432, 434) comprising a video presence camera.
9. The camera system of claim 1 wherein the controller comprises system software (108, 208, 308) and a motor controller (110, 210, 310), the system software generating control commands which are applied to the motor controller.
10. The camera system of claim 9 wherein the motor controller drives one or more head control motors (112, 230, 232, 312) for controlling pan and tilt of a head of the anthropomorphic representation.
11. The camera system of claim 9 wherein the controllable features of the anthropomorphic representation comprise at least one of a controllable pair of eyebrows, a controllable pair of eyelids, and a controllable mouth, and the motor controller drives at least one of an eyebrow tilt motor (234) for controlling tilt of the eyebrows, an eyebrow height motor (236) for controlling height of the eyebrows, an eyelid motor (238) for controlling upward and downward movement of the eyelids, and a mouth motor (240) for controlling the mouth.
12. The camera system of claim 9 wherein the controllable features of the anthropomorphic representation are implemented at least in part in a display screen (318) of the camera system, and wherein the system software generates control signals which are applied to a display screen driver for controlling the features.
13. The camera system of claim 9 wherein the system software generates control signals for the camera, and processes a video signal generated by the camera.
14. The camera system of claim 9 wherein the system software generates control signals for the camera and the controllable features of the anthropomorphic representation in accordance with an interaction protocol which specifies particular configurations of the features for corresponding operating states of the camera system.
15. A method of implementing a camera system (100, 200, 300), the method comprising the steps of: associating a camera (102, 202, 302) with an anthropomorphic representation having controllable features (116, 118, 215, 221-225, 316, 318); and generating control signals for controlling the features of the anthropomorphic representation such that a particular configuration of the features is indicative of an operating state of the camera system.
16. An article of manufacture comprising a storage medium for storing one or more programs of a set of system software for controlling a camera system (100, 200, 300), wherein the one or more programs when executed by a processor of the camera system implement the step of: generating control signals for controlling features (116, 118, 215, 221-225, 316, 318) of an anthropomorphic representation associated with a camera (102, 202, 302) of the camera system, such that a particular configuration of the features is indicative of an operating state of the camera system.
PCT/EP2000/011296 1999-11-29 2000-11-10 Personality-based intelligent camera system WO2001041428A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP00983139A EP1157545A1 (en) 1999-11-29 2000-11-10 Personality-based intelligent camera system
JP2001541236A JP2003516049A (en) 1999-11-29 2000-11-10 Personality-based intelligent camera system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US45023099A 1999-11-29 1999-11-29
US09/450,230 1999-11-29

Publications (1)

Publication Number Publication Date
WO2001041428A1 true WO2001041428A1 (en) 2001-06-07

Family

ID=23787271

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2000/011296 WO2001041428A1 (en) 1999-11-29 2000-11-10 Personality-based intelligent camera system

Country Status (4)

Country Link
EP (1) EP1157545A1 (en)
JP (1) JP2003516049A (en)
TW (1) TW519826B (en)
WO (1) WO2001041428A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI507028B (en) * 2010-02-02 2015-11-01 Hon Hai Prec Ind Co Ltd Controlling system and method for ptz camera, adjusting apparatus for ptz camera including the same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4819076A (en) * 1987-08-14 1989-04-04 Briggs John A Dual-camera photographic system
WO1998051078A1 (en) * 1997-05-07 1998-11-12 Telbotics Inc. Teleconferencing robot with swiveling video monitor
DE19729508A1 (en) * 1997-07-10 1999-01-14 Dirk Pohl Communication apparatus especially digital telephone for ISDN


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6982748B2 (en) 2002-02-25 2006-01-03 Koninklijke Philips Electronics N.V. Automatically switched camera system with indicator for notifying the next subject of the camera system
WO2003071788A1 (en) * 2002-02-25 2003-08-28 Koninklijke Philips Electronics N.V. Automatically switched camera system with indicator for notifying the next subject of the camera system
US7705877B2 (en) 2004-01-28 2010-04-27 Hewlett-Packard Development Company, L.P. Method and system for display of facial features on nonplanar surfaces
WO2008073283A2 (en) 2006-12-07 2008-06-19 Sensormatic Electronics Corporation Video surveillance system having communication acknowledgement nod
WO2008073283A3 (en) * 2006-12-07 2008-09-12 Sensormatic Electronics Corp Video surveillance system having communication acknowledgement nod
AU2007332837B2 (en) * 2006-12-07 2010-11-11 Sensormatic Electronics, LLC Video surveillance system having communication acknowledgement nod
CN101568944B (en) * 2006-12-07 2012-11-28 传感电子公司 Video surveillance system having communication acknowledgement nod
US8947526B2 (en) 2006-12-07 2015-02-03 Sensormatic Electronics, LLC Video surveillance system having communication acknowledgement nod
US9253376B2 (en) 2011-12-23 2016-02-02 H4 Engineering, Inc. Portable video recording system with automatic camera orienting and velocity regulation of the orienting for recording high quality video of a freely moving subject
US9565349B2 (en) 2012-03-01 2017-02-07 H4 Engineering, Inc. Apparatus and method for automatic video recording
US9800769B2 (en) 2012-03-01 2017-10-24 H4 Engineering, Inc. Apparatus and method for automatic video recording
EP2820840A4 (en) * 2012-03-02 2015-12-30 H4 Eng Inc Multifunction automatic video recording device
US9723192B1 (en) 2012-03-02 2017-08-01 H4 Engineering, Inc. Application dependent video recording device architecture
US9313394B2 (en) 2012-03-02 2016-04-12 H4 Engineering, Inc. Waterproof electronic device
CN110014431A (en) * 2017-11-28 2019-07-16 丰田自动车株式会社 Communication device
CN108490645A (en) * 2018-06-06 2018-09-04 仁怀市五马小学 It is a kind of that there are the glasses for quickly capturing function

Also Published As

Publication number Publication date
EP1157545A1 (en) 2001-11-28
TW519826B (en) 2003-02-01
JP2003516049A (en) 2003-05-07


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

WWE Wipo information: entry into national phase

Ref document number: 2000983139

Country of ref document: EP

ENP Entry into the national phase

Ref country code: JP

Ref document number: 2001 541236

Kind code of ref document: A

Format of ref document f/p: F

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 2000983139

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2000983139

Country of ref document: EP