WO2015068302A1 - Monitoring device, monitoring system, and method for providing information - Google Patents

Monitoring device, monitoring system, and method for providing information

Info

Publication number
WO2015068302A1
WO2015068302A1 (PCT/JP2013/080432, JP2013080432W)
Authority
WO
WIPO (PCT)
Prior art keywords
person
unit
storage unit
information storage
image
Prior art date
Application number
PCT/JP2013/080432
Other languages
English (en)
Japanese (ja)
Inventor
具徳 野村
鈴木 基之
Original Assignee
日立マクセル株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日立マクセル株式会社
Priority to PCT/JP2013/080432
Publication of WO2015068302A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Definitions

  • the present invention relates to a monitoring device, a monitoring system, and an information providing method for detecting a person and performing a predetermined process.
  • Patent Document 1 proposes a television receiver capable of storing a video or audio message and reproducing it as necessary.
  • Patent Document 2 proposes a television receiver that includes a human sensor, determines whether there is a person in the room, and turns off the power when the person is not watching.
  • However, usability is not sufficient for users in various usage environments and situations. For example, when a person is detected while the user is away, these devices do not distinguish between a family member to whom a message should be delivered and a person other than a family member (a suspicious person), and do not perform processing according to that distinction, so usability is poor.
  • An object of the present invention is to provide a monitoring device, a monitoring system, and an information providing method capable of performing processing according to the detected person when a person is detected while the user is away, thereby improving usability.
  • The monitoring device includes a human detection unit that detects a person, a camera unit that captures an image of the person, and an individual information storage unit in which a face image of a specific person is registered and stored in advance.
  • When the human detection unit detects a person, the face image captured by the camera unit is compared with the face images stored in the individual information storage unit, and the detected person's face is recognized. Based on the recognition result, different processing is performed for a person registered in the individual information storage unit and for an unregistered person: message processing is performed if the person is registered, and suspicious person handling is performed if the person is not registered.
  • The recognition result is notified to a preset notification destination.
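  • The patent does not specify a particular face-matching algorithm. The following is a minimal sketch of the branch described above, assuming faces are compared as numeric feature embeddings against the registered entries in the individual information storage unit; the threshold value and the handler callbacks are illustrative placeholders, not part of the patent.

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # illustrative distance threshold; the patent does not specify one


def recognize_person(captured_embedding, registered_embeddings):
    """Compare a captured face embedding with pre-registered embeddings.

    Returns the registered user's name on a match, or None for an
    unregistered (possibly suspicious) person.
    """
    captured = np.asarray(captured_embedding)
    for name, registered in registered_embeddings.items():
        if np.linalg.norm(captured - np.asarray(registered)) < MATCH_THRESHOLD:
            return name
    return None


def handle_detection(captured_embedding, registered_embeddings,
                     message_process, suspicious_process, notify):
    """Branch on the recognition result: message processing for a registered
    person, suspicious person handling otherwise, then notification."""
    name = recognize_person(captured_embedding, registered_embeddings)
    if name is not None:
        message_process(name)
    else:
        suspicious_process()
    notify(name)  # recognition result sent to the preset notification destination
```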
  • A block diagram showing one embodiment of the monitoring system of the present invention.
  • A block diagram showing a configuration example of the television receiver 1 as a monitoring device.
  • A block diagram showing a configuration example of the mail server.
  • An overall flowchart showing the answering machine operation.
  • A flowchart showing a modification of the answering machine operation setting process.
  • A block diagram showing the second embodiment of the monitoring system.
  • A block diagram showing a configuration example of the recording server. FIG. 12 is a detailed flowchart of the answering machine operation setting process according to the second embodiment.
  • FIG. 1 is a block diagram showing an embodiment of the monitoring system of the present invention.
  • the monitoring system includes a television receiver 1, a mobile terminal device 2, a mail server 3, routers 4 and 5, an external network 6, a base station 7, and a broadcast station 8.
  • The television receiver 1 and the mobile terminal device 2 are connected to the routers 4 and 5 via a wireless LAN (Local Area Network) or a wired LAN, respectively, and can transmit and receive information via the external network 6. Further, the television receiver 1 can receive and display digital broadcasts transmitted from the broadcast station 8.
  • The routers 4 and 5 have a wireless LAN function based on Wi-Fi (Wireless Fidelity, trademark) standards such as IEEE 802.11a/b/n, or a wired LAN function, and can be connected to the external network 6 via a communication line.
  • The portable terminal device 2 can send and receive e-mail to and from the mail server 3 via the base station 7 and the external network 6 using long-distance wireless communication such as W-CDMA (Wideband Code Division Multiple Access) or GSM (registered trademark, Global System for Mobile communications).
  • FIG. 2 is a block diagram illustrating a configuration example of the television receiver 1 as a monitoring device.
  • the television receiver 1 includes a control unit 101, a face recognition unit 102, a memory 103, a storage 104, a power supply unit 105, an operation key 106, a remote control light receiving unit 107, a human detection unit 108, a communication unit 109, a tuner unit 112, and a demodulation unit 113.
  • the system bus 110 is a data communication path for performing data transmission / reception between the control unit 101 and each unit in the television receiver 1.
  • the television receiver 1 is connected to an antenna 111 and an external device 140 and can be operated with the operation key 106 or the remote controller 150.
  • the control unit 101 includes a CPU and the like, and controls the entire television receiving apparatus 1 according to an operating system, various application programs, and the like stored in the memory 103.
  • The face recognition unit 102 takes in, via the system bus 110, an image captured by the camera unit 125 and processed by the video processing unit 118, and recognizes the face of the person captured by the camera unit 125.
  • The storage 104 has storage areas for various kinds of information.
  • The individual information storage area 104a stores account information for the user's connection to the mail server 3, face recognition information used by the face recognition unit 102, setting information related to the answering machine operation, and the like.
  • The program recording storage area 104b stores recorded broadcast programs and information related to program recording (reservation information, etc.).
  • In the message storage area 104c, an image captured by the camera unit 125 and/or audio captured by the microphone 126 is stored as a message.
  • In the answering machine information storage area 104d, images captured by the camera unit 125 and audio captured by the microphone 126 during the answering machine operation are stored.
  • the operation key 106 detects that the user has pressed the key, and based on the detected key, the control unit 101 instructs channel switching, volume change, and the like.
  • the remote control light receiving unit 107 receives a remote control signal (infrared signal) emitted from the remote controller 150. Based on the remote control signal received by remote control light receiving unit 107, control unit 101 instructs channel selection in channel selection unit 115, change of OSD to be drawn, and the like.
  • The human detection unit 108 is a sensor that detects whether or not there is a person around the television receiver 1. For example, it detects whether a person is present by detecting the movement of a heat source, or by detecting the difference between the temperature of a heat source and the ambient temperature. The output signal of the human detection unit 108 (hereinafter referred to as the human detection signal) is amplified by an amplifier circuit (not shown), converted into a digital signal by an ADC (Analog-to-Digital Converter, not shown), and then supplied to the control unit 101. When there is a person around the television receiver 1 (more precisely, within the detection range of the human detection unit 108), the temperature change detected by the human detection unit 108 becomes large.
  • the control unit 101 can determine whether or not there is a person around the television receiver 1 by referring to the value of the human detection signal supplied from the human detection unit 108.
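  • A minimal sketch of the presence decision described above, assuming the digitized human detection signal can be read as a numeric sample; the threshold and polling interval are illustrative values, since the patent only speaks of a predetermined threshold.

```python
import time

PRESENCE_THRESHOLD = 512   # illustrative ADC level; the patent only says "predetermined threshold"
POLL_INTERVAL_S = 0.5      # illustrative polling period


def person_present(read_adc_sample) -> bool:
    """Return True when the digitized human detection signal exceeds the
    threshold, mirroring the check made by the control unit 101."""
    return read_adc_sample() > PRESENCE_THRESHOLD


def wait_for_person(read_adc_sample):
    """Poll the sensor until a person enters the detection range (S403/S404)."""
    while not person_present(read_adc_sample):
        time.sleep(POLL_INTERVAL_S)
```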
  • the communication unit 109 is connected to the router 4 by a wired LAN or a wireless LAN, and transmits / receives information, voice, image signal, or mail via an external network 6 such as the Internet.
  • the tuner unit 112 extracts a signal superimposed on the channel selected by the channel selection unit 115 from the broadcast signal received by the antenna 111, and supplies the extracted signal to the demodulation unit 113.
  • the demodulation unit 113 generates stream data in a predetermined format (for example, MPEG2-TS format) from the signal extracted by the tuner unit 112 and supplies the generated stream data to the separation unit 114.
  • the separation unit 114 separates the stream data supplied from the demodulation unit 113 into video data, audio data, and additional data.
  • The video data, audio data, and additional data separated by the separation unit 114 are supplied to the video decoding unit 116, the audio decoding unit 122, and the EPG/OSD processing unit 117, respectively.
  • the video decoding unit 116 decodes the video data separated by the separation unit 114.
  • the video signal obtained by the video decoding unit 116 is supplied to the video processing unit 118.
  • the audio decoding unit 122 decodes the audio data separated by the separation unit 114.
  • The audio signal obtained by the audio decoding unit 122 is supplied to the audio processing unit 123.
  • Based on the additional data separated by the separation unit 114, the EPG/OSD processing unit 117 generates a video signal representing video to be displayed as OSD. Examples of OSD-displayed video include the EPG and subtitles.
  • the video signal generated by the EPG / OSD processing unit 117 is supplied to the synthesis unit 119.
  • In addition to the video signal from the video decoding unit 116 described above, the video processing unit 118 is supplied with a video signal output from the external device 140 (for example, an optical disc device) via the external input unit 129, and a video signal output from the camera unit 125, which is used for face recognition during the answering machine operation, for message input, and the like.
  • the video processing unit 118 processes a target video signal from among these video signals based on a command from the control unit 101. Examples of the processing here include frame rate conversion.
  • the video signal processed by the video processing unit 118 is supplied to the synthesis unit 119.
  • the synthesizing unit 119 synthesizes the video signal processed by the video processing unit 118 and the video signal generated by the EPG / OSD processing unit 117 and supplies the synthesized signal to the display control unit 120.
  • the display control unit 120 drives the display unit 121 based on the synthesized video signal.
  • The audio processing unit 123 is supplied with an audio signal output from the external device 130 via the external input unit 127 and an audio signal output from the microphone 126, which is used when a message is input. The audio processing unit 123 amplifies a target audio signal from among these audio signals based on a command from the control unit 101 and supplies the amplified signal to the speaker 124.
  • the camera unit 125 is used for photographing a person / object around the television receiver 1 during face recognition or answering machine operation and for capturing an image when inputting a message.
  • The image captured by the camera unit 125 is supplied to the video processing unit 118.
  • the microphone 126 is used for capturing sound from a person / object around the television receiver 1 during the answering machine operation and capturing sound when inputting a message.
  • The sound captured by the microphone 126 is supplied to the audio processing unit 123.
  • FIG. 3 is a block diagram showing a configuration example of the mail server 3.
  • the mail server 3 includes a control unit 301, a memory 302, a communication unit 303, and a storage 304, which are mutually connected by a bus 300.
  • the control unit 301 includes a CPU (Central Processing Unit) and the like, and controls each component by executing a program stored in the memory 302 to perform various processes.
  • The communication unit 303 is an interface for connecting to the external network 6, and sends and receives mail to and from network-connectable devices such as the television receiver 1 and the portable terminal device 2 via the base station 7 or the routers 4 and 5.
  • the storage 304 has an area 304a for storing user information such as an account and a password for authenticating each user connected to the mail server 3, and an area 304b for storing each user's mail information.
  • FIG. 4 is an overall flowchart showing the answering machine operation by the television receiver 1.
  • In S400, as initial settings for user registration, personal information such as face images and e-mail addresses of family members and acquaintances is registered.
  • For face recognition registration, each user's face is imaged by the camera unit 125 and stored in the individual information storage area 104a of the storage 104.
  • The mail address information is input using the remote controller 150 or the operation key 106 and stored in the individual information storage area 104a in the same manner.
  • For the answering machine operation mode, not only the normal “answering machine mode” used when the user is absent but also an “at-home mode” used when the user is at home but in a room different from the one where the television receiver 1 is installed are provided, and the mode is switched according to the situation. Of course, when the user is in the room where the television receiver 1 is installed, the answering machine mode is turned off.
  • In S402, the answering machine operation setting process is performed.
  • “Suspicious person response”, “notification function”, “message function”, and the like are selected as answering machine functions, and their conditions are set. Details of S402 will be described later with reference to FIG. 5.
  • In step S403, the control unit 101 takes in the human detection signal output from the human detection unit 108.
  • In step S404, the control unit 101 determines whether there is a person within the detection range of the human detection unit 108 by comparing the value of the human detection signal with a predetermined threshold value. If it is determined in S404 that there is no person (No), the process returns to S403 and the output of the human detection unit 108 is taken in again.
  • If it is determined in S404 that there is a person (Yes), the process proceeds to S405, and the control unit 101 takes in an image captured by the camera unit 125.
  • In S406, face recognition processing is performed based on the user's face recognition registration information stored in advance in the individual information storage area 104a of the storage 104.
  • The face recognized in the face recognition process of S406 is compared with the registered users' faces to determine whether or not they match. If it does not match any registered user's face (No), the process proceeds to S408, and the suspicious person handling process (generation of a warning sound, etc.) is executed. Details of the suspicious person handling process of S408 will be described later with reference to FIG. 7.
  • If the face matches a registered user's face (Yes), the process proceeds to S409 to determine whether that user is registered as a message destination. If the user is registered as a message destination (Yes), the process proceeds to S410, and message output processing is performed.
  • In the message output process of S410, the message content (video, audio message, etc.) stored in the message storage area 104c of the storage 104 is output from the display unit 121 or the speaker 124. In this way, the message content can be conveyed to the person registered as the message destination. If the user is not registered as a message destination in S409 (No), the process proceeds to S411.
  • In S411, it is determined whether a notification destination is set.
  • If a notification destination is set (Yes), the process proceeds to S412 to perform notification processing.
  • In the notification process, the result of the face recognition process of S406 and the like are sent by e-mail to the preset notification destination.
  • the mobile terminal device 2 can receive the mail transmitted in the notification process S412 from the mail server 3 and confirm whether the user registered for face recognition has returned home or whether a suspicious person has entered.
  • Further, by attaching the image captured at the time of face recognition, the recipient can confirm whether the detected person is a user registered for face recognition or a suspicious person.
  • the notification method is not limited to e-mail, and may be by telephone.
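  • The mail mechanics are left to the mail server 3. The sketch below shows one conventional way to send such a notification, with the captured face image attached, over SMTP using Python's standard library; the server address, sender, and recipient addresses are placeholders.

```python
import smtplib
from email.message import EmailMessage


def send_notification(recognized_name, image_bytes, recipients,
                      smtp_host="mail.example.com", sender="tv@example.com"):
    """Send the recognition result of S406 (optionally with the captured face
    image) to the preset notification destinations, as in S412."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    if recognized_name is not None:
        msg["Subject"] = "Registered user detected"
        msg.set_content(f"{recognized_name} was detected at home.")
    else:
        msg["Subject"] = "Suspicious person detected"
        msg.set_content("An unregistered person was detected during your absence.")
    if image_bytes:
        msg.add_attachment(image_bytes, maintype="image",
                           subtype="jpeg", filename="face.jpg")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```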
  • the answering machine operation mode is canceled and the process ends.
  • With the answering machine operation of the present embodiment, when a person is detected during the user's absence, it is determined based on the user's face recognition registration information whether or not the person is a registrant, and processing according to the detected person, such as suspicious person handling, message processing, and notification processing, can be performed appropriately.
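  • Putting steps S403 to S412 together, a simplified control loop might look like the sketch below; the sensor, camera, recognizer, settings object, and the three handler callbacks are placeholders standing in for the units described above, not interfaces defined by the patent.

```python
def answering_machine_loop(sensor, camera, recognizer, settings,
                           handle_suspicious, play_message, notify):
    """Simplified S403-S412 flow: poll the sensor, capture an image,
    recognize the face, then branch on the recognition result."""
    while settings.answering_mode_on:
        if not sensor.person_detected():                        # S403/S404
            continue
        image = camera.capture()                                # S405
        name = recognizer.recognize(image)                      # S406: face recognition
        if name is None:
            handle_suspicious(image)                            # S408: unregistered person
        elif name in settings.message_destinations:
            play_message(name)                                  # S409/S410: registered message destination
        if settings.notification_destinations:                  # S411
            notify(name, image, settings.notification_destinations)  # S412
```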
  • FIG. 5 is a detailed flowchart of the answering machine operation setting process (S402).
  • In S501, a screen for selecting an item to set for the answering machine operation is displayed on the display unit 121 (see FIG. 6A).
  • In S502, the user selects and inputs one of the setting items displayed on the screen using the operation key 106 (or the remote controller 150; the same applies hereinafter).
  • In S503, the process branches according to the selected setting item. If “end” is selected in the selection input of S502, the answering machine operation setting process ends.
  • If “suspicious person response” is selected in the selection input of S502, the process proceeds to S504, and the suspicious person response setting screen is displayed (see FIG. 6B).
  • In S505, the user inputs the suspicious person response settings using the remote controller 150, and in S506, the input items are set and registered. Thereafter, the process returns to S501, the answering machine operation setting selection screen is displayed, and the settings can be added or updated.
  • FIGS. 6A to 6D are diagrams showing display examples of the display unit 121 in the answering machine operation setting process (S402).
  • FIG. 6A is an example of the screen display in the answering machine operation setting selection screen display process (S501), in which the current setting states of the “suspicious person response”, “notification destination”, and “message” items are displayed.
  • To change (update) a setting state, the user operates the operation key 106 to select the “suspicious person response” button (6a-2), the “notification destination” button (6a-3), or the “message” button (6a-4). If no setting state is to be changed, the “end” button (6a-1) is selected.
  • FIG. 6B is an example of the screen display in the suspicious person response setting screen display process (S504), in which the current setting states of “warning sound” (6b-2), “warning display” (6b-3), “recording” (6b-4), “report” (6b-5), and “report destination” (6b-6) are displayed.
  • the user selects the “ON” button or the “OFF” button for each setting item using the operation key 106, thereby changing the ON / OFF setting for each item.
  • a mail address of a report destination such as a security company can be input and registered with the remote controller 150.
  • suspicious person information can also be transmitted to the residence area.
  • The e-mail address may be input directly using a numeric keypad (not shown) of the remote controller 150, or a software keyboard may be displayed on the screen and operated with the remote controller 150 or the like.
  • When the settings are completed, the “end” button (6b-1) is selected.
  • FIG. 6C is an example of screen display in the notification destination setting screen display process (S507), and a plurality of notification destinations can be set.
  • For each notification destination, a check box (6c-2) for setting whether or not to notify, the user registration name (6c-3) of the notification destination, and the mail address (6c-4) of the notification destination can be set.
  • By selecting the check box (6c-2), whether or not to notify that destination is toggled.
  • the user registration name (6c-3) of the notification destination and the mail address (6c-4) of the notification destination may be directly input using a numeric keypad (numeric key) (not shown) of the remote controller 150.
  • a software keyboard may be displayed on the screen, and the software keyboard may be operated with the remote controller 150 or the like.
  • FIG. 6D is an example of the screen display in the message setting screen display process (S510), where the registered users' faces (6d-2 to 6d-5) are displayed, and a plurality of message destinations can be set. By selecting a user's face from among these, whether or not that user is a message destination is toggled. Further, by selecting the “message input” button (6d-6), a message can be input.
  • As a message input method, for example, the video and/or voice of the person inputting the message is captured and registered by the camera unit 125 and the microphone 126. Alternatively, a message may be input as text with the remote controller 150.
  • When the settings are completed, the “end” button (6d-1) is selected.
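  • The items configured on the screens of FIGS. 6A to 6D could be held in a structure such as the following sketch; the field names are illustrative and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SuspiciousPersonSettings:
    """ON/OFF items of FIG. 6B plus the report destination."""
    warning_sound: bool = False
    warning_display: bool = False
    recording: bool = False
    report: bool = False
    report_destination: str = ""      # e.g. a security company's mail address


@dataclass
class AnsweringMachineSettings:
    """Settings gathered by the answering machine operation setting process (S402)."""
    answering_mode_on: bool = False
    suspicious: SuspiciousPersonSettings = field(default_factory=SuspiciousPersonSettings)
    notification_destinations: Dict[str, str] = field(default_factory=dict)   # name -> mail address (FIG. 6C)
    message_destinations: List[str] = field(default_factory=list)             # registered users chosen in FIG. 6D
    messages: Dict[str, bytes] = field(default_factory=dict)                  # recorded video/audio messages
```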
  • FIG. 7 is a detailed flowchart of the suspicious person handling process (S408).
  • processing is performed according to the conditions set in FIG. 6B.
  • In S701, it is determined whether the warning sound setting is set to ON in the suspicious person response settings. If it is set to ON (Yes), the process proceeds to the warning sound output process S702, and a warning sound is generated from the speaker 124.
  • In S703, it is determined whether the warning display setting is set to ON. If it is set to ON (Yes), the process proceeds to the warning screen display process S704, and a warning screen is displayed on the display unit 121.
  • In S705, it is determined whether the recording setting is set to ON. If it is set to ON (Yes), the process proceeds to the recording process S706, and the video captured by the camera unit 125 and the audio input from the microphone 126 are stored in the answering machine information storage area 104d of the storage 104.
  • In S707, it is determined whether the report setting is set to ON. If it is set to ON (Yes), the process proceeds to the report process S708, and a report is sent to the preset report destination via the communication unit 109. For example, setting a security company as the report destination is particularly effective: by collecting the suspicious person information (face images) transmitted from the monitoring devices of individual households and reporting it to the police, the security company can contribute to local crime prevention.
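  • A sketch of the conditional handling of S701 to S708, assuming a settings structure like the one sketched earlier and placeholder callbacks for the speaker, display, storage, and reporting paths.

```python
def handle_suspicious_person(image, audio, suspicious_settings,
                             play_warning_sound, show_warning_screen,
                             store_recording, send_report):
    """S701-S708: perform only the actions enabled in the suspicious person
    response settings of FIG. 6B."""
    if suspicious_settings.warning_sound:        # S701 -> S702: warning sound from the speaker
        play_warning_sound()
    if suspicious_settings.warning_display:      # S703 -> S704: warning screen on the display
        show_warning_screen()
    if suspicious_settings.recording:            # S705 -> S706: store video/audio as answering machine information
        store_recording(image, audio)
    if suspicious_settings.report:               # S707 -> S708: report to the preset destination
        send_report(suspicious_settings.report_destination, image)
```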
  • A step of requesting password input within a predetermined time may be provided at the beginning of the suspicious person handling process S408, and the suspicious person handling process may be terminated when the correct password is input within the predetermined time.
  • In that case, the subsequent suspicious person handling is not performed.
  • In the above, a person's face is recognized and appropriate processing is performed. However, in a household that keeps a pet such as a dog or a cat, the pet may be detected by the human detection unit 108 and handled as a suspicious person.
  • If the domestic pet is added to the face recognition registration, such a malfunction will not occur.
  • As a method of judging a pet, its size and/or shape may be registered and used for the judgment. For example, if the detected subject is recognized as a pet, it may be determined that there is no person, and the process may return to S403 to take in the output of the human detection unit 108 again.
  • FIG. 8 is an overall flowchart showing a modification of the answering machine operation. This is a modification of FIG. 4, and the same processing steps as those in FIG. 4 are given the same reference numerals and their description is omitted. In this example, a step S420 for displaying a specific screen after human detection is added, after which the image captured by the camera unit 125 is taken in and face recognition is performed.
  • If it is determined in S404 that there is a person (Yes), the process proceeds to S420, and a specific screen is displayed on the display unit 121. Subsequently, in S405, an image captured by the camera unit 125 is taken in, and face recognition processing is performed in S406.
  • The specific screen displayed here is preferably a bright screen that attracts the attention of the detected person. The detected person then turns his or her face toward the television receiver 1 (camera unit 125), and the bright screen illuminates the detected person like a lighting fixture. As a result, a clear face image can be captured even in a dark room, the accuracy of face recognition is improved, and registered users and unregistered suspicious persons can be discriminated easily.
  • Instead of displaying a specific screen on the display unit 121, a specific sound may be output from the speaker 124 as a process for making the detected person turn his or her face toward the television receiver 1, or both the screen display and the sound output may be performed.
  • FIG. 9 is a flowchart showing a modification of the answering machine operation setting process. This is a modification of FIG. 5, and the same processing steps as those in FIG. 5 are given the same reference numerals and their description is omitted. In this example, steps S520 to S522 are added in which the face of the user who performs the answering machine setting is recognized and the recognized user is set as a notification destination.
  • An image captured by the camera unit 125, showing the user performing the setting work, is taken in.
  • face recognition processing is performed based on the user's face recognition registration information stored in advance in the individual information storage area 104a of the storage 104.
  • the recognized user is set as a notification destination. Thereafter, as in FIG. 5, the answering machine operation setting selection screen is displayed on the display unit 121 (S501), and various setting operations are performed.
  • the notification destination setting performed in S507 to S509 can be simplified. If it is desired to set a user other than the setter as the notification destination, the setting may be made through S507 to S509.
  • FIG. 10 is a block diagram showing a second embodiment of the monitoring system of the present invention.
  • The recording server 9 is added to the configuration of the first embodiment (FIG. 1); the rest of the configuration is the same as that of FIG. 1.
  • the recording server 9 is additionally connected to the external network 6.
  • The television receiver 1 stores, in the recording server 9 via the router 4, answering machine information consisting of video captured by the camera unit and audio input from the microphone during the answering machine operation.
  • The portable terminal device 2 is connected to the external network 6 via the router 5 by wireless LAN, or via the base station 7 by long-distance wireless communication such as W-CDMA or GSM, and can view the answering machine information stored in the recording server 9.
  • The hardware configuration of the television receiver 1 is the same as that in FIG. 2.
  • FIG. 11 is a block diagram illustrating a configuration example of the recording server 9.
  • the recording server 9 includes a control unit 901, a memory 902, a communication unit 903, and a storage 904, which are mutually connected by a bus 900.
  • the control unit 901 is configured by a CPU or the like, and controls each component by executing a program stored in the memory 902 to perform various processes.
  • a communication unit 903 is an interface for connecting to the external network 6, and can send and receive answering machine information to and from the television receiver 1 or the mobile terminal device 2 corresponding to the network connection.
  • the storage 904 has an area 904a for storing user information such as an authentication account and a password for each user connected to the recording server 9, and an area 904b for storing answering machine information.
  • the answering machine operation of the television receiver 1 in this embodiment is the same as that shown in FIG. 4 or FIG. 8 except for the answering machine operation setting process S402.
  • FIG. 12 is a detailed flowchart of the answering machine operation setting process (S402) in the present embodiment.
  • In this embodiment, the video captured by the camera unit 125 and the audio input from the microphone 126 during the answering machine operation of the television receiver 1 are stored as answering machine information in the recording server 9 or in the built-in storage 104.
  • Recording destination setting processes S530 to S532 for this purpose are added.
  • a recording destination setting screen is displayed in S530.
  • the user inputs a recording destination setting by the remote controller 150.
  • the item input by the user is set and registered, and the process returns to S501.
  • FIGS. 13A to 13C are diagrams showing display examples of the display unit 121 in the answering machine operation setting process of FIG. 12.
  • FIG. 13A is an example of screen display in the answering machine operation setting selection screen display process (S501), and an item “recording destination” is added to the screen display example of FIG. 6A.
  • By selecting the “recording destination” button (13a-1) using the operation key 106, the screen proceeds to the recording destination setting screen.
  • FIG. 13B is an example of screen display in the recording destination setting screen display processing (S530), and “built-in storage” and “recording server” can be selected as recording destinations.
  • By selecting one of these, the recording destination is set (or changed).
  • FIG. 13C is an example of the screen display when “recording server” is selected; an “account name” (13c-1) and “password” (13c-2) for user authentication to connect to the recording server are input and set.
  • Since the recorded content is stored in the recording server 9, it can be reproduced even if the information in the built-in storage 104 of the television receiver 1 is destroyed.
  • Furthermore, even without attaching an image to the notification mail sent to the mobile terminal device 2, the notification destination user can view the recorded content stored in the recording server 9 via the external network 6.
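  • The protocol between the television receiver 1 and the recording server 9 is not defined by the patent; the sketch below shows one possible upload path over HTTP with basic authentication using the third-party requests library, where the URL, credentials, and form field names are illustrative.

```python
import requests  # third-party HTTP client, used here for brevity


def upload_answering_info(video_bytes, audio_bytes, account, password,
                          server_url="https://recording-server.example.com/upload"):
    """Upload answering machine information (video and audio captured during the
    answering machine operation) to the recording server, authenticating with the
    account name and password set on the screen of FIG. 13C."""
    response = requests.post(
        server_url,
        files={
            "video": ("answering.mp4", video_bytes, "video/mp4"),
            "audio": ("answering.wav", audio_bytes, "audio/wav"),
        },
        auth=(account, password),  # basic authentication; the actual scheme is not specified
        timeout=30,
    )
    response.raise_for_status()
    return response
```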
  • the monitoring device in the present invention is not limited to a television receiver, and may be an information display terminal device having a camera unit, a face recognition unit, and the like. Specifically, a personal computer or a pet-type robot may be used.
  • the present invention is not limited to the above-described embodiments, and includes various modifications.
  • the above-described embodiments are described in detail for the entire system in order to explain the present invention in an easy-to-understand manner, and are not necessarily limited to those having all the configurations described.
  • a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • each of the above-described configurations, functions, processing units, processing means, and the like may be realized by hardware by designing a part or all of them with, for example, an integrated circuit.
  • Each of the above-described configurations, functions, and the like may be realized by software by interpreting and executing a program that realizes each function by the processor.
  • Information such as programs, tables, and files for realizing each function can be stored in a recording device such as a memory, a hard disk, an SSD (Solid State Drive), or a recording medium such as an IC card or an SD card.
  • the face recognition information and the setting information related to the answering machine operation are stored in the storage 104, but may be stored in the memory 103.
  • each processing example may be independent programs, or a plurality of programs may constitute one application program. Further, the order of performing each process may be changed and executed.
  • control lines and information lines indicate what is considered necessary for the explanation, and not all the control lines and information lines on the product are necessarily shown. Actually, it may be considered that almost all the components are connected to each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Social Psychology (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Telephonic Communication Services (AREA)

Abstract

 The present invention provides a monitoring device that, when a person is detected during an absence, performs processing corresponding to the detected person. A monitoring device (television receiver (1)) is provided with a human detection unit (108) for detecting a person, a camera unit (125) for capturing an image of a person, and an individual information storage unit (104a) for registering and storing in advance the face image of a specific person. When a person is detected by the human detection unit, face recognition of the detected person is performed by comparing the face image captured by the camera unit with the face image stored in the individual information storage unit. On the basis of the recognition result, processing is performed differently for a person registered in the individual information storage unit and for an unregistered person: message processing is performed in the case of a registered person, and suspicious person handling processing is performed in the case of an unregistered person. In addition, the recognition result is notified to a preset notification destination.
PCT/JP2013/080432 2013-11-11 2013-11-11 Monitoring device, monitoring system, and method for providing information WO2015068302A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2013/080432 WO2015068302A1 (fr) 2013-11-11 2013-11-11 Monitoring device, monitoring system, and method for providing information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2013/080432 WO2015068302A1 (fr) 2013-11-11 2013-11-11 Monitoring device, monitoring system, and method for providing information

Publications (1)

Publication Number Publication Date
WO2015068302A1 true WO2015068302A1 (fr) 2015-05-14

Family

ID=53041103

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/080432 WO2015068302A1 (fr) 2013-11-11 2013-11-11 Monitoring device, monitoring system, and method for providing information

Country Status (1)

Country Link
WO (1) WO2015068302A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE202016105934U1 2016-10-21 2017-08-22 Krones Ag Docking station for a labelling unit

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008294939A (ja) * 2007-05-28 2008-12-04 Funai Electric Co Ltd Television receiver
JP2010074551A (ja) * 2008-09-18 2010-04-02 Sony Corp Television receiver and recording method
JP2010226541A (ja) * 2009-03-25 2010-10-07 Brother Ind Ltd Reception device, visitor reception method, and visitor reception control program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE202016105934U1 2016-10-21 2017-08-22 Krones Ag Docking station for a labelling unit
WO2018072901A1 (fr) 2016-10-21 2018-04-26 Krones Ag Docking station for a labelling unit


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13896955

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13896955

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP