WO2019106734A1 - Computer system, device control method, and program - Google Patents
- Publication number
- WO2019106734A1 (PCT/JP2017/042697)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- person
- name
- image
- module
- command
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
Definitions
- The present invention relates to a computer system that controls devices, a device control method, and a program.
- In Patent Document 1, since only a person is identified, there is no suggestion of controlling devices around that person.
- An object of the present invention is to provide a computer system, a device control method, and a program capable of causing only the device closest to a person matching a desired person's name to act.
- To achieve this object, the present invention provides the following solutions.
- The present invention provides a computer system comprising: acquisition means for acquiring an image of a person; identification means for analyzing the image and identifying the person; detection means for detecting devices in the vicinity of the person; reception means for accepting a desired person's name from among the identified persons; and control means for causing a device existing in the vicinity of the person matching that name to execute a predetermined action.
- According to the present invention, the computer system acquires an image of a person, analyzes the image to identify the person, detects devices in the vicinity of the person, accepts a desired person's name from among the identified persons, and causes a device existing near the person matching that name to execute a predetermined action.
- Although the present invention belongs to the category of computer systems, it exhibits the same operations and effects in other categories, such as a method or a program.
- FIG. 1 is a diagram showing an outline of the device control system 1.
- FIG. 2 is an overall configuration diagram of the device control system 1.
- FIG. 3 is a functional block diagram of the wearable terminal 100.
- FIG. 4 is a flowchart showing the device control process performed by the wearable terminal 100.
- FIG. 5 is a flowchart showing the device control process performed by the wearable terminal 100.
- FIG. 6 is an example schematically showing a state in which the wearable terminal 100 specifies a distance.
- FIG. 7 is an example schematically showing a state in which the wearable terminal 100 specifies position information of a person.
- FIG. 1 is a view for explaining an outline of a device control system 1 according to a preferred embodiment of the present invention.
- the device control system 1 is a computer system configured of a wearable terminal 100.
- The wearable terminal 100 performs data communication with devices such as a speaker, a smartphone, or another wearable terminal (not shown) via a gateway terminal such as a router (not shown) or a computer having a server function.
- the number of wearable terminals 100 can be changed as appropriate.
- the wearable terminal 100 is not limited to an existing device, and may be a virtual device.
- the device control system 1 may be configured by the above-described device or computer.
- the wearable terminal 100 is a terminal device connected so as to be capable of data communication with a device.
- the wearable terminal 100 is, for example, a terminal device such as a smart glass worn by the user or a head mounted display.
- The wearable terminal 100 may also be configured as an audio device such as a speaker or headphones, a mobile phone, a portable information terminal, a tablet terminal, a personal computer, or an electric appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player.
- the wearable terminal 100 captures an image of a target person using an imaging device such as a stereo camera that the wearable terminal 100 has (step S01).
- the wearable terminal 100 acquires an image of a person by photographing the person.
- The wearable terminal 100 may be configured to shoot continuously, or to shoot based on an instruction input from the user such as voice input or gesture input. When shooting continuously, it may also delete captured images after a predetermined time (e.g., 30 seconds, 1 minute, 5 minutes) has elapsed.
- the wearable terminal 100 analyzes the acquired image and identifies a person appearing in the image (step S02).
- the wearable terminal 100 analyzes an image (feature point analysis or feature amount analysis).
- The wearable terminal 100 holds a person database in which the feature points and feature amounts of candidate persons are registered in advance in association with each person's name and the identifiers of terminal devices the person owns, such as a smartphone or wearable terminal (SSID, phone number, mail address, ID of a communication application, etc.).
- The wearable terminal 100 identifies the person in the image by collating the feature points and feature amounts extracted this time by image analysis with those registered in the person database.
- The wearable terminal 100 may perform machine learning on the person identified this time, by pattern recognition such as a neural network, and may thereby improve the accuracy of person identification in subsequent identifications.
- the wearable terminal 100 detects a device in the vicinity of the identified person (step S03).
- the wearable terminal 100 acquires position information of itself from a GPS (Global Positioning System).
- The wearable terminal 100 identifies the distance between its own position and the identified person.
- The wearable terminal 100 specifies the position information of the person based on the viewing direction of the user (the direction in which the user is facing) and the distance.
- The wearable terminal 100 detects devices in the vicinity of the person (for example, within a radius of a few meters) based on the position information of the person.
- The wearable terminal 100 performs data communication with devices via a router (not shown), acquires from each device the position information that the device obtained from GPS, and detects the devices that satisfy the proximity condition.
- the wearable terminal 100 receives a desired personal name among the specified persons (step S04).
- the wearable terminal 100 receives a person's name by, for example, voice input.
- the wearable terminal 100 causes the device existing in the vicinity of the person who matches the person's name to execute a predetermined action (step S05).
- the wearable terminal 100 generates a command for causing a device in the vicinity of the person to perform a predetermined action.
- the wearable terminal 100 acquires device information (a product number, a manufacturer, a specification, and the like) of the device, and generates a command according to an action to be performed by the device based on the device information.
- the wearable terminal 100 transmits the generated command to the device.
- the device performs a predetermined action based on this command.
- the predetermined action is, for example, the following.
- When the wearable terminal 100 detects a speaker close to the person, it causes only that speaker to amplify the user's voice. When it detects a smartphone owned by the person, it performs control to place a call only to that smartphone. Likewise, when it detects a smartphone owned by the person, it performs control to send an email only to that smartphone.
- FIG. 2 is a diagram showing a system configuration of the device control system 1 according to a preferred embodiment of the present invention.
- the device control system 1 is a computer system configured of a wearable terminal 100.
- The device control system 1 may be realized not only by existing devices but also by virtual devices. Further, the device control system 1 is not limited to a configuration using only the wearable terminal 100, and may also include devices such as a speaker, a smartphone, other wearable terminals, a gateway terminal, or a computer (not shown). In that case, the devices constituting the device control system 1 are connected so as to be capable of data communication with one another. Each process described later may be realized by the wearable terminal 100, or by any one or a combination of the above-described devices, gateway terminal, or computer.
- the wearable terminal 100 is the above-described terminal device having a function described later.
- FIG. 3 is a functional block diagram of the wearable terminal 100.
- The wearable terminal 100 includes, as the control unit 110, a central processing unit (CPU), a random access memory (RAM), a read only memory (ROM), and the like, and includes, as the communication unit 120, a device for communicating with other devices (not shown).
- The communication unit 120 is, for example, a device compliant with IEEE 802.11 (Wi-Fi) or a near field communication device.
- The wearable terminal 100 includes, as the storage unit 130, a data storage device such as a hard disk, a semiconductor memory, a recording medium, or a memory card.
- the wearable terminal 100 includes, as the processing unit 140, various devices that execute image analysis, voice recognition, command generation, person specification, person shooting, various calculations, processing, and the like.
- In the wearable terminal 100, the control unit 110 reads a predetermined program to realize, in cooperation with the communication unit 120, the device detection module 150 and the connection module 151. Further, the control unit 110 reads a predetermined program to realize, in cooperation with the storage unit 130, the storage module 160. Further, the control unit 110 reads a predetermined program to realize, in cooperation with the processing unit 140, the photographing module 170, the person determination module 171, the person identification module 172, the distance identification module 173, the voice reception module 174, and the control module 175.
- FIG. 4 and FIG. 5 are flowcharts of the device control process executed by the wearable terminal 100. The processing executed by each of the modules described above will be described together with this process.
- the photographing module 170 photographs an image such as a still image or a moving image (step S10).
- the imaging module 170 captures a stereoscopic image using a stereo camera.
- the imaging module 170 captures an image based on a shooting instruction from the user (for example, an input by a voice instructing a shooting or an input by a gesture).
- The user issues a shooting instruction, for example, by uttering the target person's name, the word "shoot", or another specific phrase.
- the wearable terminal 100 acquires an image of a person by capturing an image.
- the imaging module 170 may be configured to always capture an image regardless of a shooting instruction from the user. For example, the imaging module 170 may capture an image constantly or at predetermined time intervals. In this case, the storage module 160 temporarily stores the image captured by the imaging module 170. The storage module 160 deletes this image if the process of accepting the name of the person to be described later is not executed even after a predetermined time (for example, 30 seconds, 1 minute, 5 minutes) has elapsed after storage.
- When the wearable terminal 100 performs the process of accepting a person's name, described later: if images are captured constantly, it performs the subsequent processing based on the image at the time of acceptance; if images are captured at predetermined intervals, it may perform the subsequent processing based on the image captured closest to the time of acceptance.
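- The continuous-capture behavior above (temporary storage, deletion after a predetermined time, and selecting the frame closest to the moment a name is accepted) can be sketched as follows; the class and method names are illustrative, not from the patent:

```python
import time
from collections import deque

class TimedImageBuffer:
    """Temporary store for continuously captured frames. Frames older than
    `ttl_seconds` are discarded, mirroring the deletion after a predetermined
    time (e.g., 30 seconds) described above."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._frames = deque()  # (timestamp, frame) pairs, oldest first

    def store(self, frame, now=None):
        now = time.time() if now is None else now
        self._frames.append((now, frame))
        # Drop frames whose age exceeds the retention window.
        while self._frames and now - self._frames[0][0] > self.ttl:
            self._frames.popleft()

    def closest_to(self, t):
        """Frame captured closest to time t - used when images are taken at
        predetermined intervals and a person's name is accepted at time t."""
        if not self._frames:
            return None
        return min(self._frames, key=lambda pair: abs(pair[0] - t))[1]
```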
- the person determination module 171 determines whether a person is included in the image (step S11). In step S11, the person determination module 171 extracts feature points and feature amounts in the image. The person determination module 171 determines whether a person is included in the image based on the extracted feature points and feature amounts.
- When the person determination module 171 determines that a person is not included in the image (NO in step S11), the process of step S10 is performed again.
- the person identifying module 172 identifies the person in the image (step S12).
- The person specifying module 172 specifies the person included in the image by referring to a person database in which the feature points and feature amounts of candidate persons, stored in advance in the storage module 160, are registered in association with each person's attribute information (the person's name and the identifiers of terminal devices the person owns, such as a smartphone or wearable terminal: SSID, telephone number, e-mail address, ID of a communication application, etc.).
- The person specifying module 172 collates the feature points and feature amounts extracted this time with those registered in the person database, and identifies the attribute information associated with the feature points and feature amounts obtained as the matching result.
- the person specifying module 172 specifies that a person corresponding to the attribute information is a person in the image.
- When a plurality of persons are included in the image, the person specifying module 172 is described as specifying each of them.
- the person specifying module 172 may be configured to perform machine learning by pattern recognition such as a neural network on the person specified this time, and to improve the person specifying accuracy at the next and subsequent times.
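- As a rough illustration of the collation step, the sketch below matches an extracted feature vector against a pre-registered person database by nearest-neighbor distance; the database contents, threshold, and field names are hypothetical:

```python
import math

# Hypothetical person database: feature vectors registered in advance,
# associated with attribute information (name, owned-device identifiers).
PERSON_DB = [
    {"name": "Yamada", "features": [0.9, 0.1, 0.3],
     "device_ids": {"ssid": "yamada-phone"}},
    {"name": "Tanaka", "features": [0.2, 0.8, 0.5],
     "device_ids": {"ssid": "tanaka-phone"}},
]

def identify_person(extracted_features, db=PERSON_DB, threshold=0.5):
    """Collate the feature vector extracted from the image with the
    registered vectors; return the best-matching attribute record,
    or None when no registered person is close enough."""
    best = min(db, key=lambda rec: math.dist(rec["features"], extracted_features))
    if math.dist(best["features"], extracted_features) <= threshold:
        return best
    return None
```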
- The distance specifying module 173 specifies the distance between the specified person and the position of the user (step S13). In step S13, the distance specifying module 173 specifies the distance from the result of learning, in advance, the shift between the two images captured by the imaging module 170.
- FIG. 6 is a view schematically showing a state in which the distance specifying module 173 specifies a distance.
- the photographing module 170 photographs a person 220 by the photographing device provided in each of the first lens 200 and the second lens 210 of the wearable terminal 100.
- the imaging module 170 superposes two images and captures one stereoscopic image.
- the distance specifying module 173 estimates the distance D between itself and the person 220 from the result of learning in advance the length of the image shift in the stereoscopic image and the actual distance.
- the configuration in which the distance specifying module 173 estimates the distance between itself and the person 220 is not limited to the above-described example, and may be another method, and can be appropriately changed.
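- One such alternative method is the pinhole stereo relation depth = f · B / d; the patent learns the disparity-to-distance mapping in advance, so this closed form is only a stand-in under ideal calibration assumptions:

```python
def stereo_distance(disparity_px, focal_length_px, baseline_m):
    """Pinhole stereo relation: depth = f * B / d, where d is the pixel
    shift between the two lens images, f the focal length in pixels,
    and B the baseline between the first and second lenses in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```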
- The distance specifying module 173 specifies the position information of the person 220 based on its own position information acquired from GPS and the specified distance between itself and the person 220 (step S14). In step S14, the distance specifying module 173 specifies the position information of the person 220 based on the viewing direction of the user (the direction in which the user is facing) and the distance.
- FIG. 7 is a diagram schematically showing a state in which the distance specifying module 173 specifies position information of the person 220.
- FIG. 7 schematically indicates the viewing direction F of the wearable terminal 100 (the self) and the distance D between the self and the person 220.
- the distance specifying module 173 specifies, from the view direction F, in which direction the person 220 is viewed from the user.
- the distance specifying module 173 specifies the position information of the person 220 based on the specified direction, the position information of itself, and the distance D. That is, the distance specifying module 173 specifies the position information of the person 220 by adding the distance D toward the corresponding direction to the position information of itself.
- the configuration in which the distance specifying module 173 specifies the position information of the person 220 is not limited to the example described above, and can be changed as appropriate.
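- The "add the distance D toward the corresponding direction" step corresponds to the standard destination-point computation on a spherical Earth; a sketch, treating the viewing direction as a compass bearing (an assumption not stated in the patent):

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def offset_position(lat_deg, lon_deg, bearing_deg, distance_m):
    """Move `distance_m` from (lat, lon) along a compass bearing on a
    spherical Earth, returning the destination (lat, lon) in degrees.
    Here the bearing stands in for the viewing direction F."""
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    theta = math.radians(bearing_deg)
    delta = distance_m / EARTH_RADIUS_M  # angular distance
    lat2 = math.asin(math.sin(lat1) * math.cos(delta)
                     + math.cos(lat1) * math.sin(delta) * math.cos(theta))
    lon2 = lon1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(lat1),
                             math.cos(delta) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```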
- the device detection module 150 detects devices in the vicinity of the person 220 (step S15).
- the device detection module 150 performs data communication with the device via a router (not shown).
- the device detection module 150 acquires, from the device, position information of the device acquired by the device from the GPS.
- The device detection module 150 compares the acquired position information of each device with the position information of the identified person 220, and detects a device satisfying a predetermined condition (for example, within a radius of several meters or several tens of centimeters) as a device in the vicinity of the person 220.
- The device detection module 150 also detects devices in the vicinity of the wearable terminal 100 itself. Further, when there is no device in the vicinity of the person 220, the device detection module 150 determines that no device could be detected.
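- The proximity test in step S15 amounts to a great-circle distance filter over device positions; a minimal sketch with hypothetical device records:

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000.0 * math.asin(math.sqrt(a))

def detect_nearby_devices(person_pos, devices, radius_m=5.0):
    """Keep the devices whose reported GPS position lies within `radius_m`
    of the person's position, nearest first; an empty result corresponds to
    'no device could be detected'."""
    measured = [(haversine_m(person_pos, dev["pos"]), dev) for dev in devices]
    return [dev for dist, dev in sorted(measured, key=lambda m: m[0])
            if dist <= radius_m]
```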
- The device detection module 150 acquires device information (product number, manufacturer, specifications, SSID, etc.) of the detected device from the device itself (step S16). In step S16, the device detection module 150 acquires the device information stored in the detected device for use in the processing, described later, of causing the device to execute a predetermined action.
- The device detection module 150 may obtain the device information of the detected device from an external database or the like instead of directly from the device. For example, it may acquire an identifier that uniquely identifies the detected device and access an external database in which the device information corresponding to that identifier is registered. The device detection module 150 also need not acquire the device information at this timing; it may instead do so when generating the command, described later, for causing the device to execute a predetermined action.
- the voice receiving module 174 receives an input of voice from the user (step S17).
- the voice receiving module 174 collects voice from the user, and receives input of this voice.
- For example, the voice reception module 174 receives an input of the voice "Mr. Yamada, how are you?"
- The voice receiving module 174 determines whether an input of a person's name has been received (step S18). In step S18, the voice receiving module 174 performs voice recognition on the received voice to determine whether a person's name is included in it. When the voice receiving module 174 determines that no person's name has been received (NO in step S18), that is, when the voice does not contain a person's name, the process is repeated from the beginning.
- When the voice receiving module 174 determines in step S18 that an input of a person's name has been received (YES in step S18), that is, when a person's name is included in the voice, the person specifying module 172 determines whether the name accepted this time matches the name of a person in the specified image (step S19). In step S19, the person specifying module 172 determines whether the received person's name matches the person name included in the attribute information of the specified person.
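- Assuming the speech recognizer yields a plain transcript, the name check in steps S18-S19 reduces to a substring match against the identified persons' attribute records; a simplified sketch:

```python
def match_person_name(recognized_text, identified_persons):
    """Return the attribute record of the first identified person whose
    registered name appears in the recognized utterance, else None.
    `recognized_text` is assumed to already be a transcript."""
    for person in identified_persons:
        if person["name"] in recognized_text:
            return person
    return None
```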
- When the person specifying module 172 determines that the person names do not match (NO in step S19), the control module 175 generates a command for causing a device (speaker, smartphone, etc.) in the vicinity of the user (for example, within a radius of several meters or several tens of centimeters) to execute a predetermined action, such as notifying the user that there is no corresponding person in the field of view (step S20).
- When the control module 175 detects a plurality of devices in the vicinity of the user in step S20, it generates the command based on the device information of the device located closest to the user.
- For example, when the control module 175 detects a speaker in the vicinity of the user, it generates, based on the device information of this speaker, a voice notifying that there is no corresponding person (for example, "There is no corresponding person.") or a voice notifying who is in the field of view (for example, "There is no corresponding person. The persons in the surroundings are Mr. Tanaka, Mr. Koga, and Mr. Muto."), and generates a command to output this generated voice from the speaker.
- When the control module 175 cannot detect any device other than the wearable terminal 100 and a smartphone in the vicinity of the user in step S20, it generates a command for causing the smartphone to execute a predetermined action based on the smartphone's device information.
- For example, the control module 175 generates a command to place a call to this smartphone and, through the call function, to play the generated voice notifying that there is no corresponding person, or the voice notifying who is in the field of view.
- Alternatively, the control module 175 generates a message notifying that there is no corresponding person, or a message notifying who is in the field of view, transmits the generated message to the e-mail address of this smartphone or to the ID of a communication application installed on it, and generates a command to display the message.
- the control module 175 may generate the above-described command for each of a plurality of devices.
- The above-described command may also be generated for each of a plurality of different device types (for example, a speaker and a smartphone).
- connection module 151 transmits the generated command to the device (step S21).
- the connection module 151 enables data communication with the target device via a router or the like, and transmits this command.
- The connection module 151 may transmit the generated command to each device. At this time, the connection module 151 may shift the timing of transmitting the command according to the destination device. For example, it may transmit the command to the smartphone first and to the speaker after a predetermined time has elapsed, or conversely, it may transmit the command to the speaker first and to the smartphone after a predetermined time has elapsed. Also, the connection module 151 may first transmit a command to one device and, if there is no action from the user even after a predetermined time has elapsed, transmit a re-generated command to that device or to another device in the vicinity of the user.
- The device receives this command and performs a predetermined action according to it. That is, the voice is amplified only from the speaker near the user, the call is placed only to the smartphone owned by the user, and the message is displayed only on the smartphone owned by the user via e-mail or a communication application.
- After transmitting the command, the wearable terminal 100 ends the process.
- When the person specifying module 172 determines in step S19 that the person names match (YES in step S19), the control module 175 generates a command for causing a device (speaker, wearable terminal, smartphone, etc.) in the vicinity of this person (for example, within a radius of several meters or several tens of centimeters) to execute a predetermined action (step S22).
- When the control module 175 detects a plurality of devices in the vicinity of the person in step S22, it generates the command based on the device information of the device closest to the person.
- The predetermined action executed under the control module 175 is, for example, amplifying the voice uttered by the user from this device, converting the voice uttered by the user into text and displaying it as a message on this device via e-mail or a communication application, or acquiring vital data of the person from a device worn by the person.
- If the nearby device does not have the function required to execute the predetermined action, it may be caused to execute a substitutable action instead.
- For example, when the nearby device is a speaker, no smartphone is in the vicinity, and a call-related command is generated as the predetermined action, the speaker may amplify a ringtone or ringing tone that identifies the caller.
- Even when a wearable terminal is the closest device, it may lack the function needed to perform the predetermined action; when generating a command for an action other than those it supports (for example, acquisition of vital data), that device may be excluded from the transmission destinations.
- The input of a specific action or a specific keyword may be performed at the same time as the above-described person's name is received, or may be realized by another configuration.
- For example, when the control module 175 detects a speaker in the vicinity of the person in step S22, it generates, based on this speaker's device information, a command necessary for causing the speaker to amplify the received voice "Mr. Yamada, how are you?". Further, when the control module 175 detects a smartphone in the vicinity of the person, it generates a command necessary to place a call to the smartphone and enable the call function. Alternatively, when it detects a smartphone in the vicinity of the person, it converts the received voice into text, transmits the text as a message to the smartphone's e-mail address or to the ID of a communication application, and generates a command necessary to display the message "Mr. Yamada, how are you?". In addition, when a wearable terminal is detected in the vicinity of the person, the control module 175 generates a command necessary to acquire vital data and transmit it to the wearable terminal 100 or to a computer (not shown).
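- Step S22's choice of action per device type can be pictured as a simple dispatch on the acquired device information; the device-type strings, SSID field, and command layout below are illustrative only:

```python
def generate_command(device_info, utterance_text):
    """Pick an action suited to the detected device's type. The type
    strings, SSID field, and command fields are all hypothetical."""
    kind = device_info.get("type")
    if kind == "speaker":
        # Amplify the user's utterance from the nearby speaker.
        return {"target": device_info["ssid"], "action": "amplify",
                "payload": utterance_text}
    if kind == "smartphone":
        # Deliver the utterance as a message (a call command enabling the
        # call function would be an alternative).
        return {"target": device_info["ssid"], "action": "send_message",
                "payload": utterance_text}
    if kind == "wearable":
        # Request vital data from the worn device.
        return {"target": device_info["ssid"], "action": "report_vitals",
                "payload": None}
    return None  # no suitable function: caller may pick a substitutable action
```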
- The control module 175 may also generate a command for causing the device that best meets given conditions, among the nearby devices, to execute a predetermined action, based on a predetermined input (for example, designation or exclusion of a device) or a predetermined keyword (for example, a device name, a specific word, or a purpose). Such a device may be found by referring to a database in which devices are stored in advance in association with their purposes of use, conditions of use, and the like.
- The control module 175 may generate a command for causing the smartphone to execute a predetermined action when no device other than the smartphone can be detected in the vicinity.
- the control module 175 may generate the above-described command for each of a plurality of devices.
- The above-described command may also be generated for each of a plurality of different device types (for example, a speaker and a smartphone).
- The commands generated at this time each cause a predetermined action suited to the destination device (for example, amplifying voice from a speaker, or placing a call or displaying a message on a smartphone).
- connection module 151 transmits the generated command to the device (step S23).
- the connection module 151 enables data communication with the target device via a router or the like, and transmits this command.
- The connection module 151 may transmit the generated command to each device. At this time, the connection module 151 may shift the timing of transmitting the command according to the destination device. For example, it may transmit the command to the smartphone first and to the speaker after a predetermined time has elapsed, or conversely, it may transmit the command to the speaker first and to the smartphone after a predetermined time has elapsed.
- The connection module 151 may also first transmit a command to one device and, if there is no action from the person even after a predetermined time has elapsed, transmit a re-generated command to that device or to another device in the vicinity of the person.
- The device receives this command and performs a predetermined action according to it. That is, the voice is amplified only from the speaker near the person, the call is placed only to the smartphone owned by the person, and the message is displayed only on the smartphone owned by the person via e-mail or a communication application.
- the above-described means and functions are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program.
- the program is provided, for example, from a computer via a network (SaaS: Software as a Service).
- the program is provided in the form of being recorded on a computer-readable recording medium such as, for example, a flexible disk, a CD (such as a CD-ROM), and a DVD (such as a DVD-ROM or a DVD-RAM).
- the computer reads the program from the recording medium, transfers the program to an internal storage device or an external storage device, stores it, and executes it.
- the program may be recorded in advance in a storage device (recording medium) such as, for example, a magnetic disk, an optical disk, or a magneto-optical disk, and may be provided from the storage device to the computer via a communication line.
Abstract
Description
an identification means for analyzing the image to identify the person;
a detection means for detecting a device in the vicinity of the person;
a reception means for receiving a desired person's name from among the identified persons; and
a control means for causing a device present in the vicinity of the person matching the person's name to execute a predetermined action,
whereby a computer system characterized by comprising the above is provided.
An overview of a preferred embodiment of the present invention will be described with reference to FIG. 1. FIG. 1 is a diagram for explaining an overview of a device control system 1 according to a preferred embodiment of the present invention. The device control system 1 is a computer system composed of a wearable terminal 100. The wearable terminal 100 performs data communication with devices such as speakers, smartphones, and other wearable terminals (not shown) via a gateway terminal such as a router (not shown), a computer having a server function, or the like.
The system configuration of the device control system 1 according to a preferred embodiment of the present invention will be described with reference to FIG. 2. FIG. 2 is a diagram showing the system configuration of the device control system 1. The device control system 1 is a computer system composed of the wearable terminal 100.
The functions of the device control system 1 according to a preferred embodiment of the present invention will be described with reference to FIG. 3. FIG. 3 is a functional block diagram of the wearable terminal 100.
The device control process executed by the device control system 1 will be described with reference to FIGS. 4 and 5. FIGS. 4 and 5 are flowcharts of the device control process executed by the wearable terminal 100. The processing executed by each of the modules described above will be explained together with this process.
NO), that is, when it is determined that the voice does not contain a person's name, this process is repeated from the beginning.
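The branch above — checking whether a recognized utterance contains any known person's name, and looping back when it does not — can be sketched simply. The list of known names and the matching strategy (substring search) are assumptions for illustration; the patent does not prescribe a particular speech-analysis technique.

```python
def find_person_name(recognized_text, known_names):
    """Return the first known person's name contained in the recognized
    speech text, or None (the NO branch: repeat the process)."""
    for name in known_names:
        if name in recognized_text:
            return name
    return None
```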
Claims (6)
- An acquisition means for acquiring an image of a person;
an identification means for analyzing the image to identify the person;
a detection means for detecting a device in the vicinity of the person;
a reception means for receiving a desired person's name from among the identified persons; and
a control means for causing a device present in the vicinity of the person matching the person's name to execute a predetermined action,
a computer system characterized by comprising the above. - The detection means detects a speaker located near the person, and
the control means performs control so that sound is amplified only from the speaker located near the person matching the person's name,
the computer system according to claim 1, characterized by the above. - The detection means detects a smartphone owned by the person, and
the control means performs control so that a call is placed only to the smartphone owned by the person matching the person's name,
the computer system according to claim 1, characterized by the above. - The detection means detects a smartphone owned by the person, and
the control means performs control so that a mail is sent only to the smartphone owned by the person matching the person's name,
the computer system according to claim 1, characterized by the above. - A device control method executed by a computer system, comprising:
a step of acquiring an image of a person;
a step of analyzing the image to identify the person;
a step of detecting a device in the vicinity of the person;
a step of receiving a desired person's name from among the identified persons; and
a step of causing a device present in the vicinity of the person matching the person's name to execute a predetermined action,
a device control method characterized by comprising the above. - A computer-readable program for causing a computer system to execute:
a step of acquiring an image of a person,
a step of analyzing the image to identify the person,
a step of detecting a device in the vicinity of the person,
a step of receiving a desired person's name from among the identified persons, and
a step of causing a device present in the vicinity of the person matching the person's name to execute a predetermined action.
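The five claimed steps (acquire an image, identify the person, detect nearby devices, receive a desired name, command the matching person's device) can be sketched as a simple pipeline. All of the callables and the dictionary layout below are illustrative assumptions; the claims do not specify any particular recognition or detection technique.

```python
def control_device(image, identify, detect_nearby, requested_name, send):
    """Hypothetical sketch of the claimed device control method."""
    persons = identify(image)                 # identification means
    for person in persons:
        if person["name"] == requested_name:  # reception means matches the name
            for device in detect_nearby(person):      # detection means
                send(device, "predetermined_action")  # control means
            return True
    return False   # the requested name is not among the identified persons
```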
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019556442A JPWO2019106734A1 (ja) | 2017-11-29 | 2017-11-29 | コンピュータシステム、機器制御方法及びプログラム |
PCT/JP2017/042697 WO2019106734A1 (ja) | 2017-11-29 | 2017-11-29 | コンピュータシステム、機器制御方法及びプログラム |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2017/042697 WO2019106734A1 (ja) | 2017-11-29 | 2017-11-29 | コンピュータシステム、機器制御方法及びプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019106734A1 true WO2019106734A1 (ja) | 2019-06-06 |
Family
ID=66665464
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/042697 WO2019106734A1 (ja) | 2017-11-29 | 2017-11-29 | コンピュータシステム、機器制御方法及びプログラム |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2019106734A1 (ja) |
WO (1) | WO2019106734A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001357067A (ja) * | 2000-04-03 | 2001-12-26 | Konica Corp | 画像データ検索方法及びコンピュータ読み取り可能な記憶媒体 |
JP2003296268A (ja) * | 2002-04-02 | 2003-10-17 | Minolta Co Ltd | ネットワークシステム、コントローラ、およびプログラム |
JP2014010502A (ja) * | 2012-06-27 | 2014-01-20 | Shunji Sugaya | メッセージ送信システム、メッセージ送信方法、プログラム |
JP2017143588A (ja) * | 2017-05-25 | 2017-08-17 | 沖電気工業株式会社 | 情報処理装置、情報処理方法、プログラムおよびネットワークシステム |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014164374A (ja) * | 2013-02-22 | 2014-09-08 | Canon Inc | 情報表示システム、情報端末、サーバ装置、情報端末の制御方法、サーバ装置の制御方法、及びプログラム |
JP6528311B2 (ja) * | 2015-03-10 | 2019-06-12 | 成広 武田 | 行動サポート装置 |
-
2017
- 2017-11-29 WO PCT/JP2017/042697 patent/WO2019106734A1/ja active Application Filing
- 2017-11-29 JP JP2019556442A patent/JPWO2019106734A1/ja active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001357067A (ja) * | 2000-04-03 | 2001-12-26 | Konica Corp | 画像データ検索方法及びコンピュータ読み取り可能な記憶媒体 |
JP2003296268A (ja) * | 2002-04-02 | 2003-10-17 | Minolta Co Ltd | ネットワークシステム、コントローラ、およびプログラム |
JP2014010502A (ja) * | 2012-06-27 | 2014-01-20 | Shunji Sugaya | メッセージ送信システム、メッセージ送信方法、プログラム |
JP2017143588A (ja) * | 2017-05-25 | 2017-08-17 | 沖電気工業株式会社 | 情報処理装置、情報処理方法、プログラムおよびネットワークシステム |
Also Published As
Publication number | Publication date |
---|---|
JPWO2019106734A1 (ja) | 2020-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10212593B2 (en) | Context-related arrangements | |
US9792488B2 (en) | Adjacent person specifying apparatus, adjacent person specifying method, adjacent person specifying program, and adjacent person specifying system | |
US20140006513A1 (en) | Adjacent person specifying apparatus, adjacent person specifying method, adjacent person specifying program, and adjacent person specifying system | |
CN110536075B (zh) | 视频生成方法和装置 | |
US20150139508A1 (en) | Method and apparatus for storing and retrieving personal contact information | |
US20160277707A1 (en) | Message transmission system, message transmission method, and program for wearable terminal | |
US10657361B2 (en) | System to enforce privacy in images on an ad-hoc basis | |
US20200112838A1 (en) | Mobile device that creates a communication group based on the mobile device identifying people currently located at a particular location | |
CN105791325A (zh) | 图像发送方法和装置 | |
US20120242860A1 (en) | Arrangement and method relating to audio recognition | |
CN111406400B (zh) | 会议电话参与者标识 | |
KR101599165B1 (ko) | 무선 기기들을 연결하는 방법 및 시스템 | |
CN112119372B (zh) | 电子设备及其控制方法 | |
JP2019028559A (ja) | 作業分析装置、作業分析方法及びプログラム | |
US9843683B2 (en) | Configuration method for sound collection system for meeting using terminals and server apparatus | |
WO2019106734A1 (ja) | コンピュータシステム、機器制御方法及びプログラム | |
US11184184B2 (en) | Computer system, method for assisting in web conference speech, and program | |
JP6387205B2 (ja) | コミュニケーションシステム、コミュニケーション方法及びプログラム | |
JP5904887B2 (ja) | メッセージ送信システム、メッセージ送信方法、プログラム | |
KR102192019B1 (ko) | 배포 엔진 기반 콘텐츠 제공 방법 및 이를 사용하는 전자 장치 | |
JP7139839B2 (ja) | 情報処理装置、情報処理方法およびプログラム | |
US20230090122A1 (en) | Photographing control device, system, method, and non-transitory computer-readable medium storing program | |
US20230084625A1 (en) | Photographing control device, system, method, and non-transitory computer-readable medium storing program | |
JP2016106334A (ja) | メッセージ送信システム、メッセージ送信方法、プログラム | |
KR20170022272A (ko) | 녹음 시스템 및 녹음 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17933199 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2019556442 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/08/2020) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17933199 Country of ref document: EP Kind code of ref document: A1 |