WO2005022461A1 - Electronic device and response information output method in electronic device - Google Patents
Electronic device and response information output method in electronic device
- Publication number
- WO2005022461A1 (PCT/JP2004/012863)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- image
- response information
- electronic device
- response
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
Definitions
- The present invention relates to an electronic device that makes a response based on image information and can simulate a virtual character, and to a response information output method for such an electronic device.
- As electronic devices that communicate with a virtual character, those described in Patent Document 1 and Patent Document 2 have been proposed.
- The virtual pet device described in Patent Document 1 responds to external stimuli such as voices and images; it analyzes the stimuli to identify the person who provided them, and responds according to that identification and the degree of recognition.
- The user recognition growth system described in Patent Document 2 applies to interactive machines such as robot devices for toys, games, consumer and industrial use, navigation devices, vending machines, and automatic reception devices. It compares recognized user information with stored user information, calculates a degree of user recognition, and selects an action according to the calculated degree.
- In Patent Document 1, the response is determined by whether the person who input the external voice or image is a registrant and, if so, by the degree of recognition. Therefore, persons other than the registrant either cannot communicate with the virtual character at all or can do so only in a limited, simple way, and the owner has little fun using the device with friends. In addition, responses tend to be uniform because the device does not respond to the content of the input voice or image itself.
- Patent Document 1: International Publication No. 00/53281 Pamphlet
- Patent Document 2: Japanese Patent Laid-Open No. 2001-51970
- The present invention has been made in view of the above circumstances. Its purpose is to provide an electronic device capable of simulating a virtual character with which persons other than the user can also communicate, and a response information output method, so that the device can be enjoyed even when the user uses it with other people.
- An electronic device of the present invention makes a response based on image information, and includes image input means for inputting image information and registration dictionary storage means for storing a registration dictionary including the user's face image data or facial feature data.
- The registration dictionary includes partial area image data or feature data for each facial part, and the response information generating means generates response information based on the similarity between the image information for each facial part extracted from the input image information and the registration dictionary. According to the present invention, since response information corresponding to the image information for each facial part extracted from the input image information is generated, a variety of responses can be made to the input image.
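The per-part comparison described above might be sketched as follows. This is a minimal illustration, not the patent's implementation: the feature representation, the cosine-similarity metric, the threshold, and all names (`REGISTRATION_DICT`, `generate_response`) are assumptions introduced here.

```python
import math

# Hypothetical per-part feature vectors standing in for the "partial area
# image data or feature data" of the registration dictionary.
REGISTRATION_DICT = {
    "eyes":  [0.40, 0.55, 0.30],
    "nose":  [0.20, 0.70, 0.10],
    "mouth": [0.35, 0.25, 0.60],
}

def similarity(a, b):
    """Cosine similarity between two feature vectors (one possible metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def generate_response(input_parts, dictionary, threshold=0.9):
    """Remark on the facial part least similar to the registered dictionary."""
    scores = {part: similarity(vec, dictionary[part])
              for part, vec in input_parts.items() if part in dictionary}
    worst = min(scores, key=scores.get)  # part with the lowest similarity
    if scores[worst] < threshold:
        return f"Something about your {worst} is different from usual today!"
    return "You look just like yourself today."
```

Because the response depends on which part scores lowest, different input images naturally yield varied remarks, which is the effect the text describes.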
- The electronic device of the present invention further includes character simulation means for simulating a virtual character; the character simulation means generates simulated response information simulating the behavior of the character based on the response information, and the output means performs output based on the simulated response information. According to the present invention, a response simulating a virtual character is performed, so that communication with the character can be enjoyed.
- The response information generation unit compares the registration dictionary with the input image information, determines whether the input image information indicates the user's image, and generates different response information depending on the determination result.
- Thus, a person other than the user can also communicate with the virtual character, and the device can be enjoyed even when used with other people.
- The electronic device may be configured so that, when it is determined that the input image information indicates the user's image, the user's face image data or facial feature data included in the registration dictionary is updated based on the input image information. According to the present invention, since the registration dictionary is updated to reflect the latest face image data each time the user uses the device, a more accurate response can be performed.
- The electronic device of the present invention includes one in which the response information includes information on the facial part of the input image that has low similarity to the user's partial region image data or feature data for that part. According to the present invention, since a response points out a part that differs from the user's usual face, the user can enjoy the response.
- When the input image information is not determined to indicate the user's image, the response information includes information related to the facial part for which the similarity between the partial area image data or feature data of the input image and the partial area image data or feature data included in the registration dictionary is high. According to the present invention, a response can point out a partly similar person and part even for input from someone other than the user, so the device can be enjoyed when used with other people. By storing image data or feature data of the user and of people around the user as the registration dictionary, further responses can be enjoyed.
- the registration dictionary includes image data or feature data downloaded from a data providing server via a network.
- According to the present invention, face images of a large number of people, including celebrities such as TV personalities, can easily be stored as the registration dictionary.
- the electronic device includes an electronic device in which the response information generating unit generates the response information using information downloaded from a data providing server via a network. According to the present invention, it is possible to appropriately select a response information generation algorithm, and it is possible to enjoy a response that is more varied.
- the electronic device according to the present invention includes an electronic device in which the downloaded response information can be updated. According to the present invention, the response information generation algorithm can be changed periodically or at the request of the user, and a variety of responses can be enjoyed.
- the image input unit includes an image photographing unit.
- According to the present invention, image information can be input easily. For example, when a face enters the field of view of an image capturing device such as a camera, the captured image can be input automatically and a response output according to the input image information, so that multiple users can enjoy the device together.
- the output unit includes an image display unit. According to the present invention, a response can be output easily.
- The response information output method of the present invention is a response information output method in an electronic device that makes a response based on image information, and includes an image input step for inputting image information, a registration dictionary storage step for storing a registration dictionary including the user's face image data or facial feature data, and a response information generation step for generating response information using the input image information input in the image input step and the registration dictionary.
- the registration dictionary includes partial region image data or feature data for each part of the face,
- the response information generation step generates response information based on the similarity between the image information for each part of the face extracted from the input image information and the registration dictionary. It is.
- The electronic device includes character simulation means for simulating a virtual character, and the character simulation means simulates the character's behavior based on the response information generated in the response information generation step.
- The response information generation step compares the registration dictionary with the input image information, determines whether or not the input image information indicates an image of the user, and generates different response information according to the determination result.
- In the response information output method of the present invention, when it is determined that the input image information indicates the user's image, the user's face image data or facial feature data included in the registration dictionary is updated based on the input image information.
- the program of the present invention is a program for executing each step in the above-described response information output method using a computer.
- According to the present invention, it is possible to provide an electronic device and a response information output method with which a person other than the user can communicate with the virtual character and which can be enjoyed even when the user uses the device with other people.
- FIG. 1 is a diagram showing a schematic configuration of a camera-equipped mobile phone according to an embodiment of the present invention.
- FIG. 2 is a diagram showing an example of user face information items registered in the user face information database of the camera-equipped mobile phone according to the embodiment of the present invention.
- FIG. 3 is a diagram showing a configuration example of an arbitrary face information data base of the camera-equipped mobile phone according to the embodiment of the present invention.
- FIG. 4 is a diagram showing an example of items stored in the person attribute table of the camera-equipped mobile phone according to the embodiment of the present invention.
- FIG. 5 is a diagram showing a schematic operation flow in the case of performing a response according to input image information in the camera-equipped cellular phone according to the embodiment of the present invention.
- FIG. 6 is a diagram showing a display example of the display unit when performing a response according to input image information in the camera-equipped mobile phone according to the embodiment of the present invention.
- FIG. 7 is a diagram showing an example of information stored in the response database of the camera-equipped mobile phone according to the embodiment of the present invention.
- 1 is a control unit
- 2 is a ROM
- 3 is a RAM
- 4 is a nonvolatile memory
- 5 is an imaging unit
- 6 is a display unit
- 7 is an operation unit
- 10 is an internal bus
- 21 is an antenna
- 30 is an audio processing unit
- 31 is a microphone
- 32 is a speaker
- 100 is a mobile phone with a camera.
- the applied electronic device is a camera-equipped mobile phone, but the application target is not limited to a camera-equipped mobile phone.
- FIG. 1 is a diagram showing a schematic configuration of a camera-equipped cellular phone that is an electronic apparatus according to an embodiment of the present invention.
- The cellular phone 100 includes a control unit 1, ROM 2, RAM 3, nonvolatile memory 4, imaging unit 5, display unit 6, operation unit 7, internal bus 10, communication unit 20, antenna 21, audio processing unit 30, microphone 31, and speaker 32.
- the control unit 1 controls the overall operation of the mobile phone 100 and is mainly composed of a processor (not shown) that executes a predetermined program.
- the control unit 1 controls the exchange of data and commands via the internal bus 10 between the elements of the mobile phone 100. Further, as will be described in detail later, the control unit 1 has a function of generating response information using input image information and the dictionary image data stored in the nonvolatile memory 4. Furthermore, the control unit 1 has a function of simulating a virtual character. When outputting an image simulating the behavior of the character, it is output via the display unit 6, and when outputting a sound simulating the behavior of the character, it is output via the voice processing unit 30 and the speaker 32.
- the ROM 2 stores programs executed by the processor constituting the control unit 1 and various data used by the mobile phone 100.
- RAM 3 is a memory that temporarily stores data, and is also used as work memory when executing various processes by the control unit 1.
- the non-volatile memory 4 is composed of, for example, an EEPROM, stores a registration dictionary and response information template, which will be described later, and is also used for various data files when the user uses the camera-equipped mobile phone 100.
- the imaging unit 5 includes an optical system such as a lens, an imaging device, an image processing unit (all not shown), and the like, and outputs digital image data based on a captured image signal.
- the imaging unit 5 is the same as that provided in a conventional camera-equipped mobile phone, and the operation in the normal imaging mode is also the same.
- the through image in the shooting mode is displayed on the display unit 6.
- When the shutter button of the operation unit 7 is operated, digital image data based on the image signal at that time is temporarily stored in the RAM 3, and if saving is instructed from the operation unit 7, the data is stored in the nonvolatile memory 4. Since camera-equipped mobile phones that perform such photographing operations are well known, detailed description thereof is omitted.
- The imaging unit 5 is also used as image input means for inputting image information in a game mode, described later, in which a response is made based on image information. In this mode, the device is often operated while looking at the display screen of the display unit 6, so the lens of the imaging unit 5 is preferably directed toward the display surface side of the display unit 6. This can be realized by providing multiple imaging units 5, one of which shoots toward the display surface side of the display unit 6, or by making the shooting direction of the imaging unit 5 variable and setting the display surface side of the display unit 6 as the shooting direction in the game mode.
- the display unit 6 displays various types of information of the cellular phone 100, and includes a liquid crystal display panel that performs display and a display control circuit (none of which is shown) that drives the liquid crystal display panel.
- the operation unit 7 is used for a user to input commands and data for operating the cellular phone 100, and includes a numeric keypad for inputting a telephone number and various data, various function keys, and the like. These keys have different functions depending on the operation mode, and also have a function of a shutter button and a zoom button in a normal shooting mode, and a function of a shooting image input instruction key in a game mode to be described later. It is also used for data input to communicate with a virtual character simulated by the control unit 1.
- The communication unit 20 connected to the antenna 21 performs wireless communication with the outside: it places transmission data on a carrier wave and transmits it from the antenna 21, and demodulates reception data received by the antenna 21.
- If the demodulated data is audio data, it is sent to the audio processing unit 30.
- Otherwise, under the control of the control unit 1, it is sent to the control unit 1, RAM 3, nonvolatile memory 4, etc. via the internal bus 10.
- The transmission data is input directly from the audio processing unit 30 or from other elements via the internal bus 10.
- The audio processing unit 30 converts the audio signal input from the microphone 31 into digital data and outputs it as transmission data to the communication unit 20, and converts received data (audio data) output from the communication unit 20 into an analog audio signal and outputs it to the speaker 32.
- The digital data based on the audio signal from the microphone 31 can also be sent to the control unit 1 and the like via the internal bus 10, and digital data input via the internal bus 10 can be converted into an audio signal and output to the speaker 32.
- This cellular phone 100 has not only a voice call but also a camera function using the imaging unit 5, a data communication function using the communication unit 20, and a game function. These functions can be selectively operated by operating predetermined keys on the operation unit 7. Since the voice call function, camera function, and data communication function are the same as the conventional ones, explanations are omitted.
- the game function includes a function to enjoy a response based on the input image information.
- an output simulating the action of a virtual character is performed. Since the registration information stored in advance in the nonvolatile memory 4 is used for generating the response information, the registration dictionary will be described first.
- the registration dictionary includes a user face information database and an optional face information database.
- the user face information database stores user face information of the cellular phone device 100, and includes partial area image data for each part of the user's face or feature data for each face part.
- Figure 2 shows examples of the user face information items registered in the user face information database. The data registered under these items is generated by analyzing the user's face image input in advance. For the eyes, nose, mouth, ears, eyebrows, etc., vertex coordinates can be used; for the outline, the coordinates of multiple points on the contour; and for the hairstyle, data that distinguishes the hair area from other areas by binary values.
- The element layout data registers the relative positions of typical elements such as the eyes, nose, and mouth.
- the user face information may be a reduced image instead of the vertex coordinates.
- an image obtained by cutting out and reducing an image around the eyes so as to include the eyes may be used as the user face information.
- Here, the user means one or more persons whose facial image data or facial feature data has been registered in advance as a user. Registration is performed by inputting the user's face image captured by the imaging unit 5 and analyzing it with the control unit 1. It is preferable to register data from a plurality of face images; for this purpose, as described later, the analysis results of image data determined to be the user when the game function is used may also be used. When multiple face images are used, their average values may be registered, or their distribution may be registered.
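The averaging of multiple face images described above can be sketched as follows. The patent only says that an average or a distribution may be registered; keeping the per-dimension mean and spread, and the names `register_user` and `face_samples`, are assumptions of this sketch.

```python
from statistics import mean, pstdev

def register_user(face_samples):
    """Build per-part registration data from several captured face images.

    Each sample maps a facial part to a feature vector. Both the mean and
    the spread across samples are kept, matching the text's suggestion
    that an average or a distribution may be registered.
    """
    dictionary = {}
    for part in face_samples[0]:
        vectors = [sample[part] for sample in face_samples]
        dims = list(zip(*vectors))  # group the same dimension across samples
        dictionary[part] = {
            "mean":   [mean(d) for d in dims],
            "spread": [pstdev(d) for d in dims],
        }
    return dictionary
```

The same routine could fold in images accepted during the game mode, which is how the dictionary update described earlier would accumulate recent appearances of the user.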
- the optional face information database stores optional face information.
- Figure 3 shows an example of the configuration of the arbitrary face information database. Any person can be registered in the optional face information database. For example, a face image of a user's acquaintance or a celebrity face image such as a talent may be used. It may also contain user face images.
- The optional face information database may be created using face image data photographed by the imaging unit 5, or using face image data downloaded from a data providing server (not shown). The optional face information database itself may also be downloaded. In the case of downloading, it is assumed that the cellular phone 100 can be connected to the data providing server via a network.
- The nonvolatile memory 4 is also provided with a person attribute table that stores attribute data of each person in association with a person ID.
- Figure 4 shows an example of items stored in the person attribute table.
- the data of the person attribute table can be used to generate response information based on input image information described later.
- the data of the person attribute table is stored together with the registration of the optional face information database.
- a response database described later may be stored in association with the person ID. In this case, it is preferable to download the response database when providing the optional face information database to be registered.
- FIG. 5 is a diagram showing a schematic operation flow when a response is made according to input image information.
- the operation of FIG. 5 is controlled by the program of the control unit 1.
- First, a face image is taken using the imaging unit 5, and the taken image information is input (step S502).
- Since the control unit 1 of the cellular phone 100 has a function of simulating a virtual character, images as shown in FIGS. 6 (a) and 6 (b) are displayed on the display unit 6 to prompt the user to operate.
- In this example, the character simulated by the control unit 1 is a cat; before the game mode is set, an image of the cat sleeping or playing freely is displayed as shown in Fig. 6 (a).
- When the game mode is set in step S501, the character faces the front and asks a question as shown in FIG. 6 (b). At this point the imaging unit 5 is ready to shoot, and the through image from the imaging unit 5 is displayed on the display unit 6.
- the imaging unit 5 performs imaging, and the obtained image information is input and stored in the RAM 3.
- the character simulated by the control unit 1 may be a model of a real creature such as a cat, a model of an imaginary creature, or a model of an inanimate object such as a robot. Since a technique for displaying an image simulating such a character is well known in various game devices, a description thereof will be omitted. In addition, messages from characters may be displayed not only by characters but also by voice. Of course, the character data may be simply displayed to prompt the input of the image without using the character simulation technique.
- Next, image information for each part of the face is extracted from the input image information (step S503) and compared with the user face information in the user face information database (step S504), and it is determined whether or not the input image information indicates the user (step S505).
- Various methods can be used for this determination. For example, the degree of similarity for each facial part and the degree of similarity of the arrangement of the main elements can be obtained and combined into a score with appropriately determined weighting.
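The weighted combination of per-part similarity and element-layout similarity might look like the following sketch. The text only says the weighting is "determined as appropriate", so the weights, threshold, and the name `is_user` are illustrative assumptions.

```python
def is_user(part_similarities, layout_similarity,
            part_weight=0.7, layout_weight=0.3, threshold=0.8):
    """Combine per-part similarity scores and the similarity of the
    arrangement of main elements into one weighted score, then compare
    it with a threshold to decide whether the input shows the user."""
    part_score = sum(part_similarities.values()) / len(part_similarities)
    score = part_weight * part_score + layout_weight * layout_similarity
    return score >= threshold, score
```

Returning the score alongside the decision lets the caller apply the refinement mentioned below, e.g. updating the dictionary only when the similarity is high.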
- In step S506, the user face information database is updated as necessary. This update process can be skipped, or may be performed only when the similarity is high.
- In step S507, response information is generated.
- The response information here is generated according to the result of comparing the image information for each part of the face extracted from the input image information with the user face information in the user face information database.
- Response information corresponding to the comparison result can be generated using information stored in the response database as shown in FIG.
- If the response information stored in the response database uses information on the facial part where the similarity between the input image information and the user face information is low, the difference from the user's usual face can be pointed out, making the response enjoyable.
- the response database is stored in advance in a non-volatile memory.
- Information downloaded from the data providing server may be used as the data stored in the response database.
- The response information generated in step S507 is output as image data to the display unit 6 (step S508).
- Figure 6 (c) shows an example of a response when the input image information is determined to indicate the user but the similarity of the hairstyle is particularly low.
- the response information based on the input image information may be output not only using an image simulating a virtual character, but also output audio information. If the cellular phone 100 does not have a function of simulating a virtual character, it may be simply displayed as text information on the display unit 6, or may be output as audio to the speaker 32.
- After the response information is output, it is determined whether or not to continue the game (step S509). If the game is to be continued, the process returns to step S502 and image information is input again. If it is determined in step S505 that the input image information does not indicate the user, the image information for each part of the face extracted from the input image information is compared with the arbitrary face information in the arbitrary face information database (step S510). In step S511, response information corresponding to the comparison result is generated; in this case, the arbitrary face information with the highest similarity in the comparison is selected, and information related to that facial part and that person is used.
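The selection of the best-matching arbitrary face information can be sketched as below. This is a minimal sketch under assumptions: the database layout (`{person_id: {part: vector}}`), the injected similarity function, and the name `best_arbitrary_match` are not from the patent.

```python
def best_arbitrary_match(input_parts, arbitrary_db, similarity):
    """Scan an arbitrary-face database {person_id: {part: vector}} and
    return the (person_id, part, score) with the highest per-part
    similarity, i.e. the material for a "your X looks like person Y"
    response. Returns (None, None, sentinel) if nothing matches."""
    best = (None, None, float("-inf"))
    for person_id, parts in arbitrary_db.items():
        for part, vec in parts.items():
            if part in input_parts:
                s = similarity(input_parts[part], vec)
                if s > best[2]:
                    best = (person_id, part, s)
    return best
```

The returned person ID is then the key used for the attribute lookup described next.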
- A person attribute table as shown in FIG. 4 is referred to using the person ID corresponding to the selected arbitrary face information, and the person's attributes are obtained.
- Response information is generated using the information on the facial part and the person's attributes. For example, if the selected arbitrary face information is eye image data of Talent A in the eye image database, a response sentence such as "Your eyes look like A's" is generated. Also, when the similarity to one of the registered user face information entries is high, a response sentence such as "You look a lot like Mr. X (user name)" can be generated.
- a template for generating a response sentence is prepared in advance in the response database. It is also possible to prepare a dialect response database and generate dialect response sentences. A response database that mimics the tone of talent and comic characters may also be prepared.
- If the shooting date and time are held as attributes of the person corresponding to the arbitrary face information, responses such as "Your eyes are similar to Talent A's from ○ years ago" or "You look like the you of ○ years ago" may be made.
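The template-based sentence generation described above might be sketched as follows. The response database of FIG. 7 and the person attribute table of FIG. 4 are not disclosed in detail, so the template strings, attribute fields, and the name `make_response` are all illustrative.

```python
# Hypothetical response templates standing in for the response database.
TEMPLATES = {
    "arbitrary_match": "Your {part} look like {name}'s!",
    "user_match":      "{user}, you look just like yourself.",
}

# Stand-in for the person attribute table keyed by person ID.
PERSON_ATTRIBUTES = {
    1: {"name": "Talent A", "occupation": "singer"},
}

def make_response(kind, **fields):
    """Fill a response template, resolving a person ID to its attributes."""
    if "person_id" in fields:
        fields = {**fields, **PERSON_ATTRIBUTES[fields.pop("person_id")]}
    return TEMPLATES[kind].format(**fields)
```

Swapping the `TEMPLATES` table for a downloaded one is how the dialect or character-tone response databases mentioned above could be realized without changing the generation code.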
- the present invention can be used for an electronic device that makes a response based on image information, an electronic device that makes a response based on image information and can simulate a virtual character.
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04772812A EP1662437A1 (en) | 2003-09-01 | 2004-08-30 | Electronic device and method for outputting response information in electronic device |
US10/569,989 US20070003140A1 (en) | 2003-09-01 | 2004-08-30 | Electronic device and method for outputting response information in electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-308617 | 2003-09-01 | ||
JP2003308617A JP2005078413A (ja) | 2003-09-01 | 2003-09-01 | 電子機器及び電子機器における応答情報出力方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005022461A1 true WO2005022461A1 (ja) | 2005-03-10 |
Family
ID=34269520
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/012863 WO2005022461A1 (ja) | 2003-09-01 | 2004-08-30 | 電子機器及び電子機器における応答情報出力方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20070003140A1 (ja) |
EP (1) | EP1662437A1 (ja) |
JP (1) | JP2005078413A (ja) |
CN (1) | CN1846227A (ja) |
WO (1) | WO2005022461A1 (ja) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4544026B2 (ja) * | 2005-05-11 | 2010-09-15 | オムロン株式会社 | 撮像装置、携帯端末 |
US7697827B2 (en) | 2005-10-17 | 2010-04-13 | Konicek Jeffrey C | User-friendlier interfaces for a camera |
US8300953B2 (en) * | 2008-06-05 | 2012-10-30 | Apple Inc. | Categorization of digital media based on media characteristics |
JP2010134910A (ja) * | 2008-11-07 | 2010-06-17 | Fujifilm Corp | ペット画像検出システムおよびその動作制御方法 |
JP5956860B2 (ja) * | 2012-07-09 | 2016-07-27 | キヤノン株式会社 | 画像処理装置、画像処理方法、プログラム |
WO2017043132A1 (ja) | 2015-09-08 | 2017-03-16 | 日本電気株式会社 | 顔認識システム、顔認識方法、表示制御装置、表示制御方法および表示制御プログラム |
JP6433928B2 (ja) * | 2016-02-15 | 2018-12-05 | 株式会社東芝 | 検索装置、検索方法および検索システム |
JP7435908B2 (ja) | 2021-04-09 | 2024-02-21 | 日本電気株式会社 | 認証システム、処理方法及びプログラム |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1173092A (ja) * | 1997-08-28 | 1999-03-16 | Omron Corp | 仮想ペット飼育装置、方法及びプログラム記録媒体 |
JP2000137818A (ja) * | 1998-11-04 | 2000-05-16 | Ntt Data Corp | パターン認識方式 |
JP2001057641A (ja) * | 1999-08-17 | 2001-02-27 | Nec Corp | 画像入力装置とその制御方法、画像表示装置、画像表示方法、及びプログラム供給媒体 |
JP2003058888A (ja) * | 2001-08-15 | 2003-02-28 | Secom Co Ltd | 個人照合装置 |
JP2003271934A (ja) * | 2002-03-18 | 2003-09-26 | Toshiba Corp | 顔画像認識システム及び顔画像認識方法 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001266151A (ja) * | 2000-03-17 | 2001-09-28 | Toshiba Corp | 個人識別装置および個人識別方法 |
JP4314016B2 (ja) * | 2002-11-01 | 2009-08-12 | 株式会社東芝 | 人物認識装置および通行制御装置 |
JP2004178163A (ja) * | 2002-11-26 | 2004-06-24 | Matsushita Electric Ind Co Ltd | 画像処理方法及びその装置 |
- 2003-09-01: JP2003308617A (patent JP2005078413A) filed, active Pending
- 2004-08-30: US10/569,989 (patent US20070003140A1) filed, not active, Abandoned
- 2004-08-30: EP04772812A (patent EP1662437A1) filed, not active, Withdrawn
- 2004-08-30: PCT/JP2004/012863 (patent WO2005022461A1) filed, not active, Application Discontinuation
- 2004-08-30: CNA2004800250069A (patent CN1846227A) filed, active Pending
Non-Patent Citations (2)
Title |
---|
SAITO K. ET AL.: "Keitai camera de "kao" o ninshiki - eyematic" [Recognizing "faces" with a mobile phone camera - Eyematic], SOFTBANK ITMEDIA INC., 20 August 2002 (2002-08-20), XP002985664, Retrieved from the Internet <URL:http://www.itmedia.co.jp/mobile/0208/20/n-eyematic.html> [retrieved on 20040924] * |
SHIRANE M. ET AL.: "Omron, kao no gazo o ninshiki shite ninso uranai nado o suru service" [Omron launches services such as face-reading fortune-telling based on face image recognition], IMPRESS CORPORATION, 6 July 2001 (2001-07-06), XP002985663, Retrieved from the Internet <URL:http://k-tai.impress.co.jp/cda/article/news_toppage/5206.html> [retrieved on 20040624] * |
Also Published As
Publication number | Publication date |
---|---|
EP1662437A1 (en) | 2006-05-31 |
US20070003140A1 (en) | 2007-01-04 |
CN1846227A (zh) | 2006-10-11 |
JP2005078413A (ja) | 2005-03-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11605193B2 (en) | Artificial intelligence-based animation character drive method and related apparatus | |
CN111191599B (zh) | Posture recognition method, apparatus, device, and storage medium | |
RU2293445C2 (ru) | Method and device for performing a rearing simulation in a mobile terminal | |
EP2355009A2 (en) | Terminal and method for providing augmented reality | |
CN111760265A (zh) | Operation control method and apparatus | |
CN113508369A (zh) | Communication support system, communication support method, communication support program, and image control program | |
CN109091869A (zh) | Action control method and apparatus for virtual object, computer device, and storage medium | |
CN111290568A (zh) | Interaction method, apparatus, and computer device | |
CN110555507B (zh) | Interaction method and apparatus for virtual robot, electronic device, and storage medium | |
CN110794964A (zh) | Interaction method and apparatus for virtual robot, electronic device, and storage medium | |
CN109819167A (zh) | Image processing method, apparatus, and mobile terminal | |
CN112669846A (zh) | Interaction system, method, apparatus, electronic device, and storage medium | |
WO2005022461A1 (ja) | Electronic device and response information output method in electronic device | |
CN112669416B (zh) | Customer service system, method, apparatus, electronic device, and storage medium | |
JP6796762B1 (ja) | Virtual person dialogue system, video generation method, and video generation program | |
CN117271749A (zh) | Method and computer for creating a non-player character in a metaverse scene | |
JP2005078590A (ja) | Face matching system |
EP1662438A1 (en) | Electronic device having user authentication function | |
CN115068940A (zh) | Control method for virtual object in virtual scene, computer device, and storage medium | |
JP6491808B1 (ja) | Game program and game device | |
CN108525307B (zh) | Game implementation method, apparatus, storage medium, and electronic device | |
KR101068941B1 (ko) | Personal character service method for a mobile communication terminal and the mobile communication terminal therefor | |
JP6792658B2 (ja) | Game program and game device | |
JP2001209779A (ja) | Virtual creature system and pattern learning method in virtual creature system | |
JP6889192B2 (ja) | Game program and game device |
Legal Events
Code | Title | Description
---|---|---
WWE | Wipo information: entry into national phase | Ref document number: 200480025006.9 Country of ref document: CN
AK | Designated states | Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW
AL | Designated countries for regional patents | Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
WWE | Wipo information: entry into national phase | Ref document number: 2004772812 Country of ref document: EP
WWE | Wipo information: entry into national phase | Ref document number: 2007003140 Country of ref document: US Ref document number: 10569989 Country of ref document: US
WWP | Wipo information: published in national office | Ref document number: 2004772812 Country of ref document: EP
WWW | Wipo information: withdrawn in national office | Ref document number: 2004772812 Country of ref document: EP
WWP | Wipo information: published in national office | Ref document number: 10569989 Country of ref document: US