CN107336246B - Personification system of endowment robot - Google Patents
- Publication number
- CN107336246B CN107336246B CN201710453058.5A CN201710453058A CN107336246B CN 107336246 B CN107336246 B CN 107336246B CN 201710453058 A CN201710453058 A CN 201710453058A CN 107336246 B CN107336246 B CN 107336246B
- Authority
- CN
- China
- Prior art keywords
- input information
- user input
- processing unit
- central processing
- module
- Prior art date
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/008—Manipulators for service tasks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The application discloses a personification system for an elderly-care robot, comprising a central processing unit and, in communication connection with it, a sound pickup, a camera, and first, second, and third storage modules; the central processing unit contains an information extraction module, a time screening module, and a counting module. According to the second time period set in the time screening module, the central processing unit extracts the user input information recorded in the third storage module during that period and has the counting module count and mark the user input information that occurs most frequently within it; the central processing unit likewise has the counting module count and mark the user input information that occurs most frequently within the first time period, and stores the marked user input information in the first storage module. The application enables the elderly-care robot to grow gradually in step with the elderly user's behavioral habits, giving each robot an individual character that better satisfies the living needs of the particular elderly person it serves.
Description
Technical Field
The invention relates to the field of robots, and in particular to a personification system for an elderly-care robot.
Background
As their physical functions decline, the elderly gradually become disconnected from society. People are social animals, and even elderly people whose movement and thinking have slowed still need social contact. Modern society, however, and most of the people in it, live at a fast pace that the elderly cannot follow; as a result, companionship for the elderly is increasingly lacking.
A robot generally refers to an electromechanical machine operated by a computer or an electronic program. Telepresence robots can move around their environment rather than being fixed at one physical location. At present, however, most such robots are used in industrial, military, and security settings; few are used for home medical services, and fewer still as medical service robots for the elderly.
To better serve the elderly, the applicant has developed an elderly-care robot. However, an existing robot of this kind is still only a machine with a high degree of automation and intelligence; it cannot yet be regarded as an intelligent companion capable of replacing a human attendant. Because such robots are configured in batches, their parameters and characteristics are initial values set according to the average condition of the elderly. Yet the daily habits of each elderly person differ under the influence of many factors, and so do their requirements for the robot. To serve the elderly better, and to let the robot become more suited to an individual's needs in the course of serving that person, the robot must be given a learning function: collecting information about the elderly person and growing through updates. In view of these circumstances, the applicant has developed a personification system for an elderly-care robot that adapts to the specific elderly person.
Disclosure of Invention
The invention aims to provide a personification system for an elderly-care robot, solving the problem that existing elderly-care robots, lacking a learning function, cannot adapt to each individual elderly person.
In order to solve the above problem, the following schemes are provided:
Scheme I: the personification system of the elderly-care robot comprises a central processing unit and, in communication connection with it, a sound pickup, a camera, a first storage module, a second storage module, and a third storage module; an information extraction module, a time screening module, and a counting module are arranged in the central processing unit;
the sound pickup records the voice information of the user speaking and transmits it to the central processing unit;
the camera captures image information, including the user's operation actions and face, and transmits it to the central processing unit;
the information extraction module stores a preset user input information extraction table, extracts user input information from the received image and sound information according to that table, and transmits the user input information to the third storage module;
the counting module counts how often each item of user input information occurs within a time period;
the time screening module is configured with a first, a second, and a third time period of successively increasing length, and screens and extracts the user input information falling within each period;
the third storage module stores all user input information;
the second storage module stores, updated in real time, the user input information that occurs most frequently within the second time period;
the first storage module stores, updated in real time, the user input information that occurs most frequently within the first time period;
the central processing unit, according to the second time period set in the time screening module, extracts the user input information recorded in the third storage module during that period and has the counting module count and mark the user input information that occurs most frequently within it; the central processing unit stores the marked user input information in the second storage module. The central processing unit then extracts, from the second storage module, the user input information falling within the first time period and has the counting module count and mark the user input information that occurs most frequently within it; the central processing unit stores this marked user input information in the first storage module;
the central processing unit enters the user input information held in the first storage module as marked entries in the user input information extraction table, and when it subsequently extracts user input information according to that table, it compares the marked entries first.
Definitions:
First time period: counted backward from the current day of use as the starting point; the most recent time period.
Second time period: counted backward from the current day of use as the starting point; the second time period contains the first.
Third time period: counted backward from the current day of use as the starting point; the third time period contains the second.
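Assuming each recognized user input is stored with a timestamp, the nested-window screening defined above can be sketched roughly as follows. The function names, the sample data, and the use of Python are all illustrative assumptions; the patent specifies behavior, not an implementation.

```python
from collections import Counter
from datetime import datetime, timedelta

FIRST, SECOND, THIRD = 7, 30, 60  # window lengths in days (Scheme VI);
                                  # THIRD bounds what the third module retains

def in_window(events, days, now):
    """Keep (timestamp, token) events from the last `days` days."""
    cutoff = now - timedelta(days=days)
    return [(t, tok) for (t, tok) in events if t >= cutoff]

def screen(events, days, now):
    """Keep only occurrences of the most frequent token(s) in the window."""
    recent = in_window(events, days, now)
    counts = Counter(tok for _, tok in recent)
    if not counts:
        return []
    best = max(counts.values())
    winners = {tok for tok, c in counts.items() if c == best}
    return [(t, tok) for (t, tok) in recent if tok in winners]

now = datetime(2017, 6, 15)
third_store = [                      # all recognized user inputs (third storage module)
    (now - timedelta(days=1), "play opera"),
    (now - timedelta(days=2), "play opera"),
    (now - timedelta(days=20), "play opera"),
    (now - timedelta(days=25), "play opera"),
    (now - timedelta(days=3), "open curtains"),
    (now - timedelta(days=4), "open curtains"),
    (now - timedelta(days=50), "call daughter"),
]

second_store = screen(third_store, SECOND, now)   # winners of the 30-day window
first_store = screen(second_store, FIRST, now)    # re-screened over the 7-day window
marked = sorted({tok for _, tok in first_store})  # entries marked into the table
print(marked)
```

Note how the cascade works: a habit must dominate the longer window before its recent occurrences are considered for the shorter one, which is why the screening runs from the second storage module rather than from the raw log.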
Principle and effects:
the invention shoots the operation action of a user and the face of the user through the camera to form image information, shoots the unsatisfactory limb action expressed by the old man on the robot in daily life, and transmits the shot limb action to the central processing unit as one of the original inputs; the sound information including the speaking sound of the old people is transmitted to the central processing unit as another original input through the sound pick-up, and the user input information in a format capable of being recognized by the robot is extracted from the central processing unit through the information extraction module. The central processing unit stores all the identified and extracted user input information into the third storage module, so that all the user input information representing the living habits of the old people can be stored, and the old people can be inquired later conveniently. Then, through a second time slot set in the time screening module, the user input information in the time slot in the third storage module is extracted, that is, all the user input information in a time slot closer to the screening time is extracted. The counting module counts the times of the extracted user input information, and the central processing unit transmits one or more pieces of user input information with the largest occurrence times to the second storage module. Therefore, the behavior habit which often appears in the old people in a period of time near the moment is extracted by inputting information by the user. 
In the second storage module, the time screening module extracts the user input information in the first time period, the counting module counts the occurrence frequency of each user input information in the time period, the central processing unit sends the user input information with the maximum occurrence frequency in the first time period to the first storage module for storage, and marks the user input information with the maximum occurrence frequency in the latest time period in the user input information extraction table. In the subsequent extraction of the user input information, the central processing unit will compare the marked user input information preferentially. Therefore, according to the difference of the behavior habits of each old man, the user input information for representing the behavior habits of the old man is different, so that the user input information marked in the user input information table is different, the central processing unit can quickly identify the original input of the old man including the voice information and the picture information according to the marked user input information, and the response speed of the robot can be improved. Meanwhile, the time of accompanying the old people with the robot is increased, the user input information extraction table of the robot can be continuously perfected and marked, the setting of each robot is more and more personalized, and the specific requirements of the old people can be met. The robot character formed by the anthropomorphic system is a building block type building mode through the plurality of storage modules, different user input information is formed by collecting different living habits of the old at ordinary times, and different characters or quality lattices of the robot are formed by combining different user input information. 
Therefore, the character or the quality of the robot generated by the anthropomorphic system can be changed after learning along with the change of the living habits of the old, and the old can also manually select the favorite character to set according to the habit of the robot.
The camera directly collects image information about the elderly person and the sound pickup collects sound information, providing the basis for the information extraction that follows. The information extraction module unifies the formats of the various raw inputs to the central processing unit, including image information, which helps the robot recognize the elderly person's behavioral habits from the sound and image information.
Because the three storage modules store and update the user input information extracted in each time period as it arrives, the intermediate information in each storage module can be inspected manually for checking and troubleshooting when a judgment error occurs.
The invention can accurately and promptly collect the elderly person's behavioral habits, translate them into user input information the robot can work with, and mark the most frequently occurring items in the user input information extraction table. The elderly-care robot thereby grows steadily accustomed to the elderly person's habits, responds to them faster, satisfies each person's individual needs better and better, and becomes a humanized, personalized professional companion.
Scheme II: further, a voice recognition module is arranged in the central processing unit, in which initial voice information of authorized persons, including the user and designated users, is preset; the voice recognition module compares received voice information with the initial voice information and starts the personification system of the elderly-care robot only when they match.
Voice recognition determines whether it is the elderly person, or a designated user close to them, who is addressing the elderly-care robot; only an authorized person can start and use it, so the specific information acquisition system on the robot begins working only for that person. At other times the personification system remains on standby, which saves energy and prevents unrelated people from operating it at will. The system is thus individually customized and specific, better suited to one-on-one companionship of the elderly.
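As a toy illustration only — real speaker verification would compare voiceprint embeddings or similar, which the patent does not detail — the Scheme II authorization gate might look like the following; the `voiceprint` function is a stand-in for the recognition step, and every name and sample here is hypothetical.

```python
# Preset "initial voice information" of authorized persons (hypothetical IDs).
ENROLLED = {"fp-elder-001", "fp-daughter-002"}

def voiceprint(audio: bytes) -> str:
    """Placeholder for feature extraction / speaker identification."""
    known = {b"elder-sample": "fp-elder-001", b"daughter-sample": "fp-daughter-002"}
    return known.get(audio, "fp-unknown")

def start_allowed(audio: bytes) -> bool:
    """Wake the personification system only for an enrolled speaker."""
    return voiceprint(audio) in ENROLLED

print(start_allowed(b"elder-sample"))   # authorized: system starts
print(start_allowed(b"stranger"))       # unauthorized: system stays on standby
```

The same gate shape applies to Scheme III, with face images in place of audio samples.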
Scheme III: further, a face recognition module is arranged in the central processing unit, in which initial face picture information of authorized persons, including the user and designated users, is preset; the face recognition module compares received image information with the initial face picture information and starts the personification system of the elderly-care robot only when they match.
Face recognition determines whether it is the elderly person, or an authorized person such as a designated user, who is interacting with the robot; only an authorized person can start and use it. At other times the personification system remains on standby, which saves energy and prevents unrelated people from operating it at will. The system is thus individually customized and specific, better suited to one-on-one companionship of the elderly. Face recognition also gives elderly people who cannot or do not speak a suitable way to start the robot.
Scheme IV: further, the central processing unit is also connected to an interface module and a fourth storage module; the interface module connects to an external input device, through which public input information containing various kinds of technical knowledge is written to the fourth storage module.
Just as a child must learn in order to grow up, the elderly-care robot must keep learning to become a more capable robot. Because public input information containing various kinds of technical knowledge enters the fourth storage module through the interface module, its content can be controlled by a human operator, preventing junk information from being fed to the robot and causing behavior unsuited to accompanying the elderly.
Scheme V: further, the central processing unit extracts the public input information in the fourth storage module over the third time period, the counting module counts how often each item of public input information occurs in that period, and the central processing unit sends the most frequent items to the third storage module; the central processing unit sends the public input information occurring most frequently in the third storage module over the second time period to the second storage module, and the public input information occurring most frequently in the second storage module over the first time period to the first storage module; at the same time, the central processing unit supplements the user input information extraction table with the public input information held in the first storage module.
Through this stage-by-stage screening of the public input information in the fourth storage module, the most popular items can be added to the user input information extraction table, so that the elderly-care robot can recognize and extract currently popular vocabulary.
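The Scheme V screening can be sketched as a cascade over progressively shorter windows (60, then 30, then 7 days), where only the most frequent survivors of each stage pass to the next storage module. The data layout, names, and window semantics below are illustrative assumptions, not details from the patent.

```python
# Public input items mapped to the ages (in days) of their occurrences.
fourth_store = {
    "phrase_a": [5, 12, 40, 55],
    "phrase_b": [2, 3, 6],
    "phrase_c": [45],
}

def survivors(buckets, days):
    """Keep the item(s) occurring most often within the last `days` days."""
    counts = {k: sum(1 for d in v if d <= days) for k, v in buckets.items()}
    best = max(counts.values())
    return {k: v for k, v in buckets.items() if counts[k] == best}

stage3 = survivors(fourth_store, 60)   # third period  -> third storage module
stage2 = survivors(stage3, 30)         # second period -> second storage module
stage1 = survivors(stage2, 7)          # first period  -> first storage module
print(sorted(stage1))                  # entries supplemented into the table
```

Note that `phrase_b`, although dominant in the last week, is eliminated at the 60-day stage; the cascade deliberately favors items that are popular over the long window and still current.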
Scheme VI: further, the first time period is seven days, the second thirty days, and the third sixty days.
Research suggests that about two months (sixty days) is the time needed to establish a new behavioral habit; setting the third time period to sixty days therefore captures new user input information formed during the habit-establishment stage and avoids missing user input information that represents a new habit. Setting the second period to thirty days (one month) and the first to seven days (one week) matches people's everyday ways of counting time, helping align the robot's learning with daily life.
Drawings
FIG. 1 is a logic diagram of an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below by way of specific embodiments:
Reference numerals in the drawings of the specification: central processing unit 1, voice recognition module 11, face recognition module 12, information extraction module 13, time screening module 14, counting module 15, sound pickup 21, camera 22, interface module 23, first storage module 31, second storage module 32, third storage module 33, fourth storage module 34.
The personification system of the elderly-care robot in this embodiment comprises a central processing unit 1 and, in communication connection with it, a sound pickup 21, a camera 22, an interface module 23, a first storage module 31, a second storage module 32, a third storage module 33, and a fourth storage module 34; the central processing unit 1 contains an information extraction module 13, a time screening module 14, a counting module 15, a face recognition module 12, and a voice recognition module 11;
the sound pickup 21 records the voice information of the user speaking and transmits it to the central processing unit 1;
the camera 22 captures image information, including the user's operation actions and face, and transmits it to the central processing unit 1;
the information extraction module 13 stores a preset user input information extraction table, extracts user input information from the received image and sound information according to that table, and transmits the user input information to the third storage module 33;
the counting module 15 counts how often each item of user input information occurs within a time period;
the time screening module 14 is configured with a first, a second, and a third time period of successively increasing length, and screens and extracts the user input information falling within each period;
the face recognition module 12 is preset with initial face picture information of authorized persons, including the user and designated users; it compares received image information with the initial face picture information and starts the personification system of the elderly-care robot only when they match;
the voice recognition module 11 is preset with initial voice information of authorized persons, including the user and designated users; it compares received voice information with the initial voice information and starts the personification system of the elderly-care robot only when they match;
the interface module 23 connects to an external input device, through which public input information containing various kinds of technical knowledge is written to the fourth storage module 34;
the fourth storage module 34 stores all public input information entered through the external input device;
the third storage module 33 stores all user input information;
the second storage module 32 stores, updated in real time, the user input information occurring most frequently within the second time period;
the first storage module 31 stores, updated in real time, the user input information occurring most frequently within the first time period;
the central processing unit 1, according to the second time period set in the time screening module 14, extracts the user input information recorded in the third storage module 33 during that period and has the counting module 15 count and mark the user input information that occurs most frequently within it; the central processing unit 1 stores the marked user input information in the second storage module 32. The central processing unit 1 then extracts, from the second storage module 32, the user input information falling within the first time period and has the counting module 15 count and mark the user input information that occurs most frequently within it; the central processing unit 1 stores this marked user input information in the first storage module 31.
The central processing unit 1 enters the user input information held in the first storage module 31 as marked entries in the user input information extraction table, and when it subsequently extracts user input information according to that table, it compares the marked entries first.
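The "mark and compare preferentially" behavior can be illustrated with a small sketch in which the extraction table is kept as an ordered list and marking an entry promotes it to the front, so lookups test habitual inputs before the factory defaults. The ordered-list representation and all entries are hypothetical, since the patent does not specify a data structure.

```python
# Factory-default extraction table (illustrative entries).
extraction_table = ["call daughter", "open curtains", "play opera", "take medicine"]

def mark(table, entry):
    """Promote a habitual entry so it is compared preferentially."""
    if entry in table:
        table.remove(entry)
    table.insert(0, entry)

def extract(table, recognized_text):
    """Return the first table entry found in the recognized raw input."""
    for pattern in table:            # marked entries are tested first
        if pattern in recognized_text:
            return pattern
    return None

mark(extraction_table, "play opera")
print(extraction_table[0])                              # the marked entry now leads
print(extract(extraction_table, "please play opera for me"))
```

Promoting frequent entries toward the front is what gives the claimed response-speed benefit: the common case is matched after fewer comparisons.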
Voice recognition determines whether it is the elderly person, or a designated user close to them, who is addressing the elderly-care robot; only an authorized person can start and use it, so the specific information acquisition system on the robot begins working only for that person. At other times the personification system remains on standby, which saves energy and prevents unrelated people from operating it at will. The system is thus individually customized and specific, better suited to one-on-one companionship of the elderly.
Face recognition likewise determines whether it is the elderly person or an authorized person such as a designated user who is interacting with the robot, and only an authorized person can start and use it. Face recognition also gives elderly people who cannot or do not speak a suitable way to start the robot.
Just as a child must learn in order to grow up, the elderly-care robot must keep learning to become a more capable robot. Because public input information containing various kinds of technical knowledge enters the fourth storage module 34 through the interface module 23, its content can be controlled by a human operator, preventing junk information from being fed to the robot and causing behavior unsuited to accompanying the elderly.
The central processing unit 1 extracts the public input information in the fourth storage module 34 over the third time period, the counting module 15 counts how often each item of public input information occurs in that period, and the central processing unit 1 sends the most frequent items to the third storage module 33; it sends the public input information occurring most frequently in the third storage module 33 over the second time period to the second storage module 32, and the public input information occurring most frequently in the second storage module 32 over the first time period to the first storage module 31; at the same time, the central processing unit 1 supplements the user input information extraction table with the public input information held in the first storage module 31.
Through this stage-by-stage screening of the information stored in the fourth storage module 34, the most popular public input information can be added to the user input information extraction table, so that the elderly-care robot can recognize and extract currently popular vocabulary.
In this embodiment, the first time period is seven days, the second thirty days, and the third sixty days.
Research suggests that about two months (sixty days) is the time needed to establish a new behavioral habit; setting the third time period to sixty days therefore captures new user input information formed during the habit-establishment stage and avoids missing user input information that represents a new habit. Setting the second period to thirty days (one month) and the first to seven days (one week) matches people's everyday ways of counting time, helping align the robot's learning with daily life.
In this embodiment, the camera captures the user's operation actions and face to form image information, including, for example, body movements with which the elderly person expresses dissatisfaction with the robot in daily life, and transmits it to the central processing unit as one raw input; the sound pickup transmits sound information, including the elderly person's speech, as another raw input, and the information extraction module in the central processing unit extracts from these inputs user input information in a format the robot can recognize. The central processing unit stores all recognized and extracted user input information in the third storage module, so that all user input information representing the elderly person's living habits is preserved and can be consulted later. Then, using the second time period set in the time screening module, it extracts the user input information from the third storage module that falls within that period, i.e. all user input information from the stretch of time closest to the moment of screening. The counting module counts occurrences of the extracted user input information, and the central processing unit transmits the item or items that occur most often to the second storage module. In this way, the user input information captures the behavioral habits the elderly person has exhibited most often in the recent period.
From the second storage module, the time screening module extracts the user input information within the first time period, the counting module counts how often each item occurs in that period, and the central processing unit sends the most frequent item to the first storage module for storage while marking it in the user input information extraction table. In subsequent extraction of user input information, the central processing unit compares the marked entries first. Because every elderly person's behavioral habits differ, the entries marked in the table differ; the central processing unit can therefore quickly recognize the elderly person's raw inputs, including voice and image information, against the marked entries, improving the robot's response speed. Moreover, as the robot spends more time accompanying the elderly person, its user input information extraction table is continuously refined and re-marked, so each robot's configuration becomes increasingly personalized and better matched to that person's specific needs.
The image information of the old man is directly collected through the camera in the embodiment, the sound information of the old man is collected through the sound pick-up, and a foundation is provided for the following information extraction. In the embodiment, the formats of various original input information including image information input into the central processing unit are unified through the information extraction module, so that the robot can identify the behavior habits of the old through the sound information and the image information.
In this embodiment, the user input information extracted in each time period is promptly stored and updated across the three storage modules, so that when a judgment error occurs, the intermediate information in each storage module can be searched manually for checking and troubleshooting.
The embodiment accurately and promptly collects the behavior habits of the elderly user, translates them into user input information the robot can process, and marks the most frequently occurring user input information in the user input information extraction table. The elderly-care robot thus becomes progressively accustomed to the elderly user's behavior habits, responds to them faster, better meets the differing needs of each elderly user, and becomes a humanized, personalized professional caregiver.
The foregoing is merely an embodiment of the present invention. Common general knowledge, such as well-known specific structures and characteristics, is not described here in detail; a person skilled in the art, before the filing date or the priority date, is aware of the ordinary technical knowledge in the field and has access to its routine experimental means, and can combine that knowledge with the present teachings to complete and implement the invention, with certain typical known structures or methods posing no obstacle to such implementation. It should be noted that a person skilled in the art can make several changes and modifications without departing from the structure of the present invention; these should also be regarded as falling within the protection scope of the present invention and do not affect the effect of implementing the invention or the practicability of the patent. The scope of protection of the present application shall be determined by the contents of the claims, and the detailed description in the specification may be used to interpret the contents of the claims.
Claims (4)
1. A personification system of an endowment robot, characterized in that: the system comprises a central processing unit, and a sound pickup, a camera, a first storage module, a second storage module and a third storage module which are each in communication connection with the central processing unit; an information extraction module, a time screening module and a counting module are arranged in the central processing unit;
the sound pick-up is used for recording the voice information of the user speaking and transmitting the voice information to the central processing unit;
the camera is used for shooting image information including user operation actions and face information and transmitting the image information to the central processing unit;
the information extraction module is prestored with a user input information extraction table and is used for extracting user input information from the received image information and sound information according to the user input information extraction table and transmitting the user input information to the third storage module;
the counting module is used for counting the occurrence frequency of each user input information in a time period;
the time screening module is respectively provided with a first time period, a second time period and a third time period, wherein the time period ranges are sequentially increased; the system is used for screening and extracting user input information in each time period;
the third storage module is used for storing all user input information;
the second storage module is used for updating and storing the user input information with the largest occurrence frequency in a second time period in real time;
the first storage module is used for updating and storing the user input information with the largest occurrence frequency in the first time period in real time;
the central processing unit extracts the user input information in the second time period in the third storage module according to the second time period set in the time screening module, and the central processing unit enables the counting module to count and mark the user input information with the largest occurrence frequency in the second time period; the central processing unit stores the marked user input information to the second storage module; the central processing unit extracts the user input information in the first time period in the second storage module, and the central processing unit enables the counting module to count and mark the user input information with the largest occurrence frequency in the first time period; the central processing unit stores the marked user input information to the first storage module;
the central processing unit marks the user input information in the first storage module into a user input information extraction table, and preferentially compares the marked user input information when the central processing unit extracts the user input information according to the user input information extraction table;
the image information comprises limb actions through which the elderly user expresses dissatisfaction to the robot in daily life; the central processing unit extracts, through the information extraction module, user input information in a format the robot can recognize; the user input information is used for representing the living habits of the elderly user;
the central processing unit is also connected with an interface module and a fourth storage module; the interface module is used for being connected with external input equipment and updating public input information containing various technical knowledge to the fourth storage module through the interface module;
the central processing unit extracts the public input information in the fourth storage module in a third time period, the counting module counts the occurrence frequency of each public input information in the time period, and the central processing unit sends the public input information with the maximum occurrence frequency to the third storage module; the central processing unit sends the public input information with the maximum occurrence frequency in the third storage module in the second time period to the second storage module; the central processing unit sends the public input information with the largest occurrence frequency in the first time period in the second storage module to the first storage module; and simultaneously, the central processing unit supplements the public input information in the first storage module to the user input information extraction table.
2. The personification system of an endowment robot according to claim 1, wherein: a voice recognition module is arranged in the central processing unit, and initial voice information of authorized persons including a user and a designated user is preset in the voice recognition module; the voice recognition module compares the received voice information with the initial voice information, and starts the personification system of the endowment robot when the voice information is the same as the initial voice information.
3. The personification system of an endowment robot according to claim 1, wherein: a face recognition module is arranged in the central processing unit, and initial face picture information of authorized persons including a user and a designated user is preset in the face recognition module; the face recognition module compares the received image information with the initial face picture information, and starts the personification system of the endowment robot when the image information is the same as the initial face picture information.
4. The personification system of an endowment robot according to claim 1, wherein: the first time period is seven days, the second time period is thirty days, and the third time period is sixty days.
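The three-tier window cascade defined in claim 1, with the periods of claim 4 (seven, thirty and sixty days), can be sketched as follows. This is an illustrative Python sketch under stated assumptions: the `cascade` and `most_frequent` names and the pre-filtered input lists are inventions of this sketch (window filtering itself is elided), and each tier is reduced to keeping only its most frequent item.

```python
from collections import Counter

# Window lengths per claim 4 (days); third is the widest window.
WINDOWS = {"third": 60, "second": 30, "first": 7}

def most_frequent(items):
    """Return the most frequent item, or None for an empty window."""
    if not items:
        return None
    counts = Counter(items)
    return counts.most_common(1)[0][0]

def cascade(inputs_60d, inputs_30d, inputs_7d):
    """Promote each window's leader into its storage tier.

    The lists are assumed to be already screened by window: the 60-day
    leader goes to the third storage module, the 30-day leader to the
    second, and the 7-day leader to the first (where it is marked in
    the extraction table).
    """
    return {
        "third": most_frequent(inputs_60d),
        "second": most_frequent(inputs_30d),
        "first": most_frequent(inputs_7d),
    }
```

Narrowing the window at each tier is what lets the first storage module track the user's most recent habit while the wider tiers retain longer-term context.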
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710453058.5A CN107336246B (en) | 2017-06-15 | 2017-06-15 | Personification system of endowment robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107336246A CN107336246A (en) | 2017-11-10 |
CN107336246B true CN107336246B (en) | 2021-04-30 |
Family
ID=60219977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710453058.5A Active CN107336246B (en) | 2017-06-15 | 2017-06-15 | Personification system of endowment robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107336246B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108124008A (en) * | 2017-12-20 | 2018-06-05 | 山东大学 | A kind of old man under intelligent space environment accompanies and attends to system and method |
CN109119076B (en) * | 2018-08-02 | 2022-09-30 | 重庆柚瓣家科技有限公司 | System and method for collecting communication habits of old people and users |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1318454A (en) * | 2000-03-31 | 2001-10-24 | 索尼公司 | Robot device and action determining method of robot device |
US20110231017A1 (en) * | 2009-08-03 | 2011-09-22 | Honda Motor Co., Ltd. | Robot and control system |
CN106182032A (en) * | 2016-08-24 | 2016-12-07 | 陈中流 | One is accompanied and attended to robot |
CN205942833U (en) * | 2016-07-15 | 2017-02-08 | 杭州国辰机器人科技有限公司 | Can be used to intelligent system of robot among estate management |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106709254B (en) | A kind of medical diagnosis robot system | |
CN106462384B (en) | Based on multi-modal intelligent robot exchange method and intelligent robot | |
US11551103B2 (en) | Data-driven activity prediction | |
CN108351986A (en) | Learning system, learning device, learning method, learning program, training data generating means, training data generation method, training data generate program, terminal installation and threshold value change device | |
CN111496802A (en) | Control method, device, equipment and medium for artificial intelligence equipment | |
CN104462245B (en) | A kind of user's online preference data recognition methods | |
CN108983979B (en) | Gesture tracking recognition method and device and intelligent equipment | |
CN107336246B (en) | Personification system of endowment robot | |
CN114707562B (en) | Electromyographic signal sampling frequency control method and device and storage medium | |
CN107193382A (en) | Intelligent wearable device and it is automatic using sensor come the method for allocative abilities | |
CN107193836B (en) | Identification method and device | |
DE102012010627A1 (en) | Object detecting and measuring system i.e. gesture detecting system, for detecting gesture parameters of man machine interface during control of e.g. computer, has unit executing steps of feature-extraction and emission computation | |
CN105945949A (en) | Information processing method and system for intelligent robot | |
CN111197841A (en) | Control method, control device, remote control terminal, air conditioner, server and storage medium | |
CN108647229B (en) | Virtual person model construction method based on artificial intelligence | |
US20240047039A1 (en) | System and method for creating a customized diet | |
CN107451559A (en) | Parkinson's people's handwriting automatic identifying method based on machine learning | |
Pardasani et al. | Enhancing the ability to communicate by synthesizing american sign language using image recognition in a chatbot for differently abled | |
CN108091335A (en) | A kind of real-time voice translation system based on speech recognition | |
CN110362190B (en) | Text input system and method based on MYO | |
CN114550183B (en) | Electronic equipment and error question recording method | |
CN107283435B (en) | Specific information collection system of endowment robot | |
CN116127006A (en) | Intelligent interaction method, language ability classification model training method and device | |
DE112018007850T5 (en) | Speech recognition system | |
DE102021006546A1 (en) | Method for user-dependent operation of at least one data processing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||