CN115186148A - Man-machine interaction system and method for realizing digital immortality - Google Patents


Info

Publication number
CN115186148A
CN115186148A (application CN202210702927.4A)
Authority
CN
China
Prior art keywords
user
module
question
information
voice
Prior art date
Legal status
Pending
Application number
CN202210702927.4A
Other languages
Chinese (zh)
Inventor
潘晓明 (Pan Xiaoming)
Current Assignee
Xinxing Technology Hangzhou Co ltd
Original Assignee
Xinxing Technology Hangzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Xinxing Technology Hangzhou Co ltd
Priority to CN202210702927.4A
Publication of CN115186148A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/903 Querying
    • G06F16/9032 Query formulation
    • G06F16/90332 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a human-computer interaction system and method for realizing digital immortality. The system comprises a recording and analyzing unit, an interaction unit, and a reply and play unit. The recording and analyzing unit collects, analyzes, organizes, and stores the personal information of user A to form a personal database. The interaction unit, based on the personal database and other information, mainly interacts with user B through the reply and play unit when user A is not online. The reply and play unit generates and plays the related voice and video. The system can realize the digital immortality of user A, systematically record and analyze user A's personal information, automatically generate user A's autobiography, and provide user B, through 7 interaction modules, with functions such as information question answering, comfort assistance, scene chatting, success and happiness suggestions, health suggestions, and worship and blessing.

Description

Man-machine interaction system and method for realizing digital immortality
Technical Field
The invention relates to human-computer interaction systems, and in particular to a human-computer interaction system and method for realizing digital immortality.
Background
Existing chat robots converse with users based on the results of big-data training, so their answers are generic, one-size-fits-all answers rather than the answers of a specific person. Because such systems do not model each person's individual experiences, health conditions, preferences, and the like, no prior art can simulate a specific person interacting with a user from that person's perspective, including answering personal questions and giving the user life advice and health advice from the standpoint of a relative or friend; the range of interaction is therefore limited.
From the perspective of digital immortality, ordinary people have no way to live on, and relatives cannot converse with the deceased. At present, no system can record a person's memories and attitudes, including personal life events, preferences, and the like, and carry out language interaction with other users from that person's perspective (including the person's own original voice and video) to realize the person's digital immortality. Nor can any system automatically generate a person's autobiography: autobiographies are all written manually, cannot converse or interact with readers, and serve only a single function.
In addition, according to statistics from the Ministry of Education of the People's Republic of China, about 30% of people in China do not speak Mandarin, and of the remaining 70%, only about 10% can communicate fluently in relatively standard Mandarin. China also has a large number of dialects, each pronounced differently from place to place, and every speaker's tone and speaking speed differ. Replacing a person's voice with synthesized Mandarin is therefore unrealistic: the user will feel that the digital user A is not like the real user A. Because Chinese dialects are numerous and men, women, old, and young differ in timbre, tone, and speech rate, computer simulation (synthesis) of an individual's voice either gives a poor interactive experience or is very costly.
Therefore, existing digital interactive systems cannot realize digital immortality, offer only limited functionality, and provide a poor interactive experience.
Disclosure of Invention
The invention aims to provide a human-computer interaction system and method for realizing digital immortality. The system can realize the digital immortality of user A, systematically record and analyze user A's personal information, automatically generate user A's autobiography, and provide user B, through 7 interaction modules, with functions such as information question answering, comfort assistance, scene chatting, success and happiness suggestions, health suggestions, and worship and blessing.
The technical scheme of the invention is as follows: a human-computer interaction system for realizing digital immortality comprises a recording and analyzing unit, an interaction unit, and a reply and play unit. The recording and analyzing unit collects, analyzes, organizes, and stores the personal information of user A to form a personal database. Except for the online chat module, the interaction unit uses the reply and play unit to interact with user B through its different modules, based on the personal database and other information, when user A is not online. The reply and play unit generates and plays the related voice and video.
In the aforementioned human-computer interaction system for realizing digital immortality, the recording and analyzing unit includes a personal information collection module, which in turn includes a question-and-answer collection submodule, a social media information collection submodule, and a submodule for collecting information outside social media. The latter comprises a first recall and interaction unit that uses time, important life events and experiences, and life content as clues, and a second recall and interaction unit that uses photos and videos as clues. The personal information collection module systematically collects personal information with user A's permission.
In the aforementioned human-computer interaction system for realizing digital immortality, the question-and-answer collection submodule includes a question-and-answer information collection submodule, an original sound collection submodule, and an original image collection submodule. The question-and-answer information collection submodule collects user A's personal information and original voice, with the user's consent, by asking personal-information questions through software or manually on a hardware terminal. The original sound collection submodule creatively addresses cultural taboos and users' differing actual circumstances: during collection, user A's original voice is recorded separately, mainly word by word, and during playback the collected recordings are spliced together according to the reply content. The original image collection submodule collects original images of user A, including images of user A speaking and not speaking under different expressions, as well as other images of daily life and work.
In the aforementioned human-computer interaction system for realizing digital immortality, the recording and analyzing unit further comprises a personal information analysis module and an autobiography generation module. The personal information analysis module analyzes and organizes the collected personal information to form, and store, a relatively complete personal database; the autobiography generation module automatically generates user A's autobiography from the personal database of the personal information analysis module.
In the aforementioned human-computer interaction system for realizing digital immortality, the interaction unit includes an information question-answering module, a comfort assistance module, a scene chatting module, a success and happiness suggestion module, a health suggestion module, a worship and blessing module, and an online chat module. The information question-answering module answers personal-information questions about user A, as well as non-personal questions, posed by user B, using the personal information held by the recording and analyzing unit and an existing database. The comfort assistance module simulates user A giving verbal comfort to user B when user B needs comforting. The scene chatting module simulates chat interaction between user A and user B in different scenarios. The success and happiness suggestion module gives user B positive suggestions. The health suggestion module gives user B health advice based on the collected causes of death, diseases, habits, and hobbies of user B and of user B's relatives and friends. The worship and blessing module provides user B with an electronic, intelligent worship function. The online chat module lets user B communicate with user A by text, voice, or video when user A is online.
In the aforementioned human-computer interaction system for realizing digital immortality, the reply and play unit includes a reply generation module, an active play module, a short video analysis and calling module, a voice synthesis module, and a video synthesis module. The reply generation module answers in any one or more of the following modes: original voice, original short video, mixed original voice and short video, and synthesized voice. The active play module lets the system play user A's personal information automatically. The short video analysis and calling module analyzes the emotion to be conveyed and calls up related short videos of user A according to user B's question and the text of user A's answer. The voice synthesis module and the video synthesis module use user A's original voice and video to synthesize, through artificial intelligence, voice and video close to user A's.
In the aforementioned human-computer interaction system for realizing digital immortality, the system further comprises a login unit, a unit for protecting users' individual privacy, and a physical carrier for the whole system.
An interaction method of the human-computer interaction system for realizing digital immortality comprises the following steps (a routing sketch follows the steps):
Step one: user B logs in to the system and selects a module of the interaction unit with which to interact.
Step two: if user B selects the worship and blessing module, interaction proceeds according to that module's interaction method. If user B selects any other interaction module, the system checks whether user A is currently online: if user A is online, the online chat module is entered; if user A is not online, the module selected by user B is entered for interaction.
Step three: the reply and play unit replies through the reply generation module, the active play module, the short video analysis and calling module, the voice synthesis module, or the video synthesis module.
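The step-one/step-two routing can be condensed into a few lines. The following Python fragment is a minimal sketch for illustration only; the module keys, the dispatch function, and the online check are assumptions, since the patent describes no concrete API.

```python
# Illustrative sketch of steps one and two; all names are assumed.

def dispatch(selected_module: str, user_a_online: bool) -> str:
    """Route user B's module selection per steps one and two above."""
    if selected_module == "worship_and_blessing":
        # The worship and blessing module runs by its own interaction
        # method regardless of whether user A is online.
        return "worship_and_blessing"
    if user_a_online:
        # Any other selection is redirected to live chat while user A is online.
        return "online_chat"
    # Otherwise enter the module user B selected (question answering,
    # comfort assistance, scene chatting, suggestions, etc.).
    return selected_module

# Example: dispatch("information_qa", user_a_online=False) -> "information_qa"
```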
In the aforementioned interaction method, if user B selects the information question-answering module, the interaction method of that module includes the following steps (a code sketch follows the steps):
s1, reading the name of a user B, searching the person in a relatives and friends list of the user A, and determining the character relationship between the user A and the user B;
s2, obtaining corresponding terms and sentences through the character relation;
s3, the user B provides a question, the system judges whether the question provided by the user B is a personal question or a non-personal question, if the question is the personal question, the step S4 is executed, otherwise, the step S5 is executed;
s4, extracting keywords related to personal information in the personal questions, and if the keywords are not extracted, requesting the user B to say once again, change a question or change a question; if the keywords are extracted and relevant information can be found in the personal database according to the keywords, corresponding answers are made according to the found information; if the keywords are extracted but the related information cannot be found in the personal database according to the keywords, the system generates synonyms, related words, upper concept words and lower concept words of the keywords, searches the related information from the personal database, and returns the result if the related information is found; if the relevant information cannot be found, prompting the user B to: the user A does not input the information or change a question when establishing the digital person;
s5, extracting keywords in the non-personal question, and if the keywords in the question are not extracted, requesting the user B to speak again, change an inquiry method or change a question; if the keyword is extracted, searching an answer from the existing database according to the keyword, and if the answer is searched, returning a result; if the answer is not found, the system generates synonyms, related words of the keywords, upper concept words and lower concept words of the keywords, searches the answer again from the existing database, and returns the result if the answer is found; if no answer is found, the system answers user B: the question is difficult and indicates that learning needs to continue or user B is requested to change a question or provide a possible answer to learn.
In the aforementioned interaction method, the reply method of the reply generation module is as follows (a condensed sketch follows the cases):
When user B chooses to interact with a simulation of user A:
Case 1: the content returned by the interaction unit for the question is a photo, a song, or a short video of user A to be played:
Case 1.1: the user A database stores the corresponding photo, song, or short video, which is then played directly;
Case 1.2: the user A database has no corresponding photo, song, or short video, so the system reports that there is no result;
Case 2: the content returned by the interaction unit is text to be played, denoted Q-T; the type of the text is judged further:
Case 2.1: if Q-T is text for which user A's original voice, or original video and audio, exists (the recording and analyzing unit records original audio while querying user A for personal information), the original recording or original footage is played; at the same time the short video analysis and calling module is started to play related short videos;
Case 2.2: if Q-T was formed by automatically combining, screening, and splicing words and sentences spoken by user A, the automatically spliced voice of user A is played; at the same time the short video analysis and calling module is started to play related short videos;
Case 2.3: if Q-T mixes text generated by the system (including the interaction unit) with words and sentences spoken by user A (including spliced words and sentences), the following cases are distinguished:
Case 2.3.1: if the voice synthesized by the voice synthesis module is very close to user A's real voice, the Q-T content is played in the first person using the artificially synthesized voice; at the same time the short video analysis and calling module is started to play related short videos;
Case 2.3.2: if the voice synthesized by the voice synthesis module cannot approximate user A's real voice, user A's sex and age are obtained from the recording and analyzing unit, and the voice synthesis module plays a voice matching user A's sex and close to user A's age; for the parts consisting of words and sentences spoken by user A (including spliced sentences), case 2.2 is triggered; at the same time the short video analysis and calling module is started to play related short videos;
Case 2.4: if Q-T contains no words of user A and consists entirely of text generated by the system (including the interaction unit), then:
Case 2.4.1: if the voice synthesized by the voice synthesis module is close to user A's real voice, proceed as in case 2.3.1;
Case 2.4.2: if the voice synthesized by the voice synthesis module cannot approximate user A's real voice, proceed as in case 2.3.2;
If user B selects the third-party mode for interaction:
A video of a third party, such as an electronic inquirer, is played first, including the third party's voice. Before user A's original voice or original image needs to be played, user B is asked whether to play it: 1) if user B allows it, the related content is played to the extent user B permits; 2) if user B does not allow user A's original image and original voice to be played, the third party relays all content returned by the system (including the interaction unit), and only the third party's video and voice are played.
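The case analysis reduces to choosing a playback mode from a few flags. The following condensed sketch is an assumption-laden illustration: the Reply data model, the flag names, and the return strings do not come from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reply:
    media: Optional[str] = None            # case 1: photo / song / short video
    text: Optional[str] = None             # case 2: the Q-T text content
    has_original_recording: bool = False   # case 2.1
    is_spliced_only: bool = False          # case 2.2
    contains_user_a_words: bool = False    # distinguishes case 2.3 from 2.4

def choose_playback(reply: Reply, synth_close_to_user_a: bool) -> str:
    if reply.media is not None:                 # case 1.1 (1.2: report no result)
        return f"play stored {reply.media}"
    if reply.has_original_recording:            # case 2.1
        return "play original recording + related short video"
    if reply.is_spliced_only:                   # case 2.2
        return "play spliced original voice + related short video"
    if synth_close_to_user_a:                   # cases 2.3.1 / 2.4.1
        return "play AI-synthesized voice of user A (first person) + short video"
    # Cases 2.3.2 / 2.4.2: fall back to a gender/age-matched synthetic voice,
    # splicing user A's own recorded words where they exist (case 2.2).
    return "play gender/age-matched synthetic voice + short video"
```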
In the aforementioned interaction method, the short video analysis and calling module is used as follows (a sketch follows the steps):
S1: according to the conversation between user B and user A, use keywords, an artificial neural network, a support vector machine, or another classification method to decide and classify the emotion of the pending reply. The decision and classification result covers whether the digital person should be speaking or not, and the emotion category in either case; the emotion categories include calm, happy, angry, sad, and surprised;
S2: if the video synthesis module can synthesize a video of user A whose content includes a mouth shape matching what user A's digital person says, together with the emotion decision and classification result obtained in S1, play the artificially synthesized video of user A; if the video synthesis module cannot synthesize a video of user A, or the result is not good enough, go to S3;
S3: according to the emotion decision and classification result obtained in S1, call up the corresponding original short video of user A stored by the recording and analyzing unit when replying to user B: 1) if the reply requires speech, call up an original short video of user A speaking calmly, happily, angrily, sadly, or with surprise, according to the emotion category of the reply text; 2) if the reply requires no speech, call up an original short video of user A not speaking, in the calm, happy, angry, sad, or surprised state matching the emotion category of the reply;
S4: based on user A's own descriptions of the short videos, if the keywords in user B's question and their related concepts activate the description of a corresponding short video, or user B says he or she wants to see other videos of user A, play that short video.
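A compact sketch of S1-S3 follows. The keyword table stands in for the artificial neural network or support vector machine named in S1, and the return strings merely label which asset would be played; all names are illustrative.

```python
EMOTIONS = ("calm", "happy", "angry", "sad", "surprised")

# Toy keyword-to-emotion table standing in for an ANN/SVM classifier (S1).
KEYWORD_EMOTIONS = {
    "congratulations": "happy",
    "sorry": "sad",
    "how dare": "angry",
    "unbelievable": "surprised",
}

def classify_emotion(reply_text: str) -> str:
    text = reply_text.lower()
    for cue, emotion in KEYWORD_EMOTIONS.items():
        if cue in text:
            return emotion
    return "calm"

def pick_video(reply_text: str, speaking: bool, can_synthesize: bool) -> str:
    emotion = classify_emotion(reply_text)              # S1
    if can_synthesize:                                  # S2: lip-synced synthesis
        return f"synthesized video of user A ({emotion}, mouth shape matched)"
    kind = "speaking" if speaking else "non-speaking"   # S3: stored original clip
    return f"original short video of user A ({emotion}, {kind})"
```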
Compared with the prior art, the invention has the beneficial effects that:
the system can realize digital perpetual of the user A, systematically record and analyze personal experiences, personal preferences, attitudes and the like of the user A and automatically generate self-passing of the user A, and provide functional experiences such as information inquiry, comfort and pray, success and happiness suggestions and the like for the user B through 7 interaction modules.
The recording and analyzing unit collects, analyzes, organizes, and stores user A's personal information, establishes user A's information database, and, on a voluntary basis and with privacy protection, collects and enriches the personal information in real time and in multiple ways, so the information gathered is comprehensive. In terms of what the recording and analyzing unit does, the system's user can be anyone who wishes to leave a record of their memories or to help others leave one, including people nearing the end of life, relatives of the deceased, patients with terminal illnesses, ordinary people, and people in high-risk occupations (e.g., soldiers and police). Patients with Alzheimer's disease (senile dementia) can also use the system in the early stage of the illness to record their memories and experiences. For an individual's family, descendants, and friends, this digital copy can help them better understand the life, choices, thinking, preferences, emotions, and attitudes of the person recorded (for example, an ancestor). Such digital copies can help friends and family better understand the lives of their relatives and friends. On a larger scale, this accumulation of knowledge is significant historical evidence of ordinary people and their lives, beliefs, thoughts, and preferences at a given place and time. The system thus adds a new method of knowledge accumulation to society, allowing ordinary people and their lives, beliefs, thoughts, and preferences to be recorded.
The reply and play unit replies based on user A's personal information and answers the questions of all kinds of users B as far as possible. It can interact with user B in many forms: directly playing user A's original recordings and videos, automatically splicing and editing user A's original voice or original short videos, or mixing these with synthesized voice. This improves reply quality, reduces cost, and gives a good interactive experience. In contrast to traditional diaries, biographies, and the like, the system lets user B hold a dialogue with user A's digital person, so the record left behind can interact vividly with user B.
The success and happiness suggestion module in the interaction unit gives user B positive suggestions for setbacks and failures in life, work, and study, following the empathy and similarity principles of psychology.
The worship and blessing module in the interaction unit can comfort and console the user through animation and voice in the identity of the user's relative, and can automatically recognize the user's actual worship behavior through computer vision, improving the user experience and sense of realism.
The health suggestion module in the interaction unit incorporates the illnesses of the user's relatives into its health advice, which makes the advice more realistic.
The comfort assistance module in the interaction unit can comfort the user in the identity of the user's relative, whereas other products give comfort in the role of a robot or a stranger, so a better comforting effect is achieved.
Drawings
FIG. 1 is a system block diagram of the present invention.
Fig. 2 is a block diagram of a recording and analyzing unit.
Fig. 3 is a schematic diagram of a plastic label without a photo of user A.
Fig. 4 is a schematic diagram of a plastic label with user A's photo and name.
Fig. 5 is a schematic illustration of the physical carrier being a crystal.
Fig. 6 is a schematic illustration of the physical carrier being a diamond.
Fig. 7 is a schematic illustration of the physical carrier being a genealogy booklet.
Fig. 8 is a schematic view of a two-dimensional code attached to user A's gravestone or cinerary urn.
Fig. 9 is a schematic illustration of the physical carrier being a digital photo frame or a mobile phone.
Fig. 10 is a flow chart of user confirmation and access based on the physical carrier.
FIG. 11 is a block diagram of a cloud storage design for a digital photo frame.
FIG. 12 is a block diagram of a local storage design for a digital photo frame.
Detailed Description
The present invention is further illustrated by the following examples, which are not to be construed as limiting the invention.
The embodiment is as follows:
referring to fig. 1 to 12, a human-computer interaction system for implementing digital perpetuation includes a recording and analyzing unit, an interaction unit, and a replying and playing unit.
1 Recording and analyzing unit
The recording and analyzing unit collects, analyzes, organizes, and stores user A's personal information, forming a personal database from which user A's digital person, also called the "digital me", is generated. User A is a user who wants to achieve digital immortality, that is, the immortality of an ordinary person in the digital world: the person's personal information lives on in a digital virtual world in the form of a database.
Personal information is divided into 8 parts: basic personal information (name, native place, etc.); major personal experiences (important life events); family and major interpersonal relationships; personal hobbies; personal dreams, regrets, and hopes; personal attitudes and summaries of experience; personal needs and personality; and personal photos, sounds, videos, and other valuable personal information. Each part corresponds to a form, and each form contains a number of fields to fill in.
Based on user A's personal information, specific questions can also be generated using an Internet database to further enrich user A's database.
The recording and analyzing unit also allows a user to create a digital person who is not the user, provided the portrait rights and privacy of the other person are not infringed; once created, the digital person can be used by the creator or, after approval, published as an example digital person.
The recording and analyzing unit comprises a personal information collection module, a personal information analysis module, and an autobiography generation module.
1.1 Personal information collection module
The personal information collection module comprises a question-and-answer collection submodule, a social media information collection submodule, and a submodule for collecting information outside social media, and is used to collect personal information systematically.
1.1.1 Question-and-answer collection submodule
The question-and-answer collection submodule collects information as follows: 1) it asks about user A's personal preferences, information about relatives and friends (their names, user A's feelings toward them, their photos or short videos, etc.), audio and video records of user A (short videos, songs user A sings, etc.), user A's attitudes and personality, and other personal information (name, sex, height, profession, or skills), and has user A upload the corresponding photos or short videos; 2) it presents commonly used words and sentences, including blessings and short phrases (such as "yes", "no", "like", "dislike"), through the software, so that user A speaks them in his or her most frequently used language (dialect or Mandarin) for recording; 3) it records short videos of user A speaking and not speaking, and so on. User A can continually update and add to this content.
1.1.1.1 Question-and-answer information collection submodule
The question-and-answer collection submodule includes a question-and-answer information collection submodule, which records user A's personal information and original voice, with the user's consent, via software on a hardware terminal, manual input, or consultation by telephone or WeChat (especially when user A speaks a dialect or finds electronic products inconvenient), and stores them in the system's storage media (cloud storage, hard disk, optical disc) to prevent loss. The question-answering and information-collection process for user A can be short, say half a day or a day, or ongoing, like a diary or a weekly record of user A's life events, feelings, and preferences. User A can enter the information personally, or user A's relatives and friends can enter it. For example, if a new member joins user A's family, the relatives and friends may enter the new member's information, or they may add events that user A did not enter.
The personal information collected by the question-and-answer information collection submodule specifically includes the following:
The top-level directory of the collection interface lists these parts: your personal experiences; your relatives and friends; your preferences, personality, dreams, attitudes, and experience; your photos or videos; and recordings of your voice and appearance.
1) Information collection for "your personal experiences". System prompt: in your past life, many experiences have shaped you, such as your birth, personal growth, life, work, and marriage. Please recall carefully, answer the following questions in turn, and record these experiences.
Please press the "press to record" button and answer the next 9 questions in the language you usually speak (a dialect or your native language is fine); if the speech recognition is wrong after you record, you can correct it with the phone keyboard. Please ensure that: (1) each answer has an original audio recording; and (2) the submitted text is correct.
1/9 What exactly happened in your 1st important life experience?
2/9 While your 1st important life experience was happening, who was with you?
3/9 When did your 1st important life experience happen? (You may give an approximate date range: from month/year to month/year, or your age at the time)
4/9 Where did your 1st important life experience occur? (Please give a specific place name, e.g., a city, county, or village)
5/9 Why did this experience or event happen? (The cause of the event)
6/9 What influence did your 1st important life experience have on you? What emotional reaction did you have to it?
7/9 Please rate the influence of your 1st important life experience on your life or whole life course, and click a button below to select. (multiple buttons)
+4 (extremely positive)
+3 (very positive)
+2 (positive)
+1 (slightly positive)
0 (no effect)
-1 (slightly negative)
-2 (negative)
-3 (very negative)
-4 (extremely negative)
8/9 If this was a bad life experience, what did you do during this process to change it?
9/9 Regarding your 1st important life experience, what do you want to tell others? Do you have other records of this life experience? What contribution did it make to the world?
For each of the user's life events/experiences (the 1st, 2nd, 3rd, etc.), the questions above are repeated; the user can add a new experience at any time (loop).
Fill-in-the-blank event template: ___ ___ ___ ( ), _________ , ______________________ (: ), ___? What feelings did you have?
Please evaluate the impact of this event on your life:
+4 (extremely positive)
+3 (very positive)
+2 (positive)
+1 (slightly positive)
0 (no effect)
-1 (slightly negative)
-2 (negative)
-3 (very negative)
-4 (extremely negative)
2) Information collection for "your relatives and friends". System prompt: in a lifetime one has many relatives and friends. Please recall carefully the things you have experienced with them, answer the following questions in turn, and record the stories between you and your relatives and friends.
Please press the "press to record" button and answer the next 16 questions in the language you usually speak (a dialect or your native language is fine); if the speech recognition is wrong after you record, you can correct it with the phone keyboard. Please ensure that: (1) each answer has an original audio recording; and (2) the submitted text is correct.
1/16 What is the full name of your 1st relative or friend?
2/16 What relation is your 1st relative or friend to you? (e.g., you can answer: mom)
3/16 Do you know roughly when your 1st relative or friend was born?
4/16 When did you meet or get to know your 1st relative or friend? (Please give a specific year, month, and day; pull-down menu)
5/16 Where did you meet your 1st relative or friend? (Please give a specific location, e.g., country, province, city, county)
6/16 When did your 1st relative or friend leave you? [drop-down menu: year] If he or she has not left you, please skip this question.
7/16 How is the health of your 1st relative or friend? If not healthy, what specific disease? If healthy, skip the next question. If he or she has passed away, may we ask from what disease or cause?
8/16 Where is your 1st relative or friend from (native place)? Where does he or she live now?
9/16 How is your 1st relative or friend's study or work? (Please skip if he or she has no work or study experience)
10/16 What story does your 1st relative or friend have? Or what experiences have you shared with him or her?
11/16 What feelings do you have toward your 1st relative or friend? Or what is your view of him or her?
12/16 Please select your feeling toward your 1st relative or friend (button selection):
+4 (love or like especially)
+3 (love or like very much)
+2 (love or like)
+1 (like a little)
0 (no particular feeling)
-1 (dislike a little)
-2 (dislike)
-3 (strongly dislike/hate)
-4 (extremely dislike/hate)
13/16 How much has your 1st relative or friend influenced your life?
0 (no influence)
1 (some influence)
2 (considerable influence)
3 (very great influence)
4 (greatest influence)
14/16 If your 1st relative or friend were in front of you, what would you most want to say to him or her?
15/16 Is there any other information about your 1st relative or friend that you want to record?
16/16 Please take a photo/short video of your 1st relative or friend with your phone, or upload one from your phone album; then add your 2nd, 3rd, and further relatives or friends.
3) Information collection for "preferences and personality". System prompt: what interests and hobbies do you have? Please answer the following questions.
Please press the "press to record" button and answer the next 33 questions in the language you usually speak (a dialect or your native language is fine); if the speech recognition is wrong after you record, you can correct it with the phone keyboard. Please ensure that: (1) each answer has an original audio recording; and (2) the submitted text is correct.
1/33 Among hobbies, sports, leisure, and entertainment, what do you like?
2/33 What colors do you like?
3/33 Where do you like to live long-term? (Please be specific: province, city, town, village, etc.)
4/33 What are your favorite places to travel or vacation?
5/33 What are your favorite vehicles or ways of traveling?
6/33 What are your favorite outdoor activities?
7/33 What are your favorite indoor activities?
8/33 What saying, celebrity quote, inspirational phrase, or motto do you like?
9/33 What gifts do you like? (e.g., books, wine, jewelry, health products, or a trip)
10/33 What is your favorite memory? (The one you cherish most from childhood to now)
11/33 What are your favorite anniversaries, holidays, or ceremonies?
12/33 Which school is your favorite? (If none, please click "skip")
13/33 What subjects, courses, or majors did you like to study? (If none, please click "skip")
14/33 What is the name of a teacher, mentor, or leader you like? (Please give the full name)
15/33 What is the name of a classmate or colleague you like? (Please give the full name)
16/33 What is the name of a subordinate or staff member you like? (Please give the full name)
17/33 What flavors, diets, cuisines, or Chinese/Western dishes do you like?
18/33 What are your favorite beverages: tea, wine, coffee, milk, juice, etc.?
19/33 What snacks, candies, desserts, cold drinks, or refreshments do you like to eat?
20/33 What fruits or vegetables do you like to eat?
21/33 What fish or meat do you like to eat?
22/33 What is the name of a relative or friend you like? (Please give the full name)
23/33 What ways of communicating do you like? (e.g., QQ, WeChat, or cell phone)
24/33 What are your favorite music, songs, instruments, singers, etc.?
25/33 What novels, newspapers, magazines, stories, or other reading material do you like?
26/33 What movies, television, radio, videos, animation, games, etc. do you like?
27/33 What sports, sports programs, sports teams, athletes, etc. do you like to watch?
28/33 What animals, pets, plants, etc. do you like?
29/33 What brand of car or vehicle do you like?
30/33 What are your favorite cell phones, phone brands, or other communication tools?
31/33 What are your favorite physical stores or shopping malls?
32/33 What are your favorite clothing brands, fashion brands, shoe and hat brands, etc.?
33/33 What brands of watches, bracelets, earrings, necklaces, perfumes, lipsticks, skin care products, etc. do you like?
4) Information collection for "your dreams". System prompt: what dreams do you have, and what do you hope to achieve in your life? Let us learn about them; please answer the following questions in turn.
Please press the "press to record" button and answer the next 9 questions in the language you usually speak (a dialect or your native language is fine); if the speech recognition is wrong after you record, you can correct it with the phone keyboard. Please ensure that: (1) each answer has an original audio recording; and (2) the submitted text is correct.
1/9 What dreams, goals, or wishes do you have in life?
2/9 What do you particularly regret not having done? Why?
3/9 What have you done that you specifically regret? Why?
4/9 What hopes or blessings do you have for yourself?
5/9 What hopes, instructions, or blessings do you have for your parents or grandparents?
6/9 What hopes, instructions, or blessings do you have for your spouse or loved one?
7/9 What hopes, words, or blessings do you have for your friends or relatives?
8/9 What hopes, words, or blessings do you have for your children or grandchildren?
9/9 You may select and upload photos related to your dreams or wishes from your phone album.
5) Information collection for "your attitudes and experience summaries". System prompt: we would now like to learn about your attitudes and summaries of your experience. Please answer the following questions in turn.
Please press the "press to record" button and answer the next 26 questions in the language you usually speak (a dialect or your native language is fine); if the speech recognition is wrong after you record, you can correct it with the phone keyboard. Please ensure that: (1) each answer has an original audio recording; and (2) the submitted text is correct.
1/26 What is your attitude toward life?
2/26 How do you view money?
3/26 How do you view work?
4/26 How do you view learning?
5/26 How do you view love?
6/26 How do you view family?
7/26 How do you view people who need help?
8/26 How do you view your parents?
9/26 How do you view your children? If you have no children, click "skip".
10/26 How do you view your friends?
11/26 How do you view our society?
12/26 What do you think counts as happiness?
13/26 How do you view life?
14/26 If something good happens to you, what is your attitude or reaction?
15/26 If something bad happens to you, what is your attitude or reaction?
16/26 Do you have any other attitudes or beliefs you want to record?
17/26 Please summarize: what have you learned from life? (Summary)
18/26 Please summarize: what important experience do you have in your own study or work? (Summary)
19/26 Please summarize: what important experience do you have in personal health care and wellness? (Summary)
20/26 Please summarize: what important experiences have you had in dealing with relatives? What did you learn from them? (Summary)
21/26 Please summarize: what important experiences have you had in social interactions? What did you learn from them? (Summary)
22/26 Please summarize: what important experiences have you had in your emotional or married life? What did you learn from them? (Summary)
23/26 Please summarize: what interesting or important experiences have you had while traveling? What did you learn from them? (Summary)
24/26 What interesting experiences have you had with small animals?
25/26 Do you have other important experiences and summaries to record?
26/26 Please take a related photo or short video with your phone and upload it, or upload an existing one.
6) "your other personal information" collection, system prompt: we need to know your other personal information. Please answer the following questions in turn.
Please press the button "press and hold the recording" and please answer the next 30 questions with the usual speaking language (which may be dialect or native language), and after you press the recording, you can use the keypad of the handset to input corrections if the voice recognition is wrong. Please ensure that: (1) The answer of each question has the correct text content sent by the original recording record (2).
1/30 What is your full name?
2/30 Why did your family give you this name? What about your name is worth commemorating, or what should later generations know about it?
3/30 What other names, pet names, or nicknames do you have? (If none, please click "skip")
4/30 Is your sex male or female?
5/30 What year, month, and day is your birthday?
6/30 Is there anything special about your birthday?
7/30 What ethnic group do you belong to?
8/30 What is your level of education?
9/30 Which school did you graduate from, or will you graduate from?
10/30 What is your profession, occupation, position, job title, or post?
11/30 How many centimeters tall are you?
12/30 How many kilograms do you weigh? Is your body type fat, thin, or well-proportioned?
13/30 How is your health? What parts of your body are often uncomfortable? What specific diseases do you have?
14/30 Are there things or people you fear? If so, what specifically?
15/30 What topics are you most interested in?
16/30 Which city (place) have you lived in the longest?
17/30 In which city were you born? Or in which city (place) did you spend most of your childhood?
18/30 What skills do you have (e.g., professional/technical skills, talents, etc.)?
19/30 What valuable things do you have, or what do you think is worth leaving to family or descendants?
20/30 Do you know where your ancestors lived? Or where is your ancestral home?
21/30 What especially memorable things did your ancestors do?
22/30 What motto do you have, or what family precepts, ancestral teachings, family traditions, or family heirlooms (please keep information about specific property confidential)?
23/30 Do you know which of your family's traits have been passed down to you?
24/30 Do you have any secrets you would like to share with relatives and friends?
25/30 If you have other personal information you want to record, you can enter it.
26/30 You can select photos related to your personal information from your phone album.
27/30 Are you an introverted, neutral, or extroverted person?
28/30 Are you an impatient, neutral, or slow-tempered person?
29/30 At work, do you prefer dealing with people or with machines?
30/30 Please take a related photo or short video with your phone and upload it, or upload an existing one.
7) Information collection for "your photos or videos". System prompt: every photo or video often carries a sweet memory or an interesting story. Please upload photos or videos you find meaningful and tell the story behind each one.
Please press the "press to record" button and answer the next 2 questions in the language you usually speak (a dialect or your native language is fine); if the speech recognition is wrong after you record, you can correct it with the phone keyboard. Please ensure that: (1) each answer has an original audio recording; and (2) the submitted text is correct.
1/2 What happened when this photo or video was taken? Or what do you want to tell others about it?
2/2 When was this photo or video taken? (You may give the year and month, or your age at the time)
For each of the user's photos or videos (the 1st, 2nd, 3rd, etc.), questions 1) and 2) are repeated; the user can add new photos or videos at any time.
8) Collection of "other valuable information". System prompt: hello! You may freely enter any information you find valuable (text, photos, or videos); photos or videos can be uploaded by clicking the camera icon above the dialog box.
Please press the "press to record" button and record the information you want to upload in the language you usually speak (a dialect or your native language is fine); if the speech recognition is wrong after you record, you can correct it with the phone keyboard. Please ensure that: (1) all uploaded information has an original audio recording; and (2) the submitted text is correct.
For life events or experiences that greatly influenced user A's life course and experience, the system checks whether user A entered the cause of the event or experience in addition to the event itself; if not, the system indicates that the information is incomplete and asks for the cause of the life event or experience. The prompt appears only 1-2 times, and the user is allowed to skip the question for reasons of personal privacy (a sketch follows).
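A minimal sketch of this completeness check follows, assuming a hypothetical event record with an impact rating and a cause field, plus a per-event prompt counter; the threshold of ±3 is an arbitrary illustration of "large influence".

```python
from typing import Optional

def completeness_prompt(event: dict, prompts_shown: int) -> Optional[str]:
    """Return a prompt if an influential event lacks a cause (shown at most twice)."""
    influential = abs(event.get("impact_rating", 0)) >= 3  # assumed threshold
    if influential and not event.get("cause") and prompts_shown < 2:
        return ("The information entered for this life event is incomplete; "
                "please enter its cause (you may skip it for privacy reasons).")
    return None  # no prompt needed
```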
1.1.1.2 Original sound collection submodule and original image collection submodule
The question-and-answer collection submodule further comprises an original sound collection submodule and an original image collection submodule. The original sound collection submodule creatively addresses cultural taboos and users' differing actual circumstances: during collection, user A's original voice is recorded separately, mainly word by word, and during playback the collected recordings are spliced together according to the reply content. The original image collection submodule collects original images of user A, including images of user A speaking and not speaking under different expressions, and other images of daily life and work.
The original sound collection submodule collects user A's voice as follows: commonly used words and sentences are presented one by one on a mobile phone or other hardware terminal, the user speaks them in his or her most familiar language (dialect or Mandarin), and the submodule records the sound.
When recording user A, cultural taboos were creatively taken into account (for example, many people are unwilling to record the sentence "I will bless you" as a whole because of taboos around death), as were differing life circumstances (for example, some people are married and some are not). Therefore, when actively collecting user A's recordings, the system records words and short phrases separately through the original sound collection submodule (e.g., "I will", "bless", "marriage"), and the splicing module joins them during playback, which resolves both the cultural-taboo problem and the differences in users' actual circumstances; see the sketch after the word list below.
The commonly used words and phrases whose sound the original sound collection submodule collects from user A include: I am here; hello; may your life be happy; I congratulate you; bless and protect the younger generation; I miss you too; yes; years old; I was then; because; I feel; sorry, I have no relevant information about that; safe and sound; happy birthday; may everything go smoothly; don't worry; I; you; I am doing well here; including; then; will; if you encounter a similar setback, I believe you will be like me and overcome its influence, and it will pass; so I hope you or your relatives and friends can; unfortunately, I may not have had that photo at the time, or did not upload it; no, there is no such thing as what you said; it is my; my; we are at; at least; living in; as for his/her other circumstances, please ask him/her or other relatives; I felt that way at the time too; so the situation you are in now is one normal people also encounter; if you are happy, tell me about it; there is no particular reason, it is just my personal taste; there is no particular reason, it is my personal attitude or experience; there is no reason, it is just so; did not happen; is not; I; right; the question you asked is too hard, try another one; sorry, I have just been set up and am not yet good at this, please try another question; I prefer; I know; I like; I want; I can; I have met; I have; then; don't know; born; I have been there; I have not been there; I don't like; if not; none; I don't want; I cannot; I will not; I do not; born in the year; then I; difference; the age is; underage; young; middle-aged; elderly; I was there; years old; I know, it is my; still alive; yes; no; to me; yes, I will; yes, I know, what you said is right; I would; I use it; no, I don't use it; my hobbies are; year; I prefer; it is me; could you put it another way, or say it more specifically; month; day; lived to the age of; the celebrity idol I like is; I like best; ranking in the family; called; the birthday is; let me; well then; I remember; in that year I; the people I dislike most are; married; my highest education is; my economic situation is; I like them all; the person you mentioned has not been added yet, perhaps I don't know him/her well; at that time our family; it is like this now; that person has not passed away, or the relevant information is missing; I am still alive and well; yes; I and; will; an only child; my ranking among the siblings at home is; perhaps you can ask him/her yourself; I feel I am fine; oh, it was so long ago that I can't remember; etc.; in those days people had enough to eat; my information entry at the time was incomplete, and I do not have that person's information; I am; from; my occupation is; I wish; happy; free of worry; I feel; very good; bad; and so on.
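The splicing idea amounts to mapping each reply word or phrase to a separately recorded clip and concatenating the clips at playback. The sketch below is illustrative only; the clip-file names and the fallback behavior are assumptions.

```python
# word/phrase -> audio clip recorded by user A (file names are hypothetical)
RECORDINGS = {
    "I will": "user_a_0001.wav",
    "bless": "user_a_0002.wav",
    "you": "user_a_0003.wav",
    "happy birthday": "user_a_0004.wav",
}

def splice(reply_phrases: list[str]) -> list[str]:
    """Return the ordered clip list to concatenate for playback."""
    clips = []
    for phrase in reply_phrases:
        clip = RECORDINGS.get(phrase)
        if clip is None:
            # No separate recording: hand this phrase to the voice
            # synthesis module instead (case 2.3 of the reply method).
            raise LookupError(f"no recording for {phrase!r}")
        clips.append(clip)
    return clips

# Example: splice(["I will", "bless", "you"]) yields three clips played in
# sequence, so taboo or situation-specific sentences never need to be
# recorded as a whole.
```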
The original image collection submodule collects images as follows: 1) record short videos of user A, while not speaking, in different emotional states, including calm, happy, angry, sad, and surprised; 2) record short videos of user A speaking normally in different emotional states, including calm, happy, angry, sad, and surprised; optionally, display a passage of text on the screen for user A to read aloud so as to obtain short videos of normal speech in the different emotional states; 3) record other short videos user A wants to keep as mementos, such as singing, daily life and work, or talents.
1.1.2 Social media information collection submodule
The social media information collection submodule collects user A's existing personal data, with user A's consent, including the content of user A's WeChat Moments and other social media. For example, if user A has WeChat Moments and other social media and allows the system to collect the content, the content is obtained through software or customer service (e.g., user A authorizes customer service to download it). The same applies to existing material such as Douyin (TikTok) posts, microblogs, autobiographies, photos, and videos.
1.1.3 Submodule for collecting information outside social media
The submodule for collecting information outside social media queries user A for information in some manner (for example, through an avatar agent or customer service in the software, or user A fills it in personally in the software or on a web page), and the system records video and audio during the inquiry. This submodule comprises the following collection units (freely chosen by user A):
a. A first recall and interaction unit with time, important life events and experiences, and life content as clues: inquiring, by age group, about the important life events or life contents that occurred at each stage of life; inquiring about the cause, process or content, and result of each life event or life content, including its influence on or the feelings of user A, the time, the place, and who was with user A at the time; and having user A upload a corresponding photo (if any) or short video (if any) for each life event or life content. The user can also add new life events or life content at any time.
b. A second recall and interaction unit with photos and videos as clues: letting user A select and upload photos or short videos he/she finds important or meaningful; inquiring about the related life events or life contents of each such photo or short video; and inquiring about the cause, process or content, and result of each life event or life content, including its influence on or the feelings of user A, the time, the place, and who was together at the time. User A may also be shown photos or short videos (if any) from WeChat Moments and other social media, to supplement these.
c. A third recall and interaction unit with important historical events as clues: automatically inquiring about important historical events (such as major historical changes or natural disasters) related to user A's main living area and lifetime, and asking about the specific influence of those events on user A. All three units collect the same kind of record; a minimal data sketch of such a record follows this list.
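A minimal sketch of such a record in Python; every field name here is an illustrative assumption, since the text specifies only what is asked (cause, process or content, result and feeling, time, place, companions, attached media), not a schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LifeEvent:
    """One life event or life content entry collected from user A."""
    title: str
    cause: Optional[str] = None          # why it happened
    process: Optional[str] = None        # process or content
    result: Optional[str] = None         # result, influence or feeling
    when: Optional[str] = None           # time, free text or a date
    where: Optional[str] = None          # place
    companions: List[str] = field(default_factory=list)
    photos: List[str] = field(default_factory=list)    # file paths or IDs
    videos: List[str] = field(default_factory=list)
    clue: str = "timeline"               # "timeline" | "media" | "history"

# Example: an entry produced by the third unit (historical-event clue)
evt = LifeEvent(title="1998 flood in my hometown",
                result="our family moved to the county seat",
                when="1998", clue="history")
```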
1.2 personal information analysis Module
The personal information analysis module analyzes and organizes the collected personal information to form a relatively complete personal database and stores it;
the personal information analysis module is used for:
a. classifying the collected photos and short videos (e.g., travel, pets, social occasions and the like) by artificial intelligence methods such as artificial neural networks and support vector machines;
b. automatically generating text descriptions for photos or short videos by video and image analysis techniques, including artificial neural networks. If manual verification confirms that these text descriptions are correct, they serve as supplements to user A's own photo or short-video descriptions (or fill in descriptions that are missing);
c. further analyzing text information on user A's social media, such as personal experiences, personal preferences and attitudes, to obtain user A's personality characteristics, personal experiences and related life lessons;
d. combining the collected personal information and the processed information, including text, a knowledge graph, photos with their corresponding captions, audio (including personal information, commonly used words and sentences, and blessing sentences spoken by user A) and video, numbering each item correspondingly, and storing the result as a relatively complete personal database;
e. training the voice synthesis module and the video synthesis module with the collected audio and video recordings;
f. supplementing user A's personal information. For example, if user A skipped some personal-information questions, the answers to those questions are supplemented with information from user A's WeChat Moments and other social media. For example, if user A skipped the favorite-food question, the system extracts photos of food posted in user A's Moments or other social media and fills these photos, or their corresponding text, into the answer to the skipped question. When a user later asks a similar question, the photos or their corresponding text are played. A sketch of this supplementing step follows this list.
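Step f can be pictured as a small matching pass over social-media posts. A minimal sketch, assuming a post is a dict with optional 'photo' and 'text' keys, and that `classify_topic` stands in for the neural-network/SVM classifiers of steps a-c:

```python
def fill_skipped_answers(skipped_questions, social_posts, classify_topic):
    """For each skipped personal-information question, use photos/text
    from social-media posts on the same topic as the stored answer."""
    answers = {}
    for q in skipped_questions:                 # e.g. topic "favorite food"
        matches = [p for p in social_posts if classify_topic(p) == q["topic"]]
        if matches:
            answers[q["id"]] = {
                "photos": [m["photo"] for m in matches if m.get("photo")],
                "text":   [m["text"] for m in matches if m.get("text")],
                "source": "social media supplement",   # mark provenance
            }
    return answers
```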
1.3 Autobiography generation module
The autobiography generation module is used for automatically generating user A's autobiography from the personal database stored by the personal information analysis module.
1. For the personal information in the personal database, particularly personal experiences, the system automatically analyzes the continuity and causal logic of life events; if something is missing, the system supplements the text and states the reason for the gap, or other information, in the first person, for example: "As for why this event or experience happened, I did not enter the reason when using the digital immortality system, perhaps out of personal privacy", and the like.
2. Based on big-data analysis, for parts that user A did not enter or described insufficiently, the system fills in related content from big data and marks it. For example, if user A did not state the specific reason for a certain life experience or event, but other people have had similar experiences and described the reason, the similar experience is used to fill the gap, and the autobiography notes in the text: "This passage comes from big-data analysis and was added because the subject skipped this content when entering information." For another example, if user A only mentions the place name of a certain residence but describes the local conditions and customs insufficiently, descriptive information about that place obtained through big data is added to the autobiography and marked in the text.
3. User A's autobiography comprises user A's personal experiences, photos and text, narrated in the first person; user A's relatives and friends, personal preferences, personality, dreams, attitudes, experiences and the like are added at suitable places; and maps of the changes in the individual's and family's places of residence and travel, together with family-tree diagrams, are generated automatically. The autobiography is accompanied by a collection of user A's works, or of photos and short videos, and a reader can retrieve the related videos by scanning a two-dimensional code in the book.
4. The content of each chapter of the autobiography is generated automatically from the information. Chapters can be organized in various forms, and a user who purchases the product can select which form to use, specifically: a chronological form (user A's life and information organized by time, each chapter named after a time period), a main-life-events form (each chapter named by a summarizing sentence or phrase for a life experience or event), a life-lessons form (each chapter named by a summarizing sentence or phrase for an important life experience or lesson of user A), a place form, and a character form (each chapter named after a person or a person's title).
The autobiography can be a paper edition or an electronic edition (recorded on an optical disk). The autobiographical content can also be updated automatically as user A stores new content in the recording and analysis unit.
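As an illustration of the chronological chapter form and the big-data marking rule above, a minimal sketch; the decade-per-chapter grouping and the field names are assumptions:

```python
def build_chronological_chapters(events, supplements):
    """Group life events into decade chapters; append big-data
    supplements with the marking the text requires."""
    chapters = {}
    for e in sorted(events, key=lambda e: e["year"]):
        period = f"{e['year'] // 10 * 10}s"            # one chapter per decade
        chapters.setdefault(period, []).append(e["text"])
    for period, text in supplements.items():           # mined from big data
        chapters.setdefault(period, []).append(
            text + " [This passage comes from big-data analysis and was "
                   "added because the subject skipped this content.]")
    return chapters

print(build_chronological_chapters(
    [{"year": 1998, "text": "Our family moved to the county seat."}],
    {"1990s": "The town was known for its river trade."}))
```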
2 interaction unit
The interaction unit, except for the online chatting module, is used for interacting with user B through the reply and play unit, based on the personal database and other information, when user A is not online. User B refers to whoever is currently using the system and interacting with user A's digital person, including user A's relatives and friends.
The interaction unit comprises an information question-answering module, a comfort assistance module, a scene chat module, a success and happiness suggestion module, a health suggestion module, a worship and prayer module and an online chatting module.
2.1 information question-answering module
The information question-answering module is used for answering questions from user B about user A's personal information, as well as non-personal questions, according to the personal database and existing databases;
the interaction method of the information question-answering module comprises the following steps:
S1, read user B's name, look user B up in user A's relatives-and-friends list, and determine the interpersonal relationship between user A and user B;
S2, obtain the corresponding salutation and greeting from the relationship;
if user B is an elder or same-generation relative of user A, the salutation is the kinship title (e.g., third uncle, grandfather, etc.); if user B is a junior relative (descendant) of user A, the salutation is user B's given name (without surname); if user B is a friend of user A, the salutation is user B's full name; if user B is neither a friend nor a relative of user A, no salutation is used.
The greeting is a polite or casual form of "hello".
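Before step S3, the salutation rule of S2 can be summarized in a few lines; a minimal sketch, with the relationship labels assumed for illustration:

```python
def salutation(relation, kin_title=None, first_name=None, full_name=None):
    """Choose how digital user A addresses user B (step S2)."""
    if relation in ("elder relative", "peer relative"):
        return kin_title                 # e.g. "third uncle", "grandfather"
    if relation == "junior relative":
        return first_name                # given name without surname
    if relation == "friend":
        return full_name
    return ""                            # neither friend nor relative

greeting = ", ".join(filter(None, [salutation("friend", full_name="Li Hua"),
                                   "hello"]))
```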
S3, judge and classify the input question using an artificial neural network, a support vector machine or another classification algorithm, to decide whether it is a personal or a non-personal question; if it is a personal question, execute step S4, otherwise execute step S5;
S4, extract the keywords related to personal information in the personal question; the system then operates according to the following conditions:
Condition S4.1: if no keywords are extracted, user B is asked to speak again or to change the question.
Condition S4.2: if keywords are extracted and relevant information can be found in the recording and analysis unit based on them, corresponding answers are given based on the found information. The specific steps are as follows:
S4.2.1. If only 1 table cell is activated, the content of this cell is returned as the answer.
S4.2.2. If more than 1 cell is activated:
if the multiple cells all belong to the same form (e.g., the personal-experience form), the contents of all of them are returned as the answer.
If the multiple cells do not belong to the same form (e.g., a cell of the personal-experience form and a cell of the personal-preference form are both activated), the form names are returned for the user to choose from. If the number of activated forms >= 3, output: "Are you asking about (name of form X), or (name of form Y), or neither?" If the user answers "neither", the system outputs: "Are you asking about (name of form W), or (name of form Z), or neither?", and so on. If the number of activated forms is 2, output: "Are you asking about (name of form X), or (name of form Y), or neither?"
If the user selects one of the forms, the content of that form's activated cells is returned as the answer; if the user finally chooses neither, condition S4.3 is entered.
S4.2.3. For different types of questions, such as "why" questions, corresponding answer words are prepended to the found information before it is returned; for example, for a "why" question, prepend "because …".
Condition S4.3: if keywords are extracted but no related information can be found in the recording and analysis unit, the system generates synonyms, related words, hypernyms and hyponyms of the keywords, repeats each step of condition S4.2 to search the personal database again, and returns the result if related information is found; if none is found, the system prompts user B: user A did not enter such information when setting up the digital person, i.e., when entering his or her personal information.
S5, extract the keywords in the non-personal question; if no keywords are extracted, ask the user to speak again or change the question.
If keywords are extracted, the system searches for an answer in existing databases, such as an encyclopedic knowledge base, according to the keywords, and returns the result if an answer is found. If not, the system generates synonyms, related words, hypernyms and hyponyms of the keywords, searches the existing databases again, and returns the result if an answer is found; if still none is found, the system answers user B that the question is too difficult and it needs to keep learning, or asks the user to change the question or provide a possible answer for it to learn.
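Conditions S4.1-S4.3 amount to a keyword lookup with a synonym/hypernym/hyponym fallback. A minimal sketch, where the stop-word list is a toy stand-in for real keyword extraction, and `lookup` (keywords to matching cell texts) and `expand` (keyword to related terms) are assumed interfaces:

```python
STOPWORDS = {"what", "is", "the", "my", "your", "a", "of", "did", "you"}

def extract_keywords(question):
    """Toy stand-in for the real keyword extractor."""
    return [w.strip("?") for w in question.lower().split()
            if w.strip("?") not in STOPWORDS]

def answer_personal_question(question, lookup, expand):
    keywords = extract_keywords(question)
    if not keywords:                                   # condition S4.1
        return "Sorry, could you say that again or change the question?"
    cells = lookup(keywords)                           # condition S4.2
    if not cells:                                      # condition S4.3
        cells = lookup([t for k in keywords for t in expand(k)])
    if not cells:
        return ("I did not enter that information when I set up "
                "my digital person.")
    answer = "; ".join(cells)
    if question.lower().startswith("why"):             # S4.2.3 answer word
        return "Because " + answer
    return answer
```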
2.2 comfort assistance Module
The comfort assistance module is used for simulating user A to comfort user B in language when user B needs comfort; specifically: when keywords indicating a need for comfort appear in the conversation extracted from user B, or the camera detects an unhappy expression on user B, comforting words are given, based on Internet knowledge and user A's information, at the moment user B needs comfort;
the interactive method of the comfort assistance module comprises the following steps:
S1, read user B's name, look user B up in user A's relatives-and-friends list, and determine the interpersonal relationship between user A and user B;
S2, obtain the corresponding salutation and opening sentence from the relationship;
if user B is an elder or same-generation relative of user A, the salutation is the kinship title (e.g., third uncle, grandfather, etc.); if user B is a junior relative (descendant) of user A, the salutation is user B's given name (without surname); if user B is a friend of user A, the salutation is user B's full name; if user B is neither a friend nor a relative of user A, no salutation is used.
The opening sentence is a guiding utterance, for example: "I feel you are somewhat unhappy today; talk about it with dad (invoke the relationship word)."
S3, classify the questions or the keywords in the sentences input by user B using an artificial neural network, a support vector machine or another classification algorithm into: ordinary comfort and assistance (psychological comfort, life assistance and the like), and emergency or other assistance.
1) Ordinary comfort and assistance: further classify user B's question or sentence by keywords into emotional assistance (loneliness, depression, boredom, etc.) or non-emotional life assistance, and provide corresponding comfort or assistance answer logic based on the classification result:
Answer logic part 1: general comfort and assistance based on Internet knowledge, e.g.: if you feel lonely, you can find a friend to chat with.
Answer logic part 2: use the keywords in user B's question or sentence to look up user A's personal experiences and the like:
Case 1: a similar experience is found; a sentence is then generated: "I also sometimes felt lonely when I was at university" (invoking user A's personal experience) + "so what you are encountering now is something everyone normally meets" (system-generated).
Case 2: an opposite experience or emotion is found; a sentence is then generated: "I hope you can be happy; when I was unhappy, I would think back on my earlier happy experiences (system-generated), for example, 'I was especially happy on my 28th birthday because many friends congratulated me together' (invoking user A's personal experience content); surely you also have happy experiences you can think back on (system-generated)."
2) Emergency or other assistance: classify user B's question or sentence by keywords, and only when a specific keyword is activated (e.g., "burned") is the corresponding pre-prepared answer activated. If no specific keyword is activated, the unified answer is: "I don't know how to handle your situation; please contact your relatives as soon as possible, or seek other help, such as dialing 120, contacting community property management, or calling the police at 110."
User B may also select a third-party mode, relayed by a third-party electronic inquirer, such as: "On this question, we have consulted your father's information; he means: …"
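The S3 classification and the two-part answer logic can be sketched as below; the keyword tables are illustrative assumptions, and `find_similar_experience` stands in for the search over user A's recorded experiences (keyword to a sentence, or None):

```python
COMFORT_KEYWORDS = {"lonely", "depressed", "bored"}
EMERGENCY_ANSWERS = {"burned": "Cool the burn under running water and "
                               "seek medical help immediately."}

def comfort_reply(sentence, find_similar_experience):
    words = set(sentence.lower().split())
    for w in words & EMERGENCY_ANSWERS.keys():     # emergency keywords first
        return EMERGENCY_ANSWERS[w]
    hits = words & COMFORT_KEYWORDS                # ordinary comfort
    if hits:
        part1 = "If you feel lonely, you can find a friend to chat with."
        exp = find_similar_experience(next(iter(hits)))  # answer logic part 2
        part2 = (exp + " So what you are facing now is something everyone "
                 "meets.") if exp else ""
        return " ".join(filter(None, [part1, part2]))
    return ("I don't know how to handle your situation; please contact "
            "your relatives or seek help such as dialing 120 or 110.")
```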
2.3 scene chat module
The scene chatting module is used for simulating the chatting interaction between the user A and the user B according to different scenes;
the interaction method of the scene chat module comprises the following steps:
S1, classify user B's current activity using the camera, the microphone or other sensors, together with computer vision or hearing or Internet-of-Things information and a classification algorithm: eating, watching television, cooking, etc. The current time is used as an aid to correct the classification; for example, computer vision or hearing classifies the current activity as cooking, but the current time is 1 o'clock at night, so the classification is corrected or re-checked. A sketch of this correction appears after this module's description.
S2, according to the classification result, activate corresponding content in user A's personal information; for example, if user B is found to be eating, the system invokes user A's food-related preferences: "(I) like eating fish best (invocation result)".
Invoke the information of user A and user B and invoke life experiences related to both parties and the current activity scene, such as: "Xiaoming (invoke a suitable salutation), dad (invoke the relationship) remembers going fishing with you at East Lake in 2010 (invoke a shared life experience)".
Wait for user B's reply; if user B replies, start the information question-answering module or the comfort assistance module according to the keywords in the reply.
In addition to the simulated user A mode, which speaks in the first person, if the user is apprehensive about direct conversation with the deceased, the user may select a third-party mode, relayed by a third-party electronic inquirer, such as: "Hello, based on your father's information, we can say that he also liked eating fish, especially a lively meal together …"
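The time-based plausibility check mentioned in S1, as a minimal sketch; the activity-to-plausible-hours table is an assumption for illustration:

```python
from datetime import datetime

PLAUSIBLE_HOURS = {"cooking": range(5, 22),
                   "eating": range(6, 23),
                   "watching TV": range(0, 24)}

def corrected_scene(vision_label, now=None):
    """Correct the camera/microphone classification with the clock."""
    now = now or datetime.now()
    hours = PLAUSIBLE_HOURS.get(vision_label)
    if hours is not None and now.hour not in hours:
        return "unknown"    # e.g. "cooking" at 1 a.m.: re-check or ask again
    return vision_label
```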
2.4 success and happiness suggestion Module
The success and happiness suggestion module applies the empathy and similarity principles of psychology to give positive suggestions to user B; as far as possible, it extracts the related experiences of a user A who is a relative or friend of user B, because relatives and friends share more background similarity with oneself; when no related information exists, content from the system's self-compiled positive-life example library is played.
The interaction of the success and happiness suggestion module comprises the following two stages:
Stage one: after user A has created the digital person with the recording and analysis unit, the system gives related prompts. This stage comprises the following steps:
S1, after user A creates the digital person with the recording and analysis unit, check the life events and experiences user A has added; if user A entered fewer than 3 life events or experiences in the recording and analysis unit, prompt: you have entered rather few personal experiences so far; please return to the main menu and enter more life events under digital me's personal experiences, otherwise the use of this module will be affected.
S2, at the same time check the number of relatives and friends the current user A has added: if the number is fewer than 2, prompt: you have added rather few relatives and friends, which may affect the use of this module; if possible, please return to the main menu and add more relatives and friends to digital me's personal information.
S3, after the user's input is finished, the system automatically organizes the life events: positive and negative life events in user A's life experience are identified through positive keywords (such as "success") and negative keywords (such as "failure"), together with the cause of each positive event and the coping behavior of each negative event (see the recording and analysis unit, questions 8/9 of "your personal experience": if it was a bad life experience, what did you do in the process to change it), and the positive and negative life event records are stored in the recording and analysis unit to be called in stage two. If no positive or negative life events are found, the user's positive and negative life event records are recorded as empty.
Stage two: user B selects and activates the success and happiness suggestion module in the interaction unit. This stage comprises the following interaction steps:
S1, if user B is using the module for the first time, play the module's introduction. Then wait for user B's input, including speech-recognition results; if user B gives no input within a few seconds (e.g., 5 seconds), go to S2; if user B gives any input within that time, go to S3;
S2, query the positive and negative life event records of user A, or of the user's relatives and friends, generated in stage one, and play corresponding content according to the following conditions and sub-conditions:
and if the conditions S2.1 are all null, activating and playing the contents of the actively-compiled actively-living example library self-compiled by the system, prompting the user B that the user A does not input related actively or negatively living events in the established digital people and relatives and friends thereof, and playing the contents in the actively-compiled actively-living example library self-compiled by the system.
Conditional S2.2, if there are positive and negative life event records for user a or his relatives, then these positive and negative life event records are played. The sub-conditions and contents of the playback are as follows:
Sub-condition S2.2.1 of S2.2: if user B selects the simulated user A mode and a positive event is extracted (a photo or short video of the relative/friend user A is displayed): "XXX" (salutation or name, extracted from the recording and analysis unit: if the relative/friend is a peer (including colleague, classmate or friend) or elder of the user, the user's name is used; if the relative/friend is the user's junior, the kinship term is used (such as "dad")); "I was then, because" (or a similar expression); play the cause of a randomly selected positive event (the original recording obtained from the recording and analysis unit, if any); play the content of the positive event (obtained from the recording and analysis unit, if any); "so I hope you or your relatives and friends can also"; play the keyword of the positive event (such as "success") or a related phrase.
Sub-condition S2.2.2 of S2.2: if user B selects the third-party mode, i.e., the electronic inquirer answers, and a positive event is extracted (a photo or short video of the relative/friend user A is displayed): "XXX" (salutation or name, invoking the relationship/name); "was then, because"; play, in the electronic inquirer's voice, the cause of a randomly selected positive event of that person and the content of the positive event; "so we hope you or your relatives and friends can also"; play the keyword of the positive event (such as "success") or a related phrase.
Sub-condition S2.2.3 of S2.2: if user B selects the simulated user A mode, a negative event is extracted, and the system judges that the coping behavior for the negative event was positive, a photo or short video of the relative/friend user A is displayed: "XXX" (salutation or name, extracted from the recording and analysis unit: if the relative/friend is a peer (including colleague, classmate or friend) or elder of user B, user B's name is used; if the relative/friend is user B's junior, the kinship term is used (such as "dad")); "I was then"; invoke the content of a randomly selected negative life event of the person (the original recording obtained from the recording and analysis unit, if any); invoke the person's coping behavior for that negative life event (the original recording obtained from the recording and analysis unit, if any); "if you meet a similar setback, I believe you will, like me, overcome the influence of this thing on you; there is no hurdle that cannot be crossed" (or a similar sentence); "I also know that"; play content from the system's self-compiled positive-life example library.
Sub-condition S2.2.4 of S2.2: if user B selects the third-party mode, a negative event is extracted, and the system judges that the coping behavior for the negative event was positive (a photo or short video of the relative/friend user A is displayed): "XXX" (salutation or name, invoking the relationship/name); "was then"; play, in the electronic inquirer's voice, a randomly selected negative event and the person's positive coping with it; "if you meet a similar setback, we believe you will, like him/her, overcome the influence of this thing on you; there is no hurdle that cannot be crossed" (or a similar sentence); "we also know that"; play content from the system's self-compiled positive-life example library.
Sub-condition S2.2.5 of S2.2: regardless of whether user B selects the simulated user A mode or third-party playback, if a negative event is extracted and the system judges that the coping behavior for it was negative, activate and play content from the system's self-compiled positive-life example library.
S3, according to user B's input, query the positive or negative life event records with the highest relevance to the input content, and play corresponding content according to the 5 sub-conditions of S2, with the difference that the most relevant positive events and their causes, or the most relevant negative events and their positive coping behaviors, are played instead of randomly selected ones.
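Stage two's selection rule (random in S2, relevance-ranked in S3, with negative events playable only when their coping was positive) can be sketched as follows; the record fields and the `relevance` scorer are assumptions:

```python
import random

def playable(record):
    """Sub-conditions S2.2.1-S2.2.5: negative events are played only
    when their recorded coping behavior was judged positive."""
    return record["kind"] == "positive" or record.get("coping") == "positive"

def pick_event(records, user_input=None, relevance=None):
    candidates = [r for r in records if playable(r)]
    if not candidates:
        return None         # condition S2.1: fall back to the example library
    if user_input and relevance:                       # step S3
        return max(candidates, key=lambda r: relevance(user_input, r))
    return random.choice(candidates)                   # step S2
```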
2.5 health advice Module
The health suggestion module is used for giving health advice to user B according to the collected causes of death, diseases, habits and hobbies of user B and of user B's relatives and friends, together with their relationships to user B;
before running, check whether user B's input in the recording and analysis unit is complete (including his/her own disease status); if not, prompt: please return to the main menu and enter your health condition in digital me's personal information, otherwise the quality and quantity of the health advice the system provides for you will be affected. Also check the number of relatives and friends user B has added: if the number is fewer than 2, prompt: you have added rather few relatives and friends, which may affect the quality and quantity of the health advice the system provides for you; if possible, please return to the main menu and add more relatives and friends.
The interaction method of the health suggestion module comprises the following steps:
S1, query the causes of death, diseases, habits and hobbies of user B and of his relatives and friends collected by the recording and analysis unit, and obtain the related results, organized for example as in Table 1:
Table 1 (reproduced in the original only as an image; it tabulates, for user B and each relative or friend, the cause of death, diseases, habits and hobbies)
For a certain disease Di, its disease severity (SDi, Severity of Disease i), related habit severity (SHi, Severity of Habit i) and environmental factor severity (SEi, Severity of Environmental element i) are calculated as follows:
Step 1. Starting with the disease in row 1, column 1 (e.g., D1), search the entire table from row 2 to the last row:
1) SDi = 1 (initial value when oneself has the disease);
2) if the same disease as Di, or a similar one, occurs and the disease has genetic factors, extract the relevant information from the recording and analysis unit:
2.1) if oneself and the patient are immediate relatives but the patient's cause of death is not Di: SDi = SDi + 1; check whether one's habits are similar to the patient's and the habit (Hi) is, according to the disease database, a causative factor of Di; if so, SHi = SHi + 1; check whether one's living and working environment is similar to the patient's and the environmental factor (Ei) is, according to the disease database, a causative factor of Di; if so, SEi = SEi + 1; and record the number of related relatives (NDi) and their name set {Name1, Name2, …};
2.2) if oneself and the patient are immediate relatives and the patient's cause of death is Di: SDi = SDi + 2; check the habit and environment conditions as above; if met, SHi = SHi + 2 and SEi = SEi + 2 respectively; and record the number of related relatives (NDi) and their name set {Name1, Name2, …}.
Note: the specific increments (+1, +2, +0.2, etc.) may be adjusted for a particular disease; in all other cases the value of SDi is unchanged.
3) if the same disease as Di, or a similar one, occurs and the disease has no genetic factors but is infectious, extract the relevant information from the recording and analysis unit:
3.1) if oneself and the patient have a common living/contact history but the patient's cause of death is not Di: SDi = SDi + 1; check the habit and environment conditions as above; if met, SHi = SHi + 1 and SEi = SEi + 1 respectively; and update NDi and the name set {Name1, Name2, …};
3.2) if oneself and the patient have a common living/contact history and the patient's cause of death is Di: SDi = SDi + 2; check the habit and environment conditions as above; if met, SHi = SHi + 2 and SEi = SEi + 2 respectively; and update NDi and the name set {Name1, Name2, …};
3.3) if oneself and the patient have no common living/contact history and the patient's cause of death is not Di: SDi = SDi + 0.1; check the habit and environment conditions as above; if met, SHi = SHi + 0.1 and SEi = SEi + 0.1 respectively; and update NDi and the name set {Name1, Name2, …};
3.4) if oneself and the patient have no common living/contact history but the patient's cause of death is Di: SDi = SDi + 0.15; check the habit and environment conditions as above; if met, SHi = SHi + 0.15 and SEi = SEi + 0.15 respectively; and update NDi and the name set {Name1, Name2, …};
4) if the same disease as Di, or a similar one, occurs but the disease has neither genetic factors nor is infectious, extract the relevant information from the recording and analysis unit, then:
4.1) if the patient's cause of death is not Di: SDi = SDi + 0.1; check the habit and environment conditions as above; if met, SHi = SHi + 0.1 and SEi = SEi + 0.1 respectively; and update NDi and the name set {Name1, Name2, …};
4.2) if the patient's cause of death is Di: SDi = SDi + 0.3; check the habit and environment conditions as above; if met, SHi = SHi + 0.3 and SEi = SEi + 0.3 respectively; and update NDi and the name set {Name1, Name2, …};
Step 2. Repeat Step 1 for one's own second disease, and so on until all self-reported (first-row) diseases have been processed.
Step 3. Starting from the second row, repeat Step 1 for each disease (with SDi initialized to 0) until all diseases in the table have been processed.
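A minimal sketch of the scoring scan for one disease Di, implementing the +1/+2 genetic branch of 2.1/2.2 (the infectious and non-genetic branches add 0.1-0.3 in the same pattern); the `me`/`relatives` record fields and the `disease_db` interface are assumptions:

```python
def score_disease(di, me, relatives, disease_db):
    sdi = 1.0 if di in me["diseases"] else 0.0   # own-disease initial value
    shi = sei = 0.0
    names = []                                   # NDi name set
    for r in relatives:
        if di not in r["diseases"]:
            continue
        if disease_db.is_genetic(di) and r["immediate"]:
            inc = 2.0 if r.get("cause_of_death") == di else 1.0  # 2.2 vs 2.1
            sdi += inc
            if (set(me["habits"]) & set(r["habits"])
                    & disease_db.causative_habits(di)):
                shi += inc                       # shared causative habit
            if (set(me["environments"]) & set(r["environments"])
                    & disease_db.causative_environments(di)):
                sei += inc                       # shared causative environment
            names.append(r["name"])
    return {"SDi": sdi, "SHi": shi, "SEi": sei,
            "NDi": len(names), "names": names}
```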
Running the above three steps yields results such as the example in Table 2 below:
Table 2 (reproduced in the original only as images; it lists, for each disease Di, the computed SDi, SHi and SEi scores together with the related relatives)
S2, according to the result of S1, sort all diseases by severity, listing the most severe diseases first.
S3, based on the results of S1 and S2, combine the disease conditions of relatives and friends into health advice, given from the perspective of the user's relatives and friends; judge whether each condition belongs only to user A, i.e., user B himself/herself, only to user B's relatives and friends, or to both user B and some relative(s) or friend(s):
Case 1: if it is the cause of death/disease/habit/hobby of user A, i.e., user B himself/herself, only, perform the following steps:
Case 1.1: if it is a disease, then according to the disease database and the results of S1 and S2, in descending order of disease severity score (SDi), provide relevant recommendations via the health database: a) if habits, hobbies and environmental factors related to these diseases were obtained in the preceding steps, prompt the related health advice in order of the SHi and SEi scores; b) if there are no related habits, hobbies or environmental factors, provide relevant recommendations from the health database.
Case 1.2: if it is a habit or hobby, judge via the habit-and-hobby health database whether it is healthy; if it is healthy, give words of encouragement. If it is unhealthy, make relevant recommendations via the habit-and-hobby health database and use them as input for daily reminders.
Case 1.3: if it is a non-disease cause of death: classify the causes of death (such as traffic accident, war, etc.) and give user B a reminder or suggestion for each type. Although in most cases user A is alive while using the system and has no cause of death, there are cases in which relatives of a deceased user A use the system on his/her behalf, so this case is considered.
Case 2: if it is the cause of death/disease/habit/hobby of relatives and friends only, judge, based on the information in the recording and analysis unit, whether user B and the relative/friend are immediate relatives, non-immediate relatives, or friends/classmates/colleagues, etc.
Case 2.1: if user B and the relative/friend are immediate relatives and it is a disease, perform the following steps:
1) If the etiology has a genetic component (e.g., type II diabetes, certain heart diseases, etc.) and B is A's descendant, remind user B via the disease database:
simulated user A mode (select 1-2 relatives in order of disease severity score SDi): "I (present photo or short video of the relative/friend) suffered from (invoke disease name), (and so did your XXX; invoke the name set from Step 1 and the relationship from the recording and analysis unit). This disease has a genetic element, so I suggest you:" a) if habits, hobbies and environmental factors related to these diseases were obtained in the preceding steps, prompt the related health advice in order of the SHi and SEi scores; b) if there are none, provide relevant recommendations via the health database. The modal wording varies with the severity score (SDi): for example, "you (should / absolutely must / similar wording chosen according to the SDi score) pay attention to regular physical examination and active prevention, …".
Or third-party mode; the electronic inquirer replies: "We consulted your relatives' information; your X (invoke S1 result) XXX (relative's name) suffered from (invoke disease name)", with the rest as above.
2) If the etiology involves habits or hobbies (such as liver cancer), then via the disease database (liver cancer ← drinking) and the result of Step 1, if the corresponding habit or hobby is found:
reply in simulated user A mode (select 1-2 relatives in order of disease severity score SDi): "I suffered from (invoke disease name), probably because I (invoke user A's living habit corresponding to Di in S1), so I advise you (preferably not / absolutely not / …, wording based on the severity score of the disease) to do as I did (invoke user A's living habit corresponding to Di in S1); avoiding this can reduce the possibility of the disease to a certain extent."
Or third-party mode; the electronic inquirer replies: "We consulted your relatives' information; your XXX (relative's salutation) …", i.e., the first person is replaced by the third person, with other content as above.
If not found, the simulated user A mode reminds user B: "I suffered from (invoke disease name); research shows that the living habits of … (living habits invoked from the disease database) can increase the chance of this disease to a certain extent, so I advise you not to (invoke the living habits from the disease database); this can reduce the possibility of the disease to some extent." Or third-party mode; the electronic inquirer replies with the content as above.
3) If the etiology has an environmental component (such as lung cancer), then via the disease database (lung cancer ← air pollution) and the result of S1, search the relevant information in user A's personal information, such as workplace (e.g., a heavily polluting chemical plant) and living region (e.g., air pollution in city XX).
If found, remind user B:
simulated user A mode (select 1-2 relatives in order of disease severity score SDi): "I suffered from (invoke disease name), possibly because I worked or lived in (the workplace or living place of the user A obtained from the environmental-factor result of Step 1), so I (generally / strongly, wording according to the SEi score) advise you to no longer stay in or live in such places; this can reduce the possibility of the disease to some extent." Or third-party mode; the electronic inquirer replies with the person converted and the content as above.
If not found, the simulated user A mode reminds user B: "Research shows that the environmental factors of … (invoked from the disease database) increase the chance of this disease to a certain extent, so I advise you to avoid, as far as possible, places with (the environmental factors invoked from the disease database); this can reduce the possibility of the disease to a certain extent." Or third-party mode; the electronic inquirer replies with the person converted and the content as above.
4) If the etiology involves infection or the disease is infectious, remind user B via the disease database (infectious diseases are generally not described in the first person; the third-party electronic inquirer is started automatically to answer): "Please note that your relative (XXX, invoke the S1 result) had (invoke disease name), which is transmitted by (transmission route); if you are in contact with this person or his/her belongings, you need to take care at all times to prevent transmission."
Case 2.2, if user B and the relatives are immediate relatives and if it is a habit/hobby, the following steps are performed:
judge via the habit-and-hobby health database whether the habit or hobby is healthy or unhealthy:
1) if healthy, reply:
simulated user A mode: "I (present photo or short video of the relative/friend) also think this (specific habit/hobby) is great; you can also try it or keep it up."
Or third-party mode; the electronic inquirer replies: "We consulted your relatives' information; your XXX (relative's salutation) …", i.e., the first person is replaced by the third person, with other content as above.
2) if unhealthy, make relevant recommendations via the habit-and-hobby health database, use them as input for daily reminders, and reply:
simulated user A mode: "I (present photo or short video of the relative/friend) think this (specific habit/hobby) is not good; you had better not do it."
Or third-party mode; the electronic inquirer replies: "We consulted your relatives' information; your XXX (relative's salutation) …", i.e., the first person is replaced by the third person, with other content as above.
Case 2.3: if user B and the relative/friend are immediate relatives and it is a cause of death other than disease, classify the causes of death (such as traffic accidents, etc.) and remind user B for each type of cause of death, for example:
simulated user A mode: "I died because of (invoke user A's cause of death); you must pay attention to …"
Or third-party mode; the electronic inquirer replies: "We consulted your relatives' information; your XXX (relative's salutation) …", i.e., the first person is replaced by the third person, with other content as above.
Case 2.4: if user B and the relative/friend are not immediate relatives and it is a disease, run Case 2.1 but skip 1).
Case 2.5: if user B and the relative/friend are not immediate relatives and it is a habit/hobby, run Case 2.2.
Case 2.6: if user B and the relative/friend are not immediate relatives and it is a non-disease cause of death, run Case 2.3.
S3 Case 3: if it is a cause of death/disease/habit/hobby common to user B and his relatives and friends, run as in S3 Case 2; the SD, SH and SE scores computed in S1 already take this into account, and the relevant recommendations are modified on the basis of Case 2 as follows:
Case 3.1: the relevant recommendations are updated: in simulated user A mode, "I" changes to "we"; the third-party electronic inquirer replies: "We consulted your relatives' information; you and your XXX (relative's name) …".
Case 3.2: if it is a common habit or hobby, judge via the habit-and-hobby health database whether it is healthy or unhealthy:
1) if healthy, reply:
simulated user A mode: "We (present photos or short videos of the relatives/friends) all think this (specific habit/hobby) is great; you can also try it or keep it up."
Or third-party mode; the electronic inquirer replies: "We consulted your relatives' information; you and your XXX (relative's name) …", i.e., the first person is replaced by the third person, with other content as above.
2) if unhealthy, make relevant recommendations via the habit-and-hobby health database, use them as daily reminders, and reply:
simulated user A mode: "We (present photos or short videos of the relatives/friends) all have this (specific habit/hobby), which is not very good; you had better not."
Or third-party mode; the electronic inquirer replies: "We consulted your relatives' information; you and your XXX (relative's salutation) …", i.e., the first person is replaced by the third person, with other content as above.
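The S3 case split itself reduces to a small dispatch on who bears the condition and how they are related; a minimal sketch with assumed labels:

```python
def advice_case(self_has, relatives_have, immediate):
    """Choose the advice branch for one cause of death/disease/habit."""
    if self_has and not relatives_have:
        return "Case 1: user B only"
    if relatives_have and not self_has:
        if immediate:
            return "Case 2.1-2.3: immediate relatives, genetic warning included"
        return "Case 2.4-2.6: non-immediate, genetic branch skipped"
    if self_has and relatives_have:
        return "Case 3: shared; first person becomes 'we'"
    return "no advice for this condition"
```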
2.6 Worship and prayer module
The worship and prayer module is used for providing an electronic, intelligent worship function for user B, giving user B psychological comfort and consolation in the identity of the departed relatives and friends; within this function the system also provides psychological counseling prompts and information, correctly guiding the user or providing relevant help.
The interaction method of the worship and prayer module comprises the following steps:
S1, one or more avatars of user A appear on the interface; if a user A has not passed away (the date of death in user A's information is compared with the current date), only the birthday reminder for that user A is run.
S2: if the user selects the user A (or several users A) to be worshipped, the following submodules are run;
submodule 1: worship reminding function
Remind the user to worship at times such as the Qingming Festival and before user A's birthday and date of death.
Submodule 2: worship and blessing interaction function:
Step 1: after user B chooses to begin worship, the system presents user A's picture and a sacrificial scene (including candles and the like).
Step 2: the system prompts the user, through text on the interface or a third-party voice, that worship actions (e.g., bowing) may be performed. If the system's camera detects a worship gesture by user B (e.g., a bow; when the user's camera or computer-vision recognition is unreliable, the user may also press a button to worship), or the system's microphone detects user B speaking words of blessing toward the departed (user A) (e.g., "Grandpa, bless us from over there"), the system presents an auspicious picture (e.g., text such as "good luck in everything" and "rising step by step") and plays animations such as figures of happiness, prosperity and longevity, longevity peaches and cranes.
Step 3: if the system's microphone detects user B speaking to the departed (user A) (such as "Grandpa, are you still there?") or user B presses a prayer button, the system classifies the user's language input:
1) If the personal information of the user A is inquired, starting an information question-answering module;
2) If the user B needs comfort or assistance, starting a comfort assistance module;
3) If user B's language is a blessing, or the blessing mode is selected, the words and sentences of blessing from user A to user B are played automatically.
After steps 1)-3) above are finished, or when none of 1)-3) is activated, a general sentence of an ancestor blessing the younger generation is returned, for example, "I will bless and protect you well from over there; I hope you live happily" (system-generated), and the like.
Step 4: a send button is displayed on the interface; the user can send voice or text messages to user A in the system, the interface displays the sending process with an animation, the system replies with a variant of "user A has received it and is very thankful", and then returns a general ancestor-blessing sentence, such as "I will bless and protect you well from over there; I hope you live happily" (system-generated), and the like.
In this module, the system also prompts the user with psychological counseling and information, correctly guiding the user or providing related help.
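Step 3's routing can be sketched as a single dispatch; `classify` and the three module hooks are passed in as assumed callables:

```python
def worship_turn(utterance, classify, qa, comfort, blessings):
    """Route user B's utterance during worship (step 3)."""
    intent = classify(utterance)   # "question" | "comfort" | "blessing" | None
    if intent == "question":
        return qa(utterance)       # hand off to the information Q&A module
    if intent == "comfort":
        return comfort(utterance)  # hand off to the comfort assistance module
    if intent == "blessing":
        return blessings()         # play user A's blessing sentences
    return ("I will bless and protect you well from over there; "
            "I hope you live happily.")          # system-generated fallback
```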
2.7 Online chatting Module
The module can enable the user B to communicate with the online user A in text, voice or video.
3 reply and play unit
The reply and play unit is used for generating and playing related voice and video.
The reply and play unit comprises a reply generation module, an active play module, a short video analysis and call module, a voice synthesis module and a video synthesis module. The reply generation module is used for answering in any one or more of the following modes: original voice, original short video, mixed original voice and short video, and synthesized voice. The active play module is used for automatically playing user A's personal information. The short video analysis and call module is used for analyzing the emotion in the question and calling user A's related short videos, according to user B's question and the text content of user A's answer. The voice synthesis module and the video synthesis module are used for synthesizing voice and video close to user A's, from user A's original voice and video, by artificial intelligence techniques.
In order to reduce the fear some users B may have of direct conversation with people who have passed away, the reply modes of the reply generation module include a third-party mode and a simulated user A mode. In the third-party mode, an electronic inquirer relays: the user interface displays a model of the electronic inquirer, the user asks the electronic inquirer about the relative's information, and the electronic inquirer replies with the relevant content from a third-party perspective. In the simulated user A mode, the user interface displays a picture or short video of user A, and the digital person answers questions in the first person. User B can switch freely between these two reply modes.
3.1 reply Generation Module
The interactive method of the reply generation module comprises the following steps:
1. Obtain the user's preference through voice or interface options: whether to answer in the third-party mode, i.e., relayed by the electronic inquirer, or in the simulated user A mode, talking directly with digital user A in the first person;
2. Wait for the user's wake word, or obtain the interaction object selected by user B through operations on the system interface (such as speaking user A's name, or clicking user A's avatar or name in the APP), and then obtain the relationship between user B and the selected user A and other information. If there are several users B, distinguish them by account name, face recognition or voiceprint recognition;
3. Play an example question and user A's speaking video, and wait several seconds (e.g., 10 seconds) for the user to speak;
4. Recognize user B's speech content through voice recognition and send it to the interaction unit;
5. Determine the playing content according to the content returned by the interaction unit and user B's selection of the third-party mode or the simulated user A mode:
5.1 If the user selects the simulated user A mode:
Case 1: if the content returned by the interaction unit is non-plain-text content to be played, such as a photo, singing, or a short video of user A corresponding to the question:
Case 1.1: user A's database stores the corresponding photo, singing or short video; play it directly;
Case 1.2: user A's database has no corresponding photo, singing or short video; give no feedback.
Case 2: if the content returned by the interaction unit is text content to be played (Q-T), further judge the type of the text content:
case 2.1: if the Q-T is the text content or the original video and audio of the original voice of the user A, the original voice or the picture is played, wherein the personal information of the user A recorded by the recording and analyzing unit in the process of inquiring the user A comprises the original recording; and simultaneously starting the short video analysis and calling module to play the related short video.
Case 2.2: if the Q-T is the words which can be spoken by the user A and automatically combined, screened, cut and spliced (called automatic splicing), the voice of the automatically spliced user A is played; and simultaneously starting the short video analysis and calling module to play the related short video.
Case 2.3: if Q-T is a hybrid system comprising text content (Q-T-AI) generated by the interactive unit and the in-person spoken sentence of user A comprising its possible concatenation (Q-T-A), then the following are assigned:
case 2.3.1: if the speech synthesized by the speech synthesis module is very close to the real sound of the user A, the proximity is determined by the user experience and tests, and the Q-T content is played in the first person by using the speech synthesized by the artificial intelligence system. And meanwhile, starting the short video analysis and calling module to play the related short video, which is shown in detail in the short video analysis and calling module.
Case 2.3.2: if the voice synthesized by the voice synthesis module can not be close to the real sound of the user A (the close degree is determined by the experience and the test of the user), for the original voice (Q-T-AI) which comprises the text content generated by the interaction unit and can not be spliced by the user A, the gender and the age information of the user A are obtained from the recording and analyzing unit, and the voice which is consistent with the gender of the user A and is close to the age is played by the voice synthesis module; the case 2.2 is triggered if the sentence spoken by user a includes its possible concatenation (Q-T-a). And simultaneously starting the short video analysis and calling module to play the related short video.
Case 2.4: if Q-T contains none of user A's own words and consists entirely of text content generated by the interaction unit (Q-T-AI), then:
Case 2.4.1: if the speech synthesized by the speech synthesis module is close to user A's real voice, process as in Case 2.3.1;
Case 2.4.2: if the speech synthesized by the speech synthesis module cannot come close to user A's real voice, process as in Case 2.3.2.
5.2 If user B selects the third-party mode, first play the third party, e.g., the electronic inquirer's video including the third party's voice, and, before any of user A's original voice or images needs to be played, ask user B whether to play it: 1) if user B allows playing, play the related content, as permitted by user B, according to the method in 5.1; optionally, the images of the third party and user A may share a split screen, and a transitional introduction by the third party may be added, such as "I have retrieved grandpa's original photo …"; 2) if user B does not allow user A's original video and sound to be played, the system has the third party completely rephrase the content returned by the interaction unit, such as "I consulted grandpa for you …", and plays the third party's video and sound.
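Cases 2.1-2.4 of step 5.1 boil down to choosing an audio source by precedence; a minimal sketch, with the boolean flags standing in for conditions a real system would derive from the personal database and synthesis tests:

```python
def choose_voice(has_original, can_splice, synth_close_enough):
    """Pick the audio source for text content Q-T (step 5.1)."""
    if has_original:
        return "play original recording"                 # case 2.1
    if can_splice:
        return "play auto-spliced original words"        # case 2.2
    if synth_close_enough:
        return "play AI-synthesized voice of user A"     # cases 2.3.1 / 2.4.1
    return "play generic voice matching gender and age"  # cases 2.3.2 / 2.4.2
```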
3.2 active play module
The active playing module is used for automatically playing the personal information of the user A;
the interaction method of the active play module comprises the following steps:
1) After user B opens the interaction unit, the system prompts user B, through on-screen prompts or voice, to ask a relevant question (e.g., "tell me about your personal experience"); after user B asks according to the prompt, the system automatically plays user A's related life experiences together with the related photos and videos.
2) The system actively plays blessing content on specific dates, using the personal information collected by the recording and analysis unit, including the relationship between user B and user A and the birthdays of relatives and friends, or important holidays derived from the current date: for example, the system automatically plays a blessing sentence on a relative's or friend's birthday. User B may turn this function off.
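The date trigger of the active play module can be sketched as a simple calendar check; the input dictionaries are assumed to come from the recording and analysis unit:

```python
from datetime import date

def active_play_due(today, birthdays, holidays):
    """Return blessing sentences due on `today`."""
    key = (today.month, today.day)
    messages = [f"Happy birthday, {name}!"
                for name, bday in birthdays.items()
                if (bday.month, bday.day) == key]
    if key in holidays:                      # e.g. {(1, 1): "New Year"}
        messages.append(f"Happy {holidays[key]}!")
    return messages

print(active_play_due(date(2024, 5, 20),
                      {"Li Hua": date(1990, 5, 20)},
                      {(1, 1): "New Year"}))
```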
3.3 Short video analysis and calling module
The short video analysis and calling module is used to analyze the emotion in user B's question and in the text of user A's answer, and to call the related short video of user A accordingly. The analysis and calling method of the module comprises the following steps:
S1. Based on the conversation content between user B and user A, decide and classify the emotion type of the reply, using keywords, an artificial neural network, a support vector machine, or another classification and decision method (a minimal sketch follows after these steps). The emotion decision and classification result is: whether the reply involves speaking or not, and one of several emotion categories in either case, including calm, happy, angry, sad, or surprised.
S2. If user testing determines that the video synthesis module can synthesize a sufficiently good video of user A (the synthesized content includes the mouth shape corresponding to the words to be spoken by user A's digital person and reflects the emotion decision and classification result from S1), play the artificially synthesized video of user A. If the video synthesis module cannot synthesize user A's video, or the synthesis quality is not good enough, proceed to S3.
S3. According to the emotion decision and classification result from S1, call the corresponding original short video of user A stored in the recording and analyzing unit. When replying to user B: 1) if the reply requires speaking, call an original short video of user A speaking calmly, happily, angrily, sadly, or with surprise, according to the emotion type of the reply text; 2) if the reply does not require speaking, call an original short video of user A not speaking but appearing calm, happy, angry, sad, or surprised, according to the emotion category of the reply.
S4. Based on user A's own descriptions of custom short videos, if keywords in user B's question and their related concepts match the description of a corresponding short video, or user B says they want to see other videos of user A, play those short videos.
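The following is a minimal keyword-based sketch of the emotion decision in S1. The patent also allows an artificial neural network or a support vector machine for this step; the keyword lists and function names below are illustrative assumptions, not from the source.

```python
EMOTION_KEYWORDS = {
    "happy":     ["congratulations", "great news", "wonderful"],
    "sad":       ["sorry", "miss you", "passed away"],
    "angry":     ["unfair", "furious", "outrageous"],
    "surprised": ["unbelievable", "really", "wow"],
}

def classify_reply_emotion(reply_text: str) -> tuple[bool, str]:
    """Return (speaking, emotion) for selecting or synthesizing a short video."""
    speaking = bool(reply_text.strip())    # an empty reply maps to a non-speaking video
    text = reply_text.lower()
    for emotion, words in EMOTION_KEYWORDS.items():
        if any(w in text for w in words):
            return speaking, emotion
    return speaking, "calm"                # default category
```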
4 Other auxiliary modules and material carriers
The system also comprises a login unit, a user personal privacy protection unit, and a material carrier for the whole system.
4.1 Login unit: used for logging into the system with an access code and account information. The account information comprises the account's mobile phone number or email address and a login password. To ensure information security, the account's login password and the access code must be different; a validation sketch follows below.
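The following is a minimal sketch of the rule that the access code must differ from the login password; how credentials are stored and hashed is outside the scope of this sketch, and the function name is an assumption.

```python
def validate_credentials(password: str, access_code: str) -> list[str]:
    """Return a list of problems; an empty list means the pair is acceptable."""
    problems = []
    if not password or not access_code:
        problems.append("both a password and an access code are required")
    if password == access_code:
        problems.append("the access code must differ from the login password")
    return problems
```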
4.2 User personal privacy protection unit: used to protect the personal privacy information of user A. When a user wishes to add a given user A, the user must know that user A's account (email or phone number) and access code. The access code differs from the account password (when the system detects that an access code set by a user is the same as the account password, it prompts the user to modify the access code), which increases the security of the system.
If user A is still alive, or the account is hosted by someone else: other users who want to access user A's digital self must be granted permission by user A. User A may also authorize which users may access the digital copy: if user A sees unfamiliar or incorrect entries (including names, phone numbers, and emails) in the list of other users, those entries can be filtered out, strengthening the management of personal privacy information.
If user A has died: case 1) (account hosted by others): before death, user A tells a trusted relative or friend, who may then maintain the deceased's account (for example, add a new relative or friend, or enter content that user A did not enter); case 2): user A does not want others to host the account, and can instead leave relatively permanent material carriers, such as plastic-sealed labels bearing the account number, password, and access code, to relatives and friends; case 3): a relative or friend of user A (user C) directly creates user A's account, password, and access code.
If a user A has high requirements for personal privacy protection, additional hardware or mobile phone verification can be added beyond the password, for example a U-shield or a mobile phone verification code; an ordinary user can log into their own account with just the account number and password.
In the hardware implementation of this module, a two-dimensional code for user A is provided. The code contains only the user's name and information for obtaining the system (such as a link for downloading the system's APP). It can be placed in a public place (such as on a tombstone), and user B downloads or accesses the system for interaction by scanning it. A stranger scanning user A's two-dimensional code can only learn the user's name and obtain the system, but cannot obtain more private information.
The two-dimensional code is kept separate from user A's access code and account number; user A, or the user hosting the account, decides whether to give the material carrier bearing the account number and access code to trusted relatives and friends. A code-generation sketch follows below.
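The following is a minimal sketch of generating the public two-dimensional code with the third-party Python package qrcode (installable as qrcode[pil]); the URL and file name are placeholders. Only the name and the app link are encoded, never the account number or access code, which travel on a separate carrier.

```python
import qrcode  # third-party package: pip install "qrcode[pil]"

def make_public_qr(user_name: str, app_url: str, out_path: str) -> None:
    """Encode only public information: the user's name and the app link."""
    payload = f"{user_name}\n{app_url}"
    img = qrcode.make(payload)
    img.save(out_path)

# Example (placeholder values):
# make_public_qr("User A", "https://example.com/app", "user_a_qr.png")
```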
4.3 The material carrier is used to carry the system. It can be the autobiography generated by the recording and analyzing unit, or it can be:
a) A label (plastic-sealed or ordinary), as shown in figs. 3 and 4, bearing an optional photo of user A, the two-dimensional code 2, and the account number and access code 1 (user A may also choose to hide the account number and access code); user A may give the label to trusted relatives and friends.
b) A crystal, diamond, or other relatively durable material, as shown in figs. 5 and 6; the user may give it to trusted relatives and friends.
c) A family genealogy (booklet) containing the above materials, as shown in fig. 7.
d) The two-dimensional code 2 attached to a tombstone, as shown in fig. 8.
e) A digital photo frame or a mobile phone, as shown in fig. 9.
4.4.1 Use of the material carrier, as shown in fig. 10:
User B can scan the two-dimensional code 2 on the material carrier with a mobile phone to download the APP, or open the digital photo frame, APP, or smart speaker and enter the account number and access code provided by user A; if both are correct, user B can interact with user A's digital self. If the account number or access code is wrong, or user B does not have this information, user B may ask user A or user A's relatives and friends for it. The company operating the APP can also provide this information, but 3 conditions must be met: the requester must present legal evidence of being a relative of user A; the requester must sign a privacy protection agreement and a liability statement; and the requester's access and interaction content will be recorded.
The digital photo frame can have two different designs. Design 1 uses cloud storage, as shown in fig. 11: the user's data is stored in the cloud, which is relatively cheap but offers weaker protection of the user's data privacy. Design 2 uses local storage, as shown in fig. 12: data is stored on a local hard disk, in a paper or electronic autobiography, or on other local media, which protects user data more securely but at a relatively higher cost. Optionally, the digital photo frame can be connected to a projector for a better interaction effect. Optionally, it can also be connected to a printer and an optical disc recorder; the material carriers that can then be produced include a USB flash drive, an optical disc, and a printed autobiography.
An interaction method of the human-computer interaction system for realizing digital immortality comprises the following steps (a routing sketch follows below):
Step 1: user B logs into the system and selects a module in the interaction unit with which to interact;
Step 2: if user B selects the worship and praying module, interaction proceeds according to that module's interaction method; if user B selects any other interaction module, the system checks whether user A is currently online: if user A is online, the online chat module is entered; if user A is not online, the module selected by user B is entered for interaction;
Step 3: the reply and play unit replies through the reply generation module, the active play module, the short video analysis and calling module, the voice synthesis module, or the video synthesis module.
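The following is a minimal sketch of the routing in steps 1 and 2; the module registry and the online check are assumed interfaces, and the module keys are illustrative.

```python
def route_module(selected: str, user_a_online: bool, modules: dict):
    """Pick the module to run: the worship and praying module bypasses the
    online check; otherwise an online user A redirects to the online chat."""
    if selected == "worship_praying":
        return modules["worship_praying"]
    if user_a_online:
        return modules["online_chat"]
    return modules[selected]
```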

Claims (10)

1. A human-computer interaction system for realizing digital immortality, characterized in that: the system comprises a recording and analyzing unit, an interaction unit, and a reply and play unit; the recording and analyzing unit is used for collecting, analyzing, organizing, and storing the personal information of a user A and forming a personal database; the interaction unit (except for its online chat module) is used for interacting with a user B through the reply and play unit, based on the personal database and other information, when user A is not online; the reply and play unit is used for generating and playing the related voice and video.
2. The human-computer interaction system for realizing digital immortality according to claim 1, characterized in that: the recording and analyzing unit comprises a personal information acquisition module, and the personal information acquisition module comprises a question-and-answer acquisition submodule, a social media information acquisition submodule, and a non-social-media information acquisition submodule.
3. The human-computer interaction system for realizing digital immortality according to claim 2, characterized in that: the question-and-answer acquisition submodule comprises a question-and-answer information acquisition submodule, an original voice acquisition submodule, and an original image acquisition submodule; the question-and-answer information acquisition submodule is used for collecting the personal information and original voice of user A; the original voice acquisition submodule is used for separately recording user A's original voice, mainly as individual words and sentences; the original image acquisition submodule is used for collecting original images of user A, including original images of user A speaking and not speaking under different expressions, and other original images from life and work.
4. The human-computer interaction system for realizing digital immortality according to claim 1, characterized in that: the recording and analyzing unit further comprises a personal information analysis module and an autobiography generation module; the personal information analysis module analyzes and organizes the acquired personal information to form and store a relatively complete personal database; the autobiography generation module is used for automatically generating user A's autobiography from the personal database of the personal information analysis module.
5. The human-computer interaction system for realizing digital immortality according to claim 1, characterized in that: the interaction unit comprises an information question-answering module, a comfort and assistance module, a scene chat module, a success and happiness suggestion module, a health suggestion module, a worship and praying module, and an online chat module; the information question-answering module is used for answering questions posed by user B about user A's personal information, as well as non-personal questions, according to the personal database and existing databases; the comfort and assistance module is used for simulating user A to comfort user B in language when user B needs comfort; the scene chat module is used for simulating chat interaction between user A and user B in different scenes; the success and happiness suggestion module is used for giving user B positive suggestions; the health suggestion module is used for giving user B health suggestions according to the collected causes of death, diseases, habits, and hobbies of user B and of user B's relatives and friends; the worship and praying module is used for providing user B with an electronic, intelligent worship function; the online chat module enables user B to communicate with an online user A by text, voice, or video.
6. The human-computer interaction system for realizing digital immortality according to claim 1, characterized in that: the reply and play unit comprises a reply generation module, an active play module, a short video analysis and calling module, a voice synthesis module, and a video synthesis module; the reply generation module is used for answering in any one or more of the following forms: original voice, original short video, mixed original voice and short video, and synthesized voice; the active play module is used by the system to automatically play user A's personal information; the short video analysis and calling module is used for analyzing the emotion in user B's question and in the text of user A's answer, and for calling the related short video of user A; the voice synthesis module and the video synthesis module are used for synthesizing voice and video close to user A's, from user A's original voice and original video, by artificial intelligence technology.
7. The interaction method of the human-computer interaction system for realizing digital immortality according to any one of claims 1 to 6, characterized in that the method comprises the following steps:
step 1: user B logs into the system and selects a module in the interaction unit with which to interact;
step 2: if user B selects the worship and praying module, interaction proceeds according to that module's interaction method; if user B selects any other interaction module, the system checks whether user A is currently online: if user A is online, the online chat module is entered; if user A is not online, the module selected by user B is entered for interaction;
step 3: the reply and play unit replies through the reply generation module, the active play module, the short video analysis and calling module, the voice synthesis module, or the video synthesis module.
8. The interaction method of the human-computer interaction system for realizing digital immortality according to claim 7, characterized in that: if user B selects the information question-answering module, the interaction method of the information question-answering module comprises the following steps:
S1. Read the name of user B, look the person up in user A's list of relatives and friends, and determine the relationship between user A and user B;
S2. Obtain the corresponding terms of address from that relationship;
S3. User B poses a question; the system judges whether it is a personal question or a non-personal question; if it is a personal question, go to S4, otherwise go to S5;
S4. Extract keywords related to personal information from the personal question. If no keywords are extracted, ask user B to repeat the question, rephrase it, or ask a different one; if keywords are extracted and related information can be found in the personal database, answer according to the information found; if keywords are extracted but no related information is found, the system generates synonyms, related words, and hypernyms and hyponyms of the keywords and searches the personal database again, returning the result if related information is found; if no related information can be found, prompt user B that user A did not enter this information when creating the digital person, or ask user B to change the question;
S5. Extract keywords from the non-personal question. If no keywords are extracted, ask user B to repeat the question, rephrase it, or ask a different one; if keywords are extracted, search the existing database for an answer and return the result if one is found; if no answer is found, the system generates synonyms, related words, and hypernyms and hyponyms of the keywords and searches the existing database again, returning the result if an answer is found; if still no answer is found, the system tells user B that the question is difficult and that it needs to continue learning, or asks user B to change the question or provide a possible answer for the system to learn.
9. The interaction method according to claim 7, characterized in that the reply method of the reply generation module comprises:
when user B chooses to interact with a simulated user A:
case 1: if the content returned by the interaction unit for the question is a photo, a song, or a short video of user A to be played:
case 1.1: if user A's database stores the corresponding photo, song, or short video, play it directly;
case 1.2: if user A's database has no corresponding photo, song, or short video, report that there is no result;
case 2: if the content returned by the interaction unit is text content to be played, denoted Q-T, further judge its type:
case 2.1: if Q-T is the text of user A's original voice, or original video and audio, play the original voice or footage (the personal information of user A recorded by the recording and analyzing unit while questioning user A includes these original recordings); at the same time, start the short video analysis and calling module to play the related short video;
case 2.2: if Q-T can be formed by automatically combining, screening, and splicing words and sentences spoken by user A, play the automatically spliced voice of user A; at the same time, start the short video analysis and calling module to play the related short video;
case 2.3: if Q-T is a mixture of words generated by the system (including the interaction unit) and words spoken by user A (including spliced words and sentences), distinguish the following cases:
case 2.3.1: if the voice synthesized by the voice synthesis module is very close to user A's real voice, play the Q-T content in the first person using the voice synthesized by the artificial intelligence system; at the same time, start the short video analysis and calling module to play the related short video;
case 2.3.2: if the voice synthesized by the voice synthesis module cannot approximate user A's real voice, obtain user A's gender and age from the recording and analyzing unit, and have the voice synthesis module play a voice matching user A's gender and close to user A's age; for the part that can be spliced from sentences spoken by user A, trigger case 2.2; at the same time, start the short video analysis and calling module to play the related short video;
case 2.4: if Q-T contains no words of user A and consists entirely of words generated by the system (including the interaction unit), then:
case 2.4.1: if the voice synthesized by the voice synthesis module is close to user A's real voice, process as in case 2.3.1;
case 2.4.2: if the voice synthesized by the voice synthesis module cannot approximate user A's real voice, process as in case 2.3.2;
if user B selects the third-party mode for interaction:
a video of a third party, such as an electronic inquirer, is played first, including the third party's voice; before user A's original voice or original image needs to be played, user B is asked whether to play it: 1) if user B allows playing, the related content is played according to what user B allows; 2) if user B does not allow user A's original image and voice to be played, the third party rephrases all content returned by the system (including the interaction unit), and the third party's video and voice are played.
10. The interaction method according to claim 7, characterized in that the method of using the short video analysis and calling module comprises the following steps:
S1. Based on the conversation content between user B and user A, decide and classify the emotion type of the reply; the emotion decision and classification result is: whether the reply involves speaking or not, and the emotion category in either case, including calm, happy, angry, sad, or surprised;
S2. If the video synthesis module can synthesize user A's video (the synthesized content includes the mouth shape corresponding to the words spoken by user A's digital person and the emotion decision and classification result from S1), play the artificially synthesized video of user A; if the video synthesis module cannot synthesize user A's video, or the synthesis quality is not good enough, go to S3;
S3. According to the emotion decision and classification result from S1, call the corresponding original short video of user A stored in the recording and analyzing unit. When replying to user B: 1) if the reply requires speaking, call an original short video of user A speaking calmly, happily, angrily, sadly, or with surprise, according to the emotion type of the reply text; 2) if the reply does not require speaking, call an original short video of user A not speaking but appearing calm, happy, angry, sad, or surprised, according to the emotion category of the reply;
S4. Based on user A's own descriptions of custom short videos, if keywords in user B's question and their related concepts match the description of a corresponding short video, or user B says they want to see other videos of user A, play those short videos.
CN202210702927.4A 2022-06-21 2022-06-21 Man-machine interaction system and method for realizing digital immortal Pending CN115186148A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210702927.4A CN115186148A (en) 2022-06-21 2022-06-21 Man-machine interaction system and method for realizing digital immortal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210702927.4A CN115186148A (en) 2022-06-21 2022-06-21 Man-machine interaction system and method for realizing digital immortal

Publications (1)

Publication Number Publication Date
CN115186148A true CN115186148A (en) 2022-10-14

Family

ID=83516181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210702927.4A Pending CN115186148A (en) 2022-06-21 2022-06-21 Man-machine interaction system and method for realizing digital immortal

Country Status (1)

Country Link
CN (1) CN115186148A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination