CN111627444A - Chat system based on artificial intelligence - Google Patents

Chat system based on artificial intelligence

Info

Publication number
CN111627444A
CN111627444A (application number CN202010441870.8A)
Authority
CN
China
Prior art keywords
module
voice
database
intelligent terminal
served
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010441870.8A
Other languages
Chinese (zh)
Inventor
Jiang Honghua (江洪华)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202010441870.8A
Publication of CN111627444A
Legal status: Withdrawn

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B25J11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/008: Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02: Services making use of location information
    • H04W4/029: Location-based management or tracking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Mechanical Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a chat system based on artificial intelligence. The system makes full use of mature face recognition technology, common psychological knowledge and the like, and continuously judges the real meaning of the chat user by integrating visual expression analysis, voice signal processing and the coordinated actions of an expression robot. Taking visual information as the kernel of the expression analysis and assisting it with voice tone-intensity analysis, the fusion method exploits the advantages of each approach: the output reflects both the result of the visual expression analysis and the state of the voice emotion, so the chat content can be adjusted accordingly. The chat user thus obtains a near-real chat feeling, and the realism and accuracy of the chat system are greatly improved. In addition, the intelligent terminal can automatically identify the position of the served subject, capture the user's sound information, obtain the user's voice command by further analyzing that information, and give feedback on the command, which makes the system convenient to use.

Description

Chat system based on artificial intelligence
Technical Field
The invention belongs to the technical field of chat systems, and particularly relates to a chat system based on artificial intelligence.
Background
The facial expression robot has positive significance for realizing natural human-computer interaction and reducing the emotional distance between humans and robots. Scholars at home and abroad have carried out a great deal of research in this direction. Representative expression robots include Kismet of the Massachusetts Institute of Technology Artificial Intelligence Laboratory, the SAYA robot developed at the Tokyo University of Science, K-bot and "Einstein" of the American company Hanson Robotics, Nao designed at the University of Hertfordshire in the UK, Repliee Q1, Repliee Q2 and Geminoid TMF of Osaka University, and the H&F series robots of the Harbin Institute of Technology. Unlike traditional industrial robots with fixed stations, fixed processes and fixed operation scenes, the facial expression robot has higher requirements on interactivity, intelligence and autonomy. Research on facial expression robots draws on multiple fields such as mechanical design, automatic control, computer intelligence, psychology and cognitive science, and has typical multidisciplinary characteristics.
The chat robot is a robot that recognizes the semantic meaning of text through artificial intelligence technology and reasonably selects answers so as to simulate real human chat. At present, the development of chat robots focuses on the stage of recognizing the other party's semantics and selecting a reasonable answer; related research is plentiful but limited to text communication between the two parties. Real language communication, however, is by no means limited to language itself; it also includes expressions, body movements and the like. The same words, paired with different expressions and moods, may express completely opposite meanings. For example, when a person says "Will you treat me to a meal?", pairing it with a cold expression may indicate refusal, while pairing it with a pleasantly surprised expression clearly indicates acceptance. The development trend of the chat robot is to integrate these related signals to reach a communication process as real and credible as possible, giving the chat user the illusion of chatting with a real person. With the rapid development of computer image recognition, artificial intelligence and other emerging technologies, computers play an indispensable role in human production and life. A robot is a machine device that performs work automatically. Robots are currently used in many application fields, such as production, construction or hazardous work.
In the process of executing tasks, a robot may need to recognize various targets, such as images and sounds. Different scenes impose different requirements on recognition accuracy, and various interferences and obstacles may exist in the recognition process. This requires the robot to have strong signal acquisition and processing capabilities, placing high demands on its software and hardware.
When existing service robots serve people, they cannot identify the position of the person being directly or indirectly served, which seriously affects the user's experience.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a chat system based on artificial intelligence that makes full use of mature face recognition technology, common psychological knowledge and the like, and continuously judges the real meaning of the chat user so as to adjust the chat content. The chat user thus obtains a near-real chat feeling, and the realism and accuracy of the chat robot are greatly improved.
In order to achieve the above purpose, the technical solution adopted by the invention is as follows: an intelligent system capable of automatically identifying service objects comprises an intelligent terminal and a personnel identification module:
the personnel identification module is arranged at an indoor entrance and used for identifying personnel information of personnel in the room and sending the personnel information to the intelligent terminal;
and the intelligent terminal updates a database according to the personnel information and automatically finds and identifies the position of the served subject according to the database.
In the above intelligent system capable of automatically identifying the service object, the personnel identification module comprises an identification module and a first communication module connected with the identification module; the identification module is used for collecting identification information of people entering and leaving the room and sending the identification information to the intelligent terminal through the first communication module.
In the above intelligent system capable of automatically identifying the service object, the identification module comprises facial recognition and/or voice recognition and/or pupil recognition and/or fingerprint recognition.
In the above intelligent system capable of automatically identifying the service object, the intelligent terminal comprises a processor, a voice recognition module, a memory and a second communication module, wherein the voice recognition module, the memory and the second communication module are respectively connected with the processor. The intelligent terminal establishes a wireless connection with the first communication module through the second communication module and acquires the identification information from the identification module;
the voice recognition module is used for receiving a voice instruction and recognizing the served subject according to the voice instruction;
the memory is used for storing a database;
and the processor updates the database according to the identification information and searches for the position of the served subject according to the database.
A method for automatically identifying service objects by an intelligent system comprises the following steps:
step A, identifying the personnel information of persons located indoors, and sending the personnel information to an intelligent terminal;
and B, updating a database according to the personnel information, and automatically finding and identifying the position of the served subject according to the database.
In the above method for automatically identifying the service object by the intelligent system, the step A comprises a step A1 of collecting identification information for people entering and exiting the room.
In the method for automatically identifying the service object by the intelligent system, the step B includes a step B1 of identifying the served subject according to the voice command.
In the method for automatically identifying a service object by an intelligent system, the step B of automatically finding and identifying a location of a served subject according to the database includes:
step B21, the intelligent terminal judges whether the served subject is the subject that sent the voice command;
step B22, if yes, the intelligent terminal reads the database to determine which persons are in the room, and obtains the direction of the voice command so as to identify the position of the served subject;
step B23, if not, the intelligent terminal matches the served subject against the database;
step B24, if the matching succeeds, the intelligent terminal identifies the position of the served subject;
step B25, if the matching fails or identifying the position of the served subject fails, the intelligent terminal inquires about the position of the served subject.
A chat system based on artificial intelligence, comprising the above intelligent system capable of automatically identifying service objects, further comprising:
the control module is used for receiving information sent by the other modules and forwarding the received information to the other modules for processing as required;
the image acquisition module is electrically connected with the input end of the control module and is used for periodically collecting the facial features of the chat user, converting them into digital images and storing the digital images in a database;
the camera module is electrically connected with the input end of the control module and is used for recognizing the facial features of the human face under different expressions as well as the head, shoulder and hand movements of the chat user;
the voice output module is electrically connected with the output end of the control module and is used for playing the corresponding voice;
and the display module is electrically connected with the output end of the control module and is used for displaying the corresponding expression according to the facial features of the chat user.
The chat system based on artificial intelligence further comprises:
the storage module is connected with the control module and used for storing the opening words that start the chat system;
the voice receiving module is connected with the storage module and the control module and is used for detecting voice signals;
and the judging module is connected with the control module and used for judging whether the detected keywords are matched with the opening words in the storage module.
The invention has the advantages that: the chat system based on artificial intelligence makes full use of mature face recognition technology, common psychological knowledge and the like, and continuously judges the real meaning of the chat user, integrating visual expression analysis, voice signal processing and the coordinated actions of the expression robot, with visual information as the kernel of the expression analysis and voice tone-intensity analysis as the auxiliary cue of the fusion method. The system can identify the information of persons located indoors and send that information to the intelligent terminal; the intelligent terminal finds the position of the served subject according to the person information and can identify that position automatically, so human-computer interaction is strong and more humanized. The system can capture the user's sound information, obtain the user's voice command by further analyzing it, and give good feedback on that command, which is convenient for the user.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art.
The invention provides an intelligent system capable of automatically identifying a service object, which comprises an intelligent terminal and a personnel identification module:
the personnel identification module is arranged at an indoor entrance and used for identifying personnel information of personnel in the room and sending the personnel information to the intelligent terminal;
and the intelligent terminal updates a database according to the personnel information and automatically finds and identifies the position of the served subject according to the database.
The personnel identification module arranged at the indoor entrance is used for collecting personnel information of persons entering and exiting the room and sending the collected personnel information to the intelligent terminal; the intelligent terminal updates the database according to the personnel information so as to automatically find, identify and locate the position of the served subject according to the database. The served subject is the subject served by the intelligent terminal.
Further, in a preferred embodiment of the intelligent system capable of automatically identifying the service object, identification information is collected for people who enter or exit the room, and the identification information is sent to the intelligent terminal through the first communication module.
Preferably, the first communication module and the second communication module are Wi-Fi modules, but may also be Bluetooth modules. The first communication module is connected with the identification module; the identification module is used for collecting identification information of persons entering and exiting the room and sending the collected identification information to the intelligent terminal through the first communication module.
Further, in a preferred embodiment of the intelligent system capable of automatically identifying the service object, the identification module includes facial recognition and/or voice recognition and/or pupil recognition and/or fingerprint recognition.
Further, in a preferred embodiment of the intelligent system capable of automatically identifying the service object according to the present invention, the intelligent terminal includes a processor, a voice recognition module, a memory, and a second communication module, and the voice recognition module, the memory, and the second communication module are respectively connected to the processor: the intelligent terminal establishes wireless connection with the first communication module through the second communication module and acquires the identification information from the identification module;
the voice recognition module is used for receiving a voice instruction and recognizing the served subject according to the voice instruction;
the memory is used for storing a database;
and the processor updates the database according to the identification information and searches for the position of the served subject according to the database.
The processor searches for the position of the served subject according to the database, which specifically comprises: the processor judges whether the served subject is the subject that issued the voice instruction; if yes, the processor reads the database to determine which persons are in the room and obtains the direction of the voice instruction, so as to identify the position of the served subject; if not, the processor matches the served subject against the database; if the matching succeeds, the processor identifies the position of the served subject; if the matching fails or identifying the position of the served subject fails, the intelligent terminal inquires about the position of the served subject.
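This lookup flow can be made concrete with a minimal sketch, assuming a simple in-memory database keyed by a unique name; PersonRecord, locate_served_subject and all field names are illustrative assumptions, not identifiers from the specification.

```python
# Minimal sketch of the served-subject lookup flow; all names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonRecord:
    name: str                       # unique identification information
    indoors: bool                   # maintained by the entrance identification module
    location: Optional[str] = None  # last known position, if any

def locate_served_subject(db: dict[str, PersonRecord],
                          subject: str,
                          speaker: str,
                          voice_direction_deg: Optional[float]) -> Optional[str]:
    """Return the served subject's position, or None so the terminal asks the user."""
    record = db.get(subject)
    if subject == speaker:
        # The served subject issued the voice instruction: confirm from the
        # database that the person is indoors, then take the instruction's
        # arrival direction as the position.
        if record is not None and record.indoors and voice_direction_deg is not None:
            return f"bearing {voice_direction_deg:.0f} degrees"
        return None  # localization failed; the terminal must inquire
    # Otherwise match the served subject against the database.
    if record is not None and record.indoors and record.location is not None:
        return record.location
    return None  # matching failed; the terminal must inquire
```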
The processor obtains the direction of the voice instruction through the voice recognition module, which acquires the direction through sound acquisition and sound processing. The voice recognition module collects sound data on at least four channels through a sound collector to locate the direction of the voice instruction. Specifically, the sound collector comprises 4 omnidirectional microphones mounted around the top of the intelligent terminal; each channel samples 2048 data points per frame at a sampling frequency of 10000 Hz.
Since a voice instruction reaches each microphone at a slightly different time, its position is calculated from the differences in arrival time at the microphones. The voice recognition module divides the surrounding space into 8 sound-source sectors, determines which sector the voice instruction comes from according to the arrival times, and calculates the direction of the voice instruction from the time differences.
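As an illustration of this time-difference localization, the sketch below estimates inter-channel delay from the cross-correlation peak and maps one 4-channel frame to one of 8 sectors. The quadrant-splitting rule is an assumption consistent with four microphones at 90-degree spacing; the specification does not fix the exact algorithm.

```python
# Hedged sketch of TDOA-based sector estimation for 4 omnidirectional
# microphones sampled at 10000 Hz, 2048 samples per channel per frame.
import numpy as np

FS = 10_000   # sampling frequency (Hz), from the description
FRAME = 2048  # samples per channel per frame, from the description

def tdoa(ref: np.ndarray, sig: np.ndarray) -> float:
    """Delay of sig relative to ref, in seconds, from the cross-correlation peak."""
    corr = np.correlate(sig, ref, mode="full")
    return (int(np.argmax(corr)) - (len(ref) - 1)) / FS

def voice_sector(frames: np.ndarray) -> int:
    """Map a (4, FRAME) block of samples to one of 8 sound-source sectors (0..7).

    Assumes the microphones sit at 0, 90, 180 and 270 degrees around the
    terminal's top: the first-arrival microphone fixes a 90-degree quadrant,
    and comparing its two neighbours picks the 45-degree half-sector.
    """
    delays = np.array([tdoa(frames[0], frames[i]) for i in range(4)])
    first = int(np.argmin(delays))                 # microphone reached first
    nxt, prev = (first + 1) % 4, (first - 1) % 4   # its two neighbours
    half = 1 if delays[nxt] < delays[prev] else 0  # lean toward earlier neighbour
    return (2 * first + half) % 8
```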
The intelligent terminal can also comprise a first person identification module, which is connected with the processor and used for acquiring the face information of the served subject.
When the information of a member in the database needs to be changed, the user triggers the information acquisition function of the intelligent terminal; the intelligent terminal acquires first-person information through the first person identification module, and the processor matches the first-person information with the person information stored in the database. If the first-person information is successfully matched with the stored person information, the user enters unique identification information, such as a name, corresponding to the first-person information, and the processor stores the first-person information and the unique identification information in the database. If the matching fails, the intelligent terminal asks the user whether to change the person information; if the person information is to be changed, the processor changes the database; if not, the process ends. The database includes the names of the served persons and their corresponding personal information, voice information and current location information.
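A compact sketch of this update flow is given below. The matching backend and the two user prompts are passed in as callbacks because the specification does not name them, so faces_match, ask_unique_name and confirm_change are assumptions.

```python
# Illustrative sketch of the member-update flow; callback names are assumptions.
from typing import Callable, Optional

def update_member(db: dict[str, bytes],
                  captured_face: bytes,
                  faces_match: Callable[[bytes, bytes], bool],
                  ask_unique_name: Callable[[], str],
                  confirm_change: Callable[[], bool]) -> None:
    """Store or update one person's face information keyed by a unique name."""
    matched: Optional[str] = next(
        (name for name, face in db.items() if faces_match(captured_face, face)),
        None,
    )
    if matched is not None:
        # Match succeeded: the user enters unique identification information
        # (e.g. a name) and the record is stored under it.
        db[ask_unique_name()] = captured_face
    elif confirm_change():
        # Match failed and the user confirmed changing the person information.
        db[ask_unique_name()] = captured_face
    # Otherwise the process ends and the database is left unchanged.
```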
A method for automatically identifying service objects by an intelligent system comprises the following steps:
step A, identifying the personnel information of persons located indoors, and sending the personnel information to an intelligent terminal;
and B, updating a database according to the personnel information, and automatically finding and identifying the position of the served subject according to the database.
Further, in a preferred embodiment of the method for automatically identifying a service object by an intelligent system according to the present invention, the step A includes a step A1 of collecting identification information for people entering and exiting the room.
Further, in a preferred embodiment of the method for automatically identifying a service object by an intelligent system according to the present invention, the step B includes a step B1 of identifying the serviced subject according to a voice command.
Further, in a preferred embodiment of the method for automatically identifying a service object by an intelligent system of the present invention, the step B of automatically finding and identifying a location of a served subject according to the database includes:
step B21, the intelligent terminal judges whether the served subject is the subject that sent the voice command;
step B22, if yes, the intelligent terminal reads the database to determine which persons are in the room, and obtains the direction of the voice command so as to identify the position of the served subject;
step B23, if not, the intelligent terminal matches the served subject against the database;
step B24, if the matching succeeds, the intelligent terminal identifies the position of the served subject;
step B25, if the matching fails or identifying the position of the served subject fails, the intelligent terminal inquires about the position of the served subject.
An artificial intelligence based chat system, comprising the above intelligent system capable of automatically identifying service objects, further comprising:
the control module is used for receiving information sent by the other modules and forwarding the received information to the other modules for processing as required;
the image acquisition module is electrically connected with the input end of the control module and is used for periodically collecting the facial features of the chat user, converting them into digital images and storing the digital images in a database;
the camera module is electrically connected with the input end of the control module and is used for recognizing the facial features of the human face under different expressions as well as the head, shoulder and hand movements of the chat user;
the voice output module is electrically connected with the output end of the control module and is used for playing the corresponding voice;
and the display module is electrically connected with the output end of the control module and is used for displaying the corresponding expression according to the facial features of the chat user.
Further, in a preferred embodiment of the chat system based on artificial intelligence of the present invention, the chat system further includes:
the storage module is connected with the control module and used for storing the opening words that start the chat system;
the voice receiving module is connected with the storage module and the control module and is used for detecting voice signals;
and the judging module is connected with the control module and used for judging whether the detected keywords are matched with the opening words in the storage module.
The chat system judgment method based on artificial intelligence comprises the following steps:
step A, establishing a facial expression database, and storing facial features of a human face under different expressions;
B, periodically collecting digital images of the chat user through an image collecting device;
step C, identifying the five sense organs of the chat user from the digital image, and extracting the facial features of the chat user;
and step D, obtaining the expression corresponding to the positional relationship of the facial features from the facial expression database, and taking it as the expression of the chat user.
A recognition training sample library for expression analysis is selected according to different application requirements. There are two alternatives. One is to select an existing expression library, such as the JAFFE library provided by the Japanese ATR Media Information Science Laboratory, which provides 7 basic expressions of 10 young Japanese women and, given that the invention targets Chinese facial features, is better suited to facial analysis and recognition of female operators. The other is to build a dedicated expression library for a specific purpose, using designated operators under the same operation scene according to specific requirements. After the expression library is selected, it is classified and stored in a designated directory according to expression categories, and feature vectors are extracted from the expression images of known categories. In the invention, the expressions in the library are divided into several expression groups of different classes, and feature vectors extracted from the expression images in each group are used as training samples; the feature vectors form an expression feature space, and the face image to be recognized is identified based on this feature space.
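The specification does not fix the feature-extraction method. As one plausible reading, the sketch below builds the expression feature space with principal component analysis (eigen-expressions) and classifies an unknown face by nearest neighbour in that space; all function names are illustrative.

```python
# Hedged sketch: PCA feature space over expression images with
# nearest-neighbour classification, one possible realization of the
# "expression feature space" described above.
import numpy as np

def build_feature_space(train: np.ndarray, k: int = 20):
    """train: (n_images, n_pixels) flattened images of known expression classes.
    Returns the mean image and the top-k principal axes (the feature space)."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]

def project(img: np.ndarray, mean: np.ndarray, axes: np.ndarray) -> np.ndarray:
    """Feature vector of one flattened image in the expression feature space."""
    return axes @ (img - mean)

def classify(img, mean, axes, train_feats, train_labels) -> str:
    """Label the unknown face with the class of its nearest training sample."""
    f = project(img, mean, axes)
    nearest = int(np.argmin(np.linalg.norm(train_feats - f, axis=1)))
    return train_labels[nearest]
```

Here train_feats would hold the projections of all training images and train_labels their expression-group names.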
Voice information processing is divided into an input part and an output part. During voice input, DTW (dynamic time warping) analysis is performed between the voice input and an input-voice reference model, and different tone-intensity quantization indexes are produced according to the input and its tone intensity, so that the vocal expression is recognized. In this embodiment, three tone-intensity quantization levels, excitement, pleasure and negation, are defined for the speaker's state. In the multitask coordination module, the tone-intensity quantization index and the basic emotion obtained by visual analysis are linearly fused; the value of the resulting composite expression instruction is the weighted sum of the vocal-expression value and the emotional-expression value. This embodiment can obtain 21 kinds of composite voice output instructions. Corresponding voice stream data are selected from the voice output library according to the composite voice output instruction and a predefined output-voice reference model, so that the voice output fed back to the user reflects both the result of the visual emotional-expression analysis and the state of the auditory vocal expression.
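The fusion arithmetic can be made concrete. Assuming 7 basic visual expressions and the 3 tone-intensity levels above (7 x 3 = 21 composite instructions), the sketch below forms the composite instruction id and its weighted-sum score; the weight values are illustrative, not from the specification.

```python
# Minimal sketch of the linear fusion step; the weights are assumed values.
EXPRESSIONS = ["blank", "happy", "sad", "anxious", "angry", "disgusted", "surprised"]
TONES = ["excitement", "pleasure", "negation"]

def composite_instruction(expr_idx: int, expr_score: float,
                          tone_idx: int, tone_score: float,
                          w_visual: float = 0.7, w_tone: float = 0.3):
    """Return one of the 21 composite voice-output instruction ids plus its
    fused score, the weighted sum of the two modality scores (vision is the
    kernel of the analysis, so it carries the larger weight)."""
    instruction_id = expr_idx * len(TONES) + tone_idx  # 0..20
    score = w_visual * expr_score + w_tone * tone_score
    return instruction_id, score

# Example: a "happy" face (index 1) spoken with "excitement" (index 0)
# maps to instruction 3, which is then looked up in the voice output library.
```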
Preferably, in the step A, the facial features of the face under different expressions are stored according to the following columns: race, sex, age.
Preferably, facial features of the following expressions of the human face are stored in step A: blankness, happiness, sadness, anxiety, anger, worry, disgust, surprise, contempt.
Preferably, the facial features include: eyebrow direction, distance between two eyebrows, eye size, pupil size, angle of two mouth corners, mouth size, shape of upper and lower lips, and diameter of nostril.
Facial features under different expressions mainly cover common expressions such as blankness, happiness, sadness, anxiety, anger, worry, disgust, surprise and contempt. Each expression has corresponding facial features. For example, sadness: the eyes narrow, the eyebrows tighten, the corners of the mouth are pulled down, and the chin lifts or tightens. Fear: the mouth and eyes are open, the eyebrows are raised, and the nostrils are flared. Anger: the eyebrows sag, the forehead is wrinkled, and the eyelids and lips are tense. Disgust: the nose wrinkles, the upper lip lifts, the eyebrows droop, and the eyes narrow. Surprise: the jaw drops, the lips and mouth relax, the eyes widen, and the eyelids and eyebrows are slightly raised. Contempt: its characteristic feature is one corner of the mouth lifting in a sneer or smirk. The current mood of the chat user can be judged by capturing changes in these facial features. The features are classified by race, sex and age, because people of different races, sexes and ages certainly differ in their facial features when expressing emotion. For example, many Caucasians show more exaggerated facial expressions with larger movements than Asians do. In addition, expressive and unexpressive faces can be distinguished: for an expressive person, a slight local change is enough to capture the real psychology, as when unhappiness shows plainly on the face. Based on historical chat records, the robot can classify chat users to a certain extent as expressive or unexpressive, and adopt different judgment strategies accordingly to improve the judgment accuracy.
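One way to encode the cue table above is as a simple mapping from expressions to observable features, matched by overlap; the feature names mirror the paragraph, while the scoring rule is an illustrative assumption.

```python
# Illustrative encoding of the expression cue table; matching by cue overlap.
FACIAL_CUES = {
    "sadness":  {"eyes": "narrowed", "brows": "tightened", "mouth_corners": "down"},
    "fear":     {"mouth": "open", "eyes": "open", "brows": "raised", "nostrils": "flared"},
    "anger":    {"brows": "sagging", "forehead": "wrinkled", "lids_lips": "tense"},
    "disgust":  {"nose": "wrinkled", "upper_lip": "raised", "brows": "drooping", "eyes": "narrowed"},
    "surprise": {"jaw": "dropped", "mouth": "relaxed", "eyes": "widened", "brows": "slightly raised"},
    "contempt": {"mouth_corner": "raised on one side"},
}

def match_expression(observed: dict[str, str]) -> str:
    """Pick the expression whose cues overlap the observed features the most."""
    def overlap(cues: dict[str, str]) -> int:
        return sum(observed.get(k) == v for k, v in cues.items())
    return max(FACIAL_CUES, key=lambda name: overlap(FACIAL_CUES[name]))
```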
Preferably, the step C further comprises identifying and acquiring the head movements, shoulder movements and hand movements of the chat user from the digital image. Shoulder movements include common actions such as shrugging; head movements include common actions such as nodding and shaking the head; hand movements include swinging, waving, pressing and lifting.
Preferably, step D is preceded by:
step D1, the voice receiving module continuously monitors the sound source and detects whether a voice signal is present;
step D2, extracting keywords in the detected voice signal;
step D3, matching the keyword with the opening words stored in the storage module;
and D4, if the matching is successful, sending a wake-up instruction, and enabling the chat robot to enter an open state.
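Steps D1 to D4 reduce to a simple membership test once keywords have been extracted from the voice signal. The sketch below assumes the keyword extraction is already done by the voice receiving module and shows only the matching and wake-up decision; the opening words are illustrative values.

```python
# Minimal sketch of steps D3/D4; keyword extraction is assumed to be
# provided by the voice receiving module and is not defined here.
from typing import Iterable

def try_wake(keywords: Iterable[str], opening_words: set[str]) -> bool:
    """True if any detected keyword matches a stored opening word (step D3),
    in which case a wake-up instruction should be sent (step D4)."""
    return any(word in opening_words for word in keywords)

# Usage with illustrative opening words held in the storage module:
opening_words = {"hello robot", "good morning"}
if try_wake(["hello robot"], opening_words):
    print("wake-up instruction sent; the chat robot enters the open state")
```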
Before inputting the voice signal, the user needs to initialize the robot. The user can say "store data" to the robot, whereupon the intelligent device enters a storage interface for establishing voice recognition and prompts the user to begin voice entry of the opening words.
The judgment module of the chat robot is connected with the control module through a serial port. The user inputs a voice signal through the voice receiving module; the keywords in the voice signal are matched against the opening words stored in the storage module, and if a match exists, the chat robot enters the standby state.
Preferably, the method further comprises the following steps:
step E, obtaining the chat topic from the chat content, and judging the chat user's real attitude toward the topic from the current language and expression. For example, when the chat robot states a certain event and the chat user replies "No way?", the robot can judge from the emotion: if the chat user shows surprise, he may have accepted the fact; if the chat user shows contempt, he does not accept the fact;
step F, adjusting the chat strategy according to the chat user's real attitude toward the current topic: changing the chat topic, correcting the robot's view of the topic, or adjusting the robot's language style. After all, the chat robot is only a tool for keeping people company in chat, not a real person; it does not need to dispute the chat content with the chat user, and by leaning toward the chat user's views and positions it lets the chat user more easily obtain a pleasant chat experience.
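Steps E and F admit a small decision table. The sketch below is one illustrative reading of the examples in the text (surprise read as acceptance, contempt as rejection); the specification does not prescribe these exact mappings.

```python
# Hedged sketch of steps E and F; the attitude labels and the mapping to the
# three adjustments are illustrative assumptions drawn from the examples.
def attitude_from_reply(emotion: str) -> str:
    """Step E: infer the real attitude toward the topic from the reply emotion."""
    if emotion == "surprise":
        return "accepts"       # e.g. "No way?" said with surprise
    if emotion == "contempt":
        return "rejects"       # said with contempt or disdain
    return "neutral"

def adjust_strategy(attitude: str) -> str:
    """Step F: choose one of the three adjustments named above."""
    if attitude == "rejects":
        # Lean toward the chat user's view rather than disputing the content.
        return "correct the robot's view of the topic"
    if attitude == "neutral":
        return "change the chat topic"
    return "keep the topic and adjust the language style"
```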
The chat system based on artificial intelligence makes full use of mature face recognition technology, common psychological knowledge and the like, and continuously judges the real meaning of the chat user, integrating visual expression analysis, voice signal processing and the coordinated actions of the expression robot, with visual information as the kernel of the expression analysis and voice tone-intensity analysis as the auxiliary cue of the fusion method. The system can identify the information of persons located indoors and send that information to the intelligent terminal; the intelligent terminal finds the position of the served subject according to the person information and can identify that position automatically, so human-computer interaction is strong and more humanized. The system can capture the user's sound information, obtain the user's voice command by further analyzing it, and give good feedback on that command, which is convenient for the user.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (10)

1. The intelligent system capable of automatically identifying the service object is characterized by comprising an intelligent terminal and a personnel identification module:
the personnel identification module is arranged at an indoor entrance and used for identifying personnel information of personnel in the room and sending the personnel information to the intelligent terminal;
and the intelligent terminal updates a database according to the personnel information and automatically finds and identifies the position of the served subject according to the database.
2. The intelligent system capable of automatically identifying the service object as claimed in claim 1, wherein the personnel identification module comprises an identification module and a first communication module connected with the identification module, the identification module is configured to collect identification information of personnel entering and exiting the room, and send the identification information to the intelligent terminal through the first communication module.
3. An intelligent system capable of automatically identifying a service object as claimed in claim 2, wherein the identification module comprises facial recognition and/or voice recognition and/or pupil recognition and/or fingerprint recognition.
4. The intelligent system capable of automatically identifying the service object as claimed in claim 2, wherein the intelligent terminal comprises a processor, a voice recognition module, a memory and a second communication module, the voice recognition module, the memory and the second communication module are respectively connected with the processor: the intelligent terminal establishes wireless connection with the first communication module through the second communication module and acquires the identification information from the identification module;
the voice recognition module is used for receiving a voice instruction and recognizing the served subject according to the voice instruction;
the memory is used for storing a database;
and the processor updates the database according to the identification information and searches for the position of the served subject according to the database.
5. A method for automatically identifying service objects by an intelligent system is characterized by comprising the following steps:
step A, identifying the personnel information of persons located indoors, and sending the personnel information to an intelligent terminal;
and B, updating a database according to the personnel information, and automatically finding and identifying the position of the served subject according to the database.
6. The method for an intelligent system to automatically identify service objects as claimed in claim 5, wherein the step A comprises a step A1 of collecting identification information for personnel entering and exiting the room.
7. The method for automatically identifying the service object by the intelligent system as claimed in claim 5, wherein the step B comprises a step B1 of identifying the serviced subject according to the voice command.
8. The method of claim 5, wherein the step B of automatically finding and identifying the location of the served object according to the database comprises:
step B21, the intelligent terminal judges whether the served subject is the subject that sent the voice command;
step B22, if yes, the intelligent terminal reads the database to determine which persons are in the room, and obtains the direction of the voice command so as to identify the position of the served subject;
step B23, if not, the intelligent terminal matches the served subject against the database;
step B24, if the matching succeeds, the intelligent terminal identifies the position of the served subject;
step B25, if the matching fails or identifying the position of the served subject fails, the intelligent terminal inquires about the position of the served subject.
9. An artificial intelligence based chat system, comprising the intelligent system capable of automatically identifying service objects according to any one of claims 1 to 4, and further comprising:
the control module is used for receiving information sent by the other modules and forwarding the received information to the other modules for processing as required;
the image acquisition module is electrically connected with the input end of the control module and is used for periodically collecting the facial features of the chat user, converting them into digital images and storing the digital images in a database;
the camera module is electrically connected with the input end of the control module and is used for recognizing the facial features of the human face under different expressions as well as the head, shoulder and hand movements of the chat user;
the voice output module is electrically connected with the output end of the control module and is used for playing the corresponding voice;
and the display module is electrically connected with the output end of the control module and is used for displaying the corresponding expression according to the facial features of the chat user.
10. An artificial intelligence based chat system according to claim 9, further comprising:
the storage module is connected with the control module and used for storing the opening words that start the chat system;
the voice receiving module is connected with the storage module and the control module and is used for detecting voice signals;
and the judging module is connected with the control module and used for judging whether the detected keywords are matched with the opening words in the storage module.
CN202010441870.8A 2020-05-22 2020-05-22 Chat system based on artificial intelligence Withdrawn CN111627444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010441870.8A CN111627444A (en) 2020-05-22 2020-05-22 Chat system based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010441870.8A CN111627444A (en) 2020-05-22 2020-05-22 Chat system based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN111627444A true CN111627444A (en) 2020-09-04

Family

ID=72271065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010441870.8A Withdrawn CN111627444A (en) 2020-05-22 2020-05-22 Chat system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN111627444A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115641837A (en) * 2022-12-22 2023-01-24 北京资采信息技术有限公司 Intelligent robot conversation intention recognition method and system

Similar Documents

Publication Publication Date Title
CN108009490A (en) A kind of determination methods of chat robots system based on identification mood and the system
US9031293B2 (en) Multi-modal sensor based emotion recognition and emotional interface
CN109176535B (en) Interaction method and system based on intelligent robot
CN106294774A (en) User individual data processing method based on dialogue service and device
CN107765852A (en) Multi-modal interaction processing method and system based on visual human
Liu et al. A multimodal emotional communication based humans-robots interaction system
CN108363706A (en) The method and apparatus of human-computer dialogue interaction, the device interacted for human-computer dialogue
CN102298694A (en) Man-machine interaction identification system applied to remote information service
Niewiadomski et al. Automated laughter detection from full-body movements
KR20100001928A (en) Service apparatus and method based on emotional recognition
CN105046238A (en) Facial expression robot multi-channel information emotion expression mapping method
CN106097835B (en) Deaf-mute communication intelligent auxiliary system and communication method
CN111144359B (en) Exhibit evaluation device and method and exhibit pushing method
Vu et al. Emotion recognition based on human gesture and speech information using RT middleware
CN114399818A (en) Multi-mode face emotion recognition method and device
CN106791565A (en) Robot video calling control method, device and terminal
CN111368053A (en) Mood pacifying system based on legal consultation robot
CN111696559A (en) Providing emotion management assistance
CN109542389B (en) Sound effect control method and system for multi-mode story content output
CN108052250A (en) Virtual idol deductive data processing method and system based on multi-modal interaction
CN112232127A (en) Intelligent speech training system and method
Jazouli et al. Automatic detection of stereotyped movements in autistic children using the Kinect sensor
CN111384778B (en) Intelligent operation and maintenance system for power distribution network equipment
CN111627444A (en) Chat system based on artificial intelligence
CN114254096A (en) Multi-mode emotion prediction method and system based on interactive robot conversation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20200904)