CN113852849A - Intelligent hotel room management method

Info

Publication number
CN113852849A
Authority
CN
China
Prior art keywords
voice, instruction, information, hotel room, management method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111127248.0A
Other languages
Chinese (zh)
Inventor
郑学刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Yichuangshi Technology Co ltd
Original Assignee
Sichuan Yichuangshi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Yichuangshi Technology Co ltd
Priority to CN202111127248.0A
Publication of CN113852849A
Legal status: Pending (Current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225 Feedback of the input speech

Abstract

The invention discloses an intelligent hotel room management method, which relates to the technical field of hotel management. With this method, customers who check into a hotel room enjoy a better experience: they can communicate with an intelligent robot by voice throughout their stay to control the system, the applications within the system, and the smart devices in the room. The original menu-navigation interaction mode and a fully intelligent robot mode are both supported, and the user can switch freely between them by issuing an instruction, which is friendly to users of all age groups and reduces the learning cost. An innovative approximate-sound analysis method and instruction analysis method help the intelligent robot better understand the user's intention, so that the user's speech is recognized accurately and the intent is identified correctly. The intelligent robot responds to user needs promptly in place of a hotel attendant, which improves hotel service efficiency and saves labor cost.

Description

Intelligent hotel room management method
Technical Field
The invention relates to the technical field of hotel management, in particular to an intelligent hotel room management method.
Background
Most smart hotels on the existing market achieve intelligent control of hotel equipment by installing smart devices and combine this with voice control to provide a simple smart-hotel stay experience. Consumers' expectations of the hotel experience keep rising, and simple intelligent control can no longer satisfy their demands for comfort and convenience. A large number of hotels have therefore begun to use smart speakers, smart devices and multi-channel content delivery to meet these demands. However, this requires the hotel to introduce more equipment, which increases cost, and in the end the cost can only be passed on to the consumer through higher room prices.
Disclosure of Invention
The invention provides an intelligent hotel room management method in which the microphone array of a television remote controller collects the user's voice, instructions are issued through an intelligent voice library, and a virtual robot ultimately completes the service.
The technical scheme adopted by the invention is as follows:
the invention provides an intelligent hotel room management method, which comprises the following steps: when a customer enters a hotel room, the television terminal automatically senses and starts, and the television screen displays the virtual animation robot to interact with the customer in a voice mode; the voice interaction comprises at least the following steps:
s1, the customer inputs the voice information to the voice instruction library through the microphone array of the remote controller;
s2, the voice instruction library analyzes the voice information into a text character string through a voice analysis module;
s3, judging the type of the text character string by the voice instruction library;
s4, if the type of the text character string is a question-answer type, the virtual animation robot obtains a response information strip from a question-answer library in a matching way and feeds the response information strip back to a client in a voice mode; if the type of the text character string is an instruction type, the virtual animation robot obtains an instruction information strip from an instruction library in a matching mode;
s5, if the instruction information strip is an end instruction, the virtual animation robot ends the current voice interaction without entering a sub-process; if the instruction information strip is an equipment control sub-process calling instruction, the virtual animation robot calls a corresponding equipment control sub-process, and finishes the current voice interaction after instruction control is carried out on corresponding equipment; and if the instruction information strip is an application software sub-flow calling instruction, the virtual animation robot calls a corresponding application software sub-flow and opens the corresponding application software, and after the execution of the internal flow of the application software is finished, the virtual animation robot finishes the current voice interaction.
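The following is a minimal, illustrative sketch of how steps S1 to S5 could be dispatched in code. It is not the patented implementation: the library contents, the function names and the stubbed speech-to-text step are all assumptions made for the example.

```python
# Illustrative dispatch of steps S1-S5 (hypothetical names and toy data, not the patented code).

QA_LIBRARY = {"what time is breakfast": "Breakfast is served from 7:00 to 10:00."}
INSTRUCTION_LIBRARY = {
    "turn on the light": ("device", "light_on"),
    "open the mall":     ("app", "mall"),
    "goodbye":           ("end", None),
}

def parse_speech(audio_bytes):
    """S2: stand-in for the speech parsing module; a real system would call an ASR engine."""
    return audio_bytes.decode("utf-8")          # pretend the audio already contains the text

def classify(text):
    """S3: decide whether the text string is of the question-answer or the instruction type."""
    return "instruction" if text in INSTRUCTION_LIBRARY else "qa"

def handle_voice(audio_bytes):
    text = parse_speech(audio_bytes)            # S1/S2: microphone input -> text string
    if classify(text) == "qa":                  # S4: question-answer type -> response entry
        reply = QA_LIBRARY.get(text, "Sorry, I did not understand that.")
        return f"[robot says] {reply}"
    kind, target = INSTRUCTION_LIBRARY[text]    # S4: instruction type -> instruction entry
    if kind == "end":                           # S5: end instruction, no sub-process
        return "[robot ends the interaction]"
    if kind == "device":                        # S5: device-control sub-process
        return f"[robot controls device: {target}] interaction finished"
    return f"[robot opens application: {target}] interaction finished"  # S5: app sub-process

print(handle_voice(b"turn on the light"))
```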
In a preferred embodiment of the present invention, after the customer enters the hotel room and inserts the room card into the room card slot, the television terminal automatically identifies the check-in information of the customer.
In a preferred embodiment of the present invention, the voice instruction library has a Mandarin accent-recognition function to ensure that the pinyin of the text string is correct.
In a preferred embodiment of the present invention, the speech parsing module has word-slot groups obtained by grouping according to part of speech, and the word-slot lexicon under each group stores both Chinese text strings and their full pinyin spellings.
In a preferred embodiment of the present invention, during speech parsing, the voice information obtained by approximate-sound recognition is compared against the word-slot lexicons of the different part-of-speech groups in the order verb -> scene mode -> noun, and each comparison is performed twice, first against the Chinese text string and then against the full pinyin spelling.
In a preferred embodiment of the present invention, when the acquired voice information requires the customer to specify multiple parameters, the virtual animated robot pushes the devices and parameter information supported by the system to the customer as dialogue information.
In a preferred embodiment of the present invention, the television terminal automatically remembers the customer's context within the same conversation session with the virtual animated robot, extracts the instruction's operation object, operation parameters and operation behavior from that context, and combines them to obtain the final complete execution instruction.
In a preferred embodiment of the present invention, the execution state of every instruction extracted from the user's voice information is recorded, and supplementary learning of additional basic word slots is performed through maintenance by the system's user feedback module and knowledge-base management module.
In a preferred embodiment of the present invention, while the corresponding device executes an instruction or the corresponding application software executes its internal process, the virtual animated robot feeds back every step of the execution process to the customer by voice.
Compared with the prior art, the invention has the following beneficial effects: the customer can communicate with the intelligent robot by voice throughout the stay to control the system, the applications within the system, and the smart devices in the room; the original menu-navigation interaction mode and a fully intelligent robot mode are both supported, and the user can switch freely between them by issuing an instruction, which is friendly to users of all age groups and reduces the learning cost; an innovative approximate-sound analysis method and instruction analysis method help the intelligent robot better understand the user's intention, so that the user's speech is recognized accurately and the intent is identified correctly; and the intelligent robot responds to user needs promptly in place of a hotel attendant, which improves hotel service efficiency and saves labor cost.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be regarded as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an intelligent hotel room management method according to the present invention;
fig. 2 is a system schematic block diagram of the intelligent hotel room management method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the claimed invention but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Referring to fig. 1 and 2, the present invention provides an intelligent hotel room management method, comprising: when a customer enters a hotel room, the television terminal automatically senses the customer and starts up, and the television screen displays a virtual animated robot that interacts with the customer by voice; the voice interaction comprises at least the following steps:
S1, the customer inputs voice information to the voice instruction library through the microphone array of the remote controller;
S2, the voice instruction library parses the voice information into a text string through a speech parsing module;
S3, the voice instruction library judges the type of the text string;
S4, if the text string is of the question-answer type, the virtual animated robot matches a response entry from the question-answer library and feeds it back to the customer by voice; if the text string is of the instruction type, the virtual animated robot matches an instruction entry from the instruction library;
S5, if the instruction entry is an end instruction, the virtual animated robot ends the current voice interaction without entering a sub-process; if the instruction entry is a device-control sub-process calling instruction, the virtual animated robot calls the corresponding device-control sub-process and ends the current voice interaction after the corresponding device has been controlled; and if the instruction entry is an application-software sub-process calling instruction, the virtual animated robot calls the corresponding application-software sub-process and opens the corresponding application software, ending the current voice interaction after the application software's internal process has finished executing.
In the invention, the television screen serves as the display medium, and the voice library parses the user's speech and transmits it in full to the virtual character, so that the virtual butler accurately understands the user's intention and communicates with the user in real time. At the same time the user's needs are answered, the instant response speed of hotel service is improved, and the customer enjoys a one-to-one, butler-style attentive stay. The invention integrates a microphone array (far- and near-field pickup), a self-owned voice library, an intelligent-robot animation library, a cloud server, in-system applications (including licensed audio/video APKs and the hotel's own online shopping-mall application) and smart devices. The user's spoken request is collected through the microphone array, the user's intention is recognized by the voice library running on the cloud server, and it is transmitted in full to the intelligent robot. The intelligent robot has a self-learning function; in the course of the voice dialogue it responds to and guides the user's needs with voice and animation, fully understands the user's intention, passes it to the corresponding device or application, and completes the response.
In the invention, the system judges the instruction type of the speech from the received text string according to the instruction-matching rules of the self-owned voice library. For example, if the voice information is "I'm hungry" (我饿了), the full pinyin spelling "wo e le" is obtained. The text strings "我饿了" and "wo e le" are then matched against the word slots of the different parts of speech in the system knowledge base. The word "hungry" (饿) is found in the scene-mode word-slot lexicon, so the match succeeds, and according to the established rules the operation object of this utterance is the system's meal-ordering panel.
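A minimal sketch of the matching just described, assuming a tiny hard-coded pinyin table and a toy scene-mode word-slot lexicon (a real system would use a full pinyin converter and the knowledge base's own lexicons):

```python
# Hypothetical sketch: match "我饿了" ("I'm hungry") against scene-mode word slots.

PINYIN = {"我": "wo", "饿": "e", "了": "le"}           # toy pinyin table for this example only

SCENE_MODE_SLOT = {                                    # toy scene-mode word-slot lexicon
    "饿": {"pinyin": "e",  "operation_object": "meal_ordering_panel"},
    "渴": {"pinyin": "ke", "operation_object": "room_service_drinks"},
}

def full_pinyin(text):
    """Convert a Chinese string into its space-separated full pinyin spelling."""
    return " ".join(PINYIN.get(ch, ch) for ch in text)

def match_scene_mode(text):
    """Try the Chinese characters first, then fall back to the full pinyin spelling."""
    for word, entry in SCENE_MODE_SLOT.items():
        if word in text or entry["pinyin"] in full_pinyin(text).split():
            return entry["operation_object"]
    return None

print(full_pinyin("我饿了"))       # -> "wo e le"
print(match_scene_mode("我饿了"))  # -> "meal_ordering_panel"
```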
In the present invention, the self-owned voice library comprises a word-slot management platform, an audio approximate-sound management API, an instruction-composition analysis API, a question-answer knowledge-base management platform, and the like. The word-slot management platform stores and manages words of different types and the intention each word represents; the audio approximate-sound management processes and analyzes the different pronunciation habits of different groups of people to obtain the audio information of the real intention; the instruction-composition analysis program arranges and combines the user's audio character strings according to rules to obtain the intended instruction; and the question-answer knowledge-base management platform maintains question-answer pairs for common scenarios as well as voice information learned from users.
The intelligent animation library comprises a hotel guest-room scene library, an intelligent-robot animation library and an intelligent-robot associated-instruction library. The intelligent robot compares the instruction transmitted by the voice library with the associated-instruction library to determine the relevant scene, and at the same time analyzes, according to the specific instruction and scene, whether it needs to communicate further with the user, so as to call the animation library and control the system applications and smart devices.
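As a rough illustration of how such an associated-instruction library could tie an incoming instruction to a scene, an animation and an optional follow-up question, under assumed toy data (none of these table entries come from the patent):

```python
# Hypothetical association lookup: instruction -> scene, animation, optional follow-up dialogue.

ASSOCIATED_INSTRUCTIONS = {
    "meal_ordering_panel": {"scene": "dining",   "animation": "robot_presents_menu",
                            "needs_dialogue": True,
                            "prompt": "Which dish would you like to order?"},
    "light_on":            {"scene": "lighting", "animation": "robot_nods",
                            "needs_dialogue": False, "prompt": None},
}

def react(instruction):
    """Pick the animation to play and the next utterance (or confirmation) for an instruction."""
    entry = ASSOCIATED_INSTRUCTIONS.get(instruction)
    if entry is None:
        return "robot_idle", "Sorry, I cannot do that yet."
    if entry["needs_dialogue"]:                  # the robot keeps talking before acting
        return entry["animation"], entry["prompt"]
    return entry["animation"], f"Executing in scene '{entry['scene']}'."

print(react("meal_ordering_panel"))
```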
In the invention, the audio approximate-sound recognition process for the voice information is as follows (a toy sketch of the dialect expansion in step 3 follows the list):
1) the user's voice information is converted into a text string by a third-party tool;
2) the text string of the voice information is converted into a full pinyin spelling;
3) the full pinyin spelling of the text is expanded according to the pronunciation rules of the various dialects to obtain the full pinyin spellings under different dialect pronunciation habits, for example regional accents that do not distinguish the initials h and f;
4) the full pinyin spellings of the dialect pronunciation habits are compared with the pinyin word-slot lexicon to obtain the user's possible pronunciation combinations.
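Below is a toy sketch of step 3), expanding a full pinyin spelling under one assumed dialect rule (the initials h and f are not distinguished). The rule table and the example utterance are illustrative only; a real system would hold many such rules and immediately filter the candidates against the pinyin word-slot lexicon as in step 4).

```python
# Hypothetical dialect expansion: generate pinyin variants for accents that merge "h" and "f".

from itertools import product

def syllable_variants(syllable):
    """Return the possible standard readings of one dialect-pronounced syllable."""
    variants = {syllable}
    if syllable.startswith("h"):
        variants.add("f" + syllable[1:])
    if syllable.startswith("f"):
        variants.add("h" + syllable[1:])
    return sorted(variants)

def expand_pronunciations(full_pinyin):
    """Expand a space-separated full pinyin string into all candidate combinations."""
    per_syllable = [syllable_variants(s) for s in full_pinyin.split()]
    return [" ".join(combo) for combo in product(*per_syllable)]

print(expand_pronunciations("hui fang"))
# -> ['fui fang', 'fui hang', 'hui fang', 'hui hang']
# Nonsense syllables such as "fui" are discarded when the candidates are compared
# with the pinyin word-slot lexicon in step 4).
```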
In the present invention, the voice-instruction composition analysis comprises the following steps (an illustrative sketch follows the list):
1) grouping the word slots of the full scene mode according to part of speech;
2) storing, in the word-slot lexicon under each word-slot group, both the Chinese text string and the full pinyin spelling;
3) comparing the user pronunciation combinations obtained by approximate-sound recognition against the word-slot lexicons of the different part-of-speech groups in the order verb -> scene mode -> noun, performing each comparison twice, first against the Chinese text string and then against the full pinyin spelling;
4) if a comparison result is obtained in step 3), determining the instruction nature of the user's speech according to the existing rules; if no match is obtained, the acquired voice information does not belong to a voice instruction.
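A minimal sketch of this ordered comparison, under assumed toy lexicons. The verb -> scene mode -> noun order and the two-pass comparison (Chinese text first, then full pinyin) follow the description above, but all lexicon entries and names are illustrative.

```python
# Hypothetical ordered word-slot comparison: verb -> scene mode -> noun,
# each slot checked first by Chinese text, then by full pinyin spelling.

WORD_SLOTS = {
    "verb":       {"打开": "da kai",    "关闭": "guan bi"},
    "scene_mode": {"睡眠": "shui mian", "观影": "guan ying"},
    "noun":       {"空调": "kong tiao", "窗帘": "chuang lian"},
}

def match_slot(slot, text, pinyin):
    for word, word_pinyin in WORD_SLOTS[slot].items():
        if word in text:              # first pass: Chinese text string
            return word
        if word_pinyin in pinyin:     # second pass: full pinyin spelling
            return word
    return None

def analyse(text, pinyin):
    """Return the matched (verb, scene mode, noun) slots, or None if this is not an instruction."""
    result = {slot: match_slot(slot, text, pinyin) for slot in ("verb", "scene_mode", "noun")}
    return result if any(result.values()) else None

print(analyse("打开空调", "da kai kong tiao"))
# -> {'verb': '打开', 'scene_mode': None, 'noun': '空调'}
```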
In the invention, the user intention recognition function and the robot dialogue self-learning process are specifically as follows (a rough context-merging sketch follows the list):
1) the acquired voice information of an instruction nature is analyzed against the operating device and operating parameters corresponding to the word slots in the system knowledge base to obtain the user's actual intention;
2) when the acquired voice information requires the user to specify multiple parameters, the robot pushes the devices and parameter information supported by the system to the user as dialogue information;
3) the user's context within the same conversation session with the robot is automatically remembered, and the device and the specific parameter information are extracted from the context to obtain the final execution instruction;
4) the execution state of every instruction extracted from the user's speech is recorded, and supplementary learning of additional basic word slots is performed through maintenance by the system's user feedback module and knowledge-base management module.
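A rough sketch of points 2) and 3): merging the conversation context into a complete execution instruction and asking for missing parameters along the way. The session structure, device names and parameter names are assumptions made for the example.

```python
# Hypothetical context merge: remember earlier turns of the session and fill in the
# operation object and parameters before producing the final execution instruction.

SUPPORTED = {"air_conditioner": ["temperature", "mode"], "curtain": ["position"]}

class Session:
    def __init__(self):
        self.context = {}                  # remembered across turns of one session

    def handle(self, parsed):
        """parsed: dict with any of 'device', 'action', 'params' extracted from one utterance."""
        self.context.update({k: v for k, v in parsed.items() if v})
        device = self.context.get("device")
        if device is None:                 # push supported devices back as dialogue
            return "Which device would you like to control? " + ", ".join(SUPPORTED)
        missing = [p for p in SUPPORTED[device] if p not in self.context.get("params", {})]
        if missing:                        # push missing parameters back as dialogue
            return f"Please specify {', '.join(missing)} for the {device}."
        return {"device": device,          # final complete execution instruction
                "action": self.context.get("action", "set"),
                "params": self.context["params"]}

s = Session()
print(s.handle({"device": "air_conditioner", "action": "set", "params": {}}))
print(s.handle({"params": {"temperature": 24, "mode": "cool"}}))
```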
In an optional embodiment of the invention, the television terminal senses and identifies the customer through a thermal infrared human-body sensor.
In an optional embodiment of the present invention, while the corresponding device executes an instruction or the corresponding application software executes its internal process, the virtual animated robot feeds back every step of the execution process to the customer by voice.
The above description covers only preferred embodiments of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. An intelligent hotel room management method, characterized by comprising: when a customer enters a hotel room, the television terminal automatically senses the customer and starts up, and the television screen displays a virtual animated robot that interacts with the customer by voice; the voice interaction comprises at least the following steps:
S1, the customer inputs voice information to the voice instruction library through the microphone array of the remote controller;
S2, the voice instruction library parses the voice information into a text string through a speech parsing module;
S3, the voice instruction library judges the type of the text string;
S4, if the text string is of the question-answer type, the virtual animated robot matches a response entry from the question-answer library and feeds it back to the customer by voice; if the text string is of the instruction type, the virtual animated robot matches an instruction entry from the instruction library;
S5, if the instruction entry is an end instruction, the virtual animated robot ends the current voice interaction without entering a sub-process; if the instruction entry is a device-control sub-process calling instruction, the virtual animated robot calls the corresponding device-control sub-process and ends the current voice interaction after the corresponding device has been controlled; and if the instruction entry is an application-software sub-process calling instruction, the virtual animated robot calls the corresponding application-software sub-process and opens the corresponding application software, ending the current voice interaction after the application software's internal process has finished executing.
2. The intelligent hotel room management method according to claim 1, wherein when the customer enters the hotel room and inserts the room card into the room-card slot, the television terminal automatically identifies the customer's check-in information.
3. The intelligent hotel room management method according to claim 1, wherein the voice instruction library has a Mandarin accent-recognition function to ensure that the pinyin of the text string is correct.
4. The intelligent hotel room management method according to claim 3, wherein the speech parsing module has word-slot groups obtained by grouping according to part of speech, and the word-slot lexicon under each word-slot group stores both Chinese text strings and their full pinyin spellings.
5. The intelligent hotel room management method according to claim 4, wherein, during speech parsing, the voice information obtained by approximate-sound recognition is compared against the word-slot lexicons of the different part-of-speech groups in the order verb -> scene mode -> noun, and each comparison is performed twice, first against the Chinese text string and then against the full pinyin spelling.
6. The intelligent hotel room management method according to claim 5, wherein, when the acquired voice information requires the customer to specify multiple parameters, the virtual animated robot pushes the devices and parameter information supported by the system to the customer as dialogue information.
7. The intelligent hotel room management method according to claim 6, wherein the television terminal automatically remembers the customer's context within the same conversation session with the virtual animated robot, extracts the instruction's operation object, operation parameters and operation behavior from the context, and combines them to obtain the final complete execution instruction.
8. The intelligent hotel room management method according to claim 7, wherein the execution state of every instruction extracted from the users' voice information is recorded, and supplementary learning of additional basic word slots is performed through maintenance by the system's user feedback module and knowledge-base management module.
9. The intelligent hotel room management method according to claim 1, wherein, while the corresponding device executes an instruction or the corresponding application software executes its internal process, the virtual animated robot feeds back every step of the execution process to the customer by voice.
CN202111127248.0A 2021-09-26 2021-09-26 Intelligent hotel room management method Pending CN113852849A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111127248.0A CN113852849A (en) 2021-09-26 2021-09-26 Intelligent hotel room management method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111127248.0A CN113852849A (en) 2021-09-26 2021-09-26 Intelligent hotel room management method

Publications (1)

Publication Number Publication Date
CN113852849A true CN113852849A (en) 2021-12-28

Family

ID=78980078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111127248.0A Pending CN113852849A (en) 2021-09-26 2021-09-26 Intelligent hotel room management method

Country Status (1)

Country Link
CN (1) CN113852849A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117219071A (en) * 2023-09-20 2023-12-12 北京惠朗时代科技有限公司 Voice interaction service system based on artificial intelligence
CN117219071B (en) * 2023-09-20 2024-03-15 北京惠朗时代科技有限公司 Voice interaction service system based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination