CN113573029A - Multi-party audio and video interaction method and system based on IOT - Google Patents

Multi-party audio and video interaction method and system based on IOT

Info

Publication number
CN113573029A
Authority
CN
China
Prior art keywords
interaction
information
service
audio
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111127329.0A
Other languages
Chinese (zh)
Other versions
CN113573029B (en)
Inventor
宋旭
时磊
朱庆祥
丁旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Ketianshichang Information Technology Co ltd
Original Assignee
Guangzhou Ketianshichang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Ketianshichang Information Technology Co., Ltd.
Priority to CN202111127329.0A
Publication of CN113573029A
Application granted
Publication of CN113573029B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/02 - Marketing; Price estimation or determination; Fundraising
    • G06Q30/0282 - Rating or review of business operators or products

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Transfer Between Computers (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an IOT-based multi-party audio and video interaction method and system. The method comprises: acquiring the identity information and service request information of an accessing user, identifying the identity type and service type, and executing the corresponding service mode. Executing the corresponding service mode comprises: when the accessing user is a merchant, acquiring and managing service information through the merchant-side IOT facilities deployed at the merchant; when the accessing user is a customer and the service type is online interaction, sending an interaction request to the client-side IOT facility pre-assigned to the interacting party, acquiring the request feedback and making an interaction response. The interaction response comprises: after the interaction request is accepted, connecting the interacting parties through an audio/video interaction channel, identifying the interaction information, and executing interaction post-processing; the interaction post-processing comprises updating the service information according to the interaction information. The method and system assist merchants in managing their services and give customers a better travel service experience.

Description

Multi-party audio and video interaction method and system based on IOT
Technical Field
The application relates to the technical field of travel, in particular to a multi-party interaction method and system applied to travel.
Background
When people travel, whether for transportation or accommodation, they are constrained by their current geographic location and usually cannot easily learn the actual situation of the other party, which leads to common ride disputes, hotel accommodation disputes and the like.
To address these problems, online business platforms on the market have gradually evolved to display service pictures and videos provided by merchants, so that customers can refer to them and learn about a merchant's conditions. However, the inventors believe that the information on such platforms has poor timeliness and a low utilization rate, and offers relatively little help to travelers; the present application therefore proposes a new technical scheme.
Disclosure of Invention
To better help travelers complete their trips, the present application provides an IOT-based multi-party audio and video interaction method.
In a first aspect, the present application provides a multi-party audio/video interaction method based on IOT, which adopts the following technical scheme:
a multi-party interaction method applied to travel comprises the steps of obtaining identity information and service request information of an access user, identifying identity types and service types and executing corresponding service modes, wherein the execution of the corresponding service modes comprises the following steps:
when the access user is a merchant, acquiring and managing service information based on an IOT facility at a merchant end arranged in the merchant; the service information comprises service associated characters, images and videos;
when the access user is a client and the service type is online interaction, sending an interaction request to a client IOT facility pre-distributed by an interaction party, acquiring request feedback and making an interaction response;
the interactive response includes: after receiving the interaction request, connecting an interaction party by an audio/video interaction channel, identifying interaction information, and executing interaction post-processing; the interactive post-processing includes updating the service information according to the interactive information.
Optionally, the speech information in the interaction information is transcribed into text, and keywords or sentences are extracted.
Optionally, the keywords or sentences are extracted based on an LDA algorithm.
Optionally, updating the service information according to the interaction information comprises: performing industry matching analysis on the obtained keywords; if words or sentences that match a service-hardware introduction exist in the interaction information, locating the time nodes at which the keywords or sentences appear, extracting frames from the audio/video of the interaction information at those time nodes to obtain candidate images, clipping the audio/video to form candidate audio/video segments, and updating the service information with the candidate images and candidate audio/video.
Optionally, identifying the interaction information comprises: identifying the identity and behavior of the merchant personnel in the interaction information, wherein the behavior comprises speaking behavior and physical actions;
the interaction post-processing comprises: rating the service of the merchant personnel according to a preset rating standard, and storing the rating result in the service information.
Optionally, the interaction response further comprises in-interaction processing; the in-interaction processing comprises: judging, based on the behavior of the merchant personnel, whether a violation exists, and if so, sending prompt information to a preset merchant administrator.
Optionally, executing the corresponding service mode comprises: when the accessing user is a customer and the service type is trip recording, acquiring the time nodes of the start and end of the trip, the interaction information, the associated information of the merchants selected during the trip and the activity audio/video recorded during the trip, and generating a trip highlight collection, a trip bill package and a trip archive at the trip-end time node.
In a second aspect, the present application provides a multi-party audio/video interaction system based on IOT, which adopts the following technical solution:
a multi-party audio and video interaction system based on IOT comprises a memory and a processor, wherein the memory is stored with a computer program which can be loaded by the processor and can execute any one of the multi-party audio and video interaction methods.
In summary, the present application includes at least one of the following beneficial technical effects:
1. real-time audio and video are brought to the travel platform, providing technical service support for hotels, ride-hailing vehicles, airline service stations and other travel scenarios, and giving customers a better travel service experience;
2. the multi-party audio and video communications are reused: the service information provided by merchants is actively updated based on intelligent recognition and analysis of the audio/video, and service personnel are rated after each service in a fair and impartial way, which is convenient for merchant management and serves as a reference for customers; meanwhile, the communication process is supervised, and when merchant personnel behave improperly during communication, the relevant people are promptly discovered and notified, further improving the customer's service experience.
Drawings
FIG. 1 is a main flow diagram of the present application;
FIG. 2 is a flow chart of the interaction mechanism of the present application;
FIG. 3 is a flowchart of the processing performed by the present application based on the identified interaction information.
Detailed Description
The present application is described in further detail below with reference to figures 1-3.
The embodiment of the application discloses a multi-party audio and video interaction method based on IOT.
The IOT-based multi-party audio and video interaction method comprises: acquiring the identity information and service request information of an accessing user, identifying the identity type and service type, and executing the corresponding service mode.
Referring to FIG. 1, in this embodiment user identities are divided into two major categories. The first is the merchant, which is further subdivided into subclasses according to the merchant type and its organizational and management structure; specifically, when a merchant registers, the registrant (acting as a temporary administrator) uploads the configuration data, and a hotel, for example, can be subdivided into front-desk, lobby-manager and administrator roles. The second category is the customer, i.e., the service object of the merchant. A user's identity is recognized according to the identification code, label and the like assigned at registration.
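As an illustration of this two-level identity scheme, the sketch below models an accessing user with an identity type and an optional merchant role; all field and function names are assumptions made for the example, not terms used by the patent.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AccessUser:
    user_id: str
    identity_type: str                    # "merchant" or "customer"
    merchant_role: Optional[str] = None   # e.g. "front_desk", "lobby_manager", "administrator"
    id_code: str = ""                     # identification code / label assigned at registration


def identify(user: AccessUser) -> str:
    """Route by the registered identity type before executing the service mode."""
    if user.identity_type == "merchant":
        return f"merchant ({user.merchant_role or 'role not set'})"
    return "customer"


print(identify(AccessUser("u1", "merchant", "front_desk", "HOTEL-001")))  # merchant (front_desk)
print(identify(AccessUser("u2", "customer", id_code="CUST-778")))         # customer
```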
The service request information corresponds to the specific function items available to each accessing user; it should be understood that the examples below are only illustrative, and an actual service request may also cover other service functions already provided by existing platforms.
Executing the corresponding service mode comprises: when the accessing user is identified as a merchant, acquiring and managing service information through the merchant-side IOT facilities deployed at the merchant.
The merchant-side IOT facilities include networked terminals, computers, tablets, mobile phones and the like; the client-side IOT facilities mentioned below are similar.
The service information comprises service-related text, images and videos; for a hotel, for example, this covers the hotel profile, the price of each room type, and images and short videos of each area of the hotel. The relevant staff can add such service information to the corresponding UI for customers to view and reference.
Merchants are not limited to hotels; they can also be ride-hailing drivers, regional airline service stations and the like, so that all aspects of travel are covered and customers receive a more comprehensive service experience.
On this basis, the user browses each merchant's information as needed and sends the corresponding service request when wanting to learn more about a specific merchant.
Referring to FIG. 2, the method further includes: when the accessing user is identified as a customer and the service type is online interaction, sending an interaction request to the client-side IOT facility pre-assigned to the interacting party, acquiring the request feedback and making an interaction response.
The interaction request is sent according to the equipment identification code and network address of the merchant's access terminal and can appear as popup information on the terminal's UI. Since the same merchant may receive several requests within a short time, the merchant side is given administrator-configurable permissions: several users can be added as operators, and interaction requests are routed to them. A busy-line mechanism is also configured: when an operator is already in an online interaction and a new interaction request arrives, a busy-line prompt is returned to the customer.
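The following sketch illustrates how such operator dispatch with a busy-line fallback could be modeled; the class and method names are hypothetical and only reflect the behavior described above, not an interface defined by the patent.

```python
from dataclasses import dataclass, field


@dataclass
class Operator:
    user_id: str
    device_id: str       # terminal identification code
    network_addr: str
    busy: bool = False   # True while an audio/video session is active


@dataclass
class MerchantDispatcher:
    operators: list = field(default_factory=list)

    def add_operator(self, op: Operator):
        """Administrator-configured permission: register a user as an operator."""
        self.operators.append(op)

    def route_request(self, customer_id: str):
        """Send the interaction request to the first idle operator,
        or return a busy-line prompt if every operator is in a call."""
        for op in self.operators:
            if not op.busy:
                op.busy = True
                return {"status": "ringing", "operator": op.user_id,
                        "popup": f"Incoming request from customer {customer_id}"}
        return {"status": "busy", "message": "All operators are busy, please try again later."}


# Example: two operators registered by the merchant administrator.
dispatcher = MerchantDispatcher()
dispatcher.add_operator(Operator("op-1", "dev-001", "10.0.0.11"))
dispatcher.add_operator(Operator("op-2", "dev-002", "10.0.0.12"))
print(dispatcher.route_request("customer-42"))  # routed to op-1
```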
The interaction response specifically includes: after the interaction request is accepted, connecting the interacting parties through an audio/video interaction channel, identifying the interaction information and executing interaction post-processing.
Connecting the interacting parties through an audio/video interaction channel means that the customer and the merchant's operator communicate online by audio and video. Audio/video communication makes the exchange more effective; taking a hotel as an example, besides answering routine customer questions, it supports online audio/video room viewing, room inspection and room booking, which improves the authenticity of the service, earns customer acceptance and raises the booking rate.
Referring to FIG. 3, identifying the interaction information includes: transcribing the speech in the interaction information into text, and extracting keywords or sentences.
For transcription, the audio track is first separated from the audio/video, and a speech-to-text program or platform is then applied to the audio to obtain the text. A configurable refinement is to determine, from the merchant information, the industry and region the business belongs to, and to take the speech characteristics of that industry and region into account when selecting the speech-to-text model: a corresponding database is prepared and the relevant algorithm parameters are tuned to improve transcription accuracy. Preferably, dedicated industry-specific and region-specific sub-models are established and trained, using a convolutional neural network algorithm, on data collected during operation.
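As a rough illustration of this industry/region-aware selection, the snippet below picks a sub-model identifier from a registry before transcription; the registry contents and model names are placeholders, since the patent does not name a specific speech-recognition toolkit.

```python
# Hypothetical registry of industry/region-specific speech-to-text sub-models.
ASR_MODELS = {
    ("hotel", "guangdong"): "asr-hotel-cantonese",
    ("ride_hailing", "beijing"): "asr-ride-mandarin",
}
DEFAULT_MODEL = "asr-general"


def pick_asr_model(industry: str, region: str) -> str:
    """Choose the sub-model trained on speech data from the merchant's
    industry and region, falling back to a general model."""
    return ASR_MODELS.get((industry, region), DEFAULT_MODEL)


# Example: a hotel in Guangdong gets the dedicated sub-model.
print(pick_asr_model("hotel", "guangdong"))   # asr-hotel-cantonese
print(pick_asr_model("hotel", "hainan"))      # asr-general
```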
Audio/video communication between merchants and customers differs from keyword extraction on written documents: in some scenarios a topic is mentioned only once, or words reflecting the main subject appear rarely or not at all, so general statistical and frequency-based methods extract such keywords relatively poorly. The method therefore extracts keywords or sentences based on an LDA algorithm. LDA (Latent Dirichlet Allocation) fits the distribution of words, documents and topics from the co-occurrence information of words and maps words and texts into a semantic space, from which keywords can be extracted; LDA-based keyword extraction can be completed, for example, with the Gensim library.
This algorithm better extracts keywords that reflect the activities of the answering operator; for a hotel, for example, these cover introductions to the room environment, service content, catering and so on.
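A minimal Gensim-based LDA sketch in the spirit of this passage is shown below; the toy corpus, the tokenization and the parameter choices are assumptions, and a real deployment would segment the Chinese call transcripts with a proper tokenizer.

```python
from gensim import corpora
from gensim.models import LdaModel

# Each element is one transcribed utterance, already segmented into tokens.
documents = [
    ["room", "environment", "window", "view"],
    ["breakfast", "restaurant", "buffet", "hours"],
    ["room", "booking", "price", "discount"],
]

dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(doc) for doc in documents]

# Fit a small topic model; num_topics is a tunable assumption.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               random_state=42, passes=10)

# For a new utterance, take the dominant topic and use its top terms
# as candidate keywords.
bow = dictionary.doc2bow(["room", "view", "price"])
topic_id, _ = max(lda.get_document_topics(bow), key=lambda t: t[1])
keywords = [dictionary[wid] for wid, _ in lda.get_topic_terms(topic_id, topn=3)]
print(keywords)
```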
Based on the extracted keywords, the method performs interaction post-processing, which includes performing industry matching analysis on the obtained keywords.
For the industry matching analysis, a vocabulary database is established in advance for each industry; its entries are manually verified key terms that service staff in that industry use when introducing service hardware facilities and service content. The extracted keywords are then looked up in this database to check whether they are recorded.
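The lookup itself can be as simple as the sketch below, where the vocabulary entries are invented examples standing in for the manually verified industry database.

```python
# Minimal check of extracted keywords against an industry vocabulary.
HOTEL_SERVICE_VOCAB = {"room", "suite", "breakfast", "gym", "pool", "check-in"}


def service_intro_keywords(keywords):
    """Return the extracted keywords recorded in the industry vocabulary,
    i.e. those taken to indicate a service-hardware or service-content introduction."""
    return [kw for kw in keywords if kw in HOTEL_SERVICE_VOCAB]


print(service_intro_keywords(["room", "price", "breakfast"]))  # ['room', 'breakfast']
```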
When words or sentences that match a service-hardware introduction exist in the interaction information, the time nodes at which those keywords or sentences appear are located; frames are extracted from the audio/video of the interaction at these time nodes to obtain candidate images, the audio/video is clipped to form candidate segments, and the service information is updated with the candidate images and candidate audio/video.
For example, suppose the merchant says "let me show you the room"; the keywords are "show ... the room" and "room", and the time node of "room" is used as the starting point for frame extraction and clipping. Frames may be extracted every 0.5 s, 5 to 10 times in a row, and the clipped audio/video segment is preferably 5 to 30 s long. If during this process the merchant staff also introduces specific scene information, such as a restaurant or a room number, the candidate images and candidate audio/video are labeled accordingly, which makes targeted updates of the service information easier.
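A sketch of the frame-extraction step with OpenCV, following the parameters mentioned above (one frame every 0.5 s, up to ten frames); the file names and the example time node are hypothetical.

```python
import cv2


def extract_candidate_frames(video_path: str, start_sec: float,
                             interval: float = 0.5, count: int = 10):
    """Grab `count` frames starting at `start_sec`, spaced `interval` seconds apart."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    for i in range(count):
        cap.set(cv2.CAP_PROP_POS_MSEC, (start_sec + i * interval) * 1000)
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames


# Example: the keyword "room" was located at 83.2 s into the call recording.
candidates = extract_candidate_frames("interaction.mp4", start_sec=83.2)
for i, frame in enumerate(candidates):
    cv2.imwrite(f"candidate_{i}.jpg", frame)
```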
In summary, once the method is applied, the audio/video is not only used for the interaction itself but is further reused: the merchant's service introduction information is actively updated based on intelligent analysis of the interaction audio/video, improving the timeliness of that information.
In the method, identifying the interaction information further comprises identifying the identity and behavior of the merchant personnel in the interaction information; correspondingly, the interaction post-processing further comprises rating the service of the merchant personnel according to a preset rating standard and storing the rating result in the service information.
Identity and behavior recognition comprises:
1. performing face recognition on the persons in the audio/video and retrieving the identity information that matches the face data from a pre-stored identity database;
2. recognizing the speaking behavior of the persons in the audio/video (the audio track is separated first and then recognized), i.e., transcribing the audio into text and detecting specific content;
3. recognizing the physical actions of the persons in the audio/video.
The rating standard is built on a violation vocabulary library and a violation action library and works by deduction: for example, word A in the violation vocabulary library deducts 2 points, action B in the violation action library deducts 3 points, and so on.
Service rating therefore proceeds as follows: based on the recognized identity of the merchant staff member, their record is retrieved and bound to the subsequent processing result; whether a violation exists is judged from the recognized behaviors, and if so, the deductions for the violation actions and violation vocabulary that occurred during the communication are subtracted from that staff member's current score, giving the updated service score (the rating result).
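The deduction step can be sketched as follows; the violation libraries and the base score of 100 are example values, not figures prescribed by the method.

```python
VIOLATION_WORDS = {"word_A": 2}       # violation vocabulary -> points deducted
VIOLATION_ACTIONS = {"action_B": 3}   # violation action label -> points deducted


def rate_service(current_score: int, spoken_words, detected_actions) -> int:
    """Subtract the preset deduction for every violation word spoken and every
    violation action detected, never going below zero."""
    score = current_score
    for word in spoken_words:
        score -= VIOLATION_WORDS.get(word, 0)
    for action in detected_actions:
        score -= VIOLATION_ACTIONS.get(action, 0)
    return max(score, 0)


# Example: one violation word and one violation action in a single call.
print(rate_service(100, ["hello", "word_A"], ["action_B"]))  # 95
```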
In this way the method makes further use of the audio/video from the communication process, supervising merchant staff in a relatively fair and impartial manner; the resulting service ratings serve later customers as a reference and help merchant managers manage the relevant staff.
Processing the communication only after the fact leaves various hidden risks, so the method further includes in-interaction processing, which comprises: judging, based on the behavior of the merchant personnel, whether a violation exists (i.e., whether it matches the violation database), and if so, sending prompt information to a preset merchant administrator; the prompt can be delivered by SMS, App notification and the like.
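A brief sketch of this real-time alert follows; the notifier interface is a stand-in for whatever SMS or App-push channel the platform actually uses.

```python
from typing import Callable, Set


def monitor_behavior(behavior: str, violation_actions: Set[str],
                     admin_contact: str, notify: Callable[[str, str], None]) -> None:
    """Check one recognized behavior against the violation library and alert
    the preset merchant administrator on a match."""
    if behavior in violation_actions:
        notify(admin_contact, f"Violation detected during live interaction: {behavior}")


# Example wiring with a print-based notifier standing in for SMS or App push.
monitor_behavior("action_B", {"action_B"}, "admin-001",
                 lambda to, msg: print(f"[to {to}] {msg}"))
```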
That is, with this method the multi-party interaction audio/video can be monitored in real time, and the relevant people are informed promptly when merchant staff behave improperly, which improves the merchant's service capability and the customer's service experience.
In another embodiment of the method, executing the corresponding service mode further includes: when the accessing user is a customer and the service type is trip recording, acquiring the time nodes of the start and end of the trip, the interaction information, the associated information of the merchants selected during the trip and the activity audio/video recorded during the trip, and generating a trip highlight collection, a trip bill package and a trip archive at the trip-end time node.
Specifically, the method comprises the following steps:
the time nodes of the beginning and the end of the trip are determined by the client independently, and the two time nodes can be set before the trip, respectively set before and after the trip or set after the trip.
And configuring a database for storing images, audios and videos shot in the traveling process by a user and recording the images, audios and videos in a time line.
Based on the above, after the time nodes of the start and the end of the trip are determined, the images and audios and videos of the corresponding time periods are called, and the audios and videos are synthesized in a certain mode, namely the trip collection; the method can be as follows:
1. synthesizing images, audios and videos in a time line direction;
2. synthesizing based on image recognition and audio-video recognition results; if the image is identified, the image which meets the standards of happiness is screened out, the audio and video is identified, and the audio and video with smiling face and smiling sound is screened out and synthesized.
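As referenced above, the selection of media for the trip collection can be sketched as follows; the record structure and field names are assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class MediaRecord:
    path: str
    captured_at: datetime
    kind: str  # "image" or "video"


def select_trip_media(records, trip_start: datetime, trip_end: datetime):
    """Return the records captured within the trip window, in timeline order."""
    in_trip = [r for r in records if trip_start <= r.captured_at <= trip_end]
    return sorted(in_trip, key=lambda r: r.captured_at)


# Example: keep only media captured during the trip window.
records = [
    MediaRecord("beach.jpg", datetime(2021, 9, 20, 14, 30), "image"),
    MediaRecord("hotel_tour.mp4", datetime(2021, 9, 21, 9, 0), "video"),
    MediaRecord("old_photo.jpg", datetime(2021, 8, 1, 12, 0), "image"),
]
trip_media = select_trip_media(records, datetime(2021, 9, 19), datetime(2021, 9, 26))
print([r.path for r in trip_media])  # ['beach.jpg', 'hotel_tour.mp4']
```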
The trip bill package is generated from the associated information of the merchants selected during the trip: each time a merchant provides a service, the corresponding service bill is obtained, and after the trip's start and end time nodes are set, all bills within that period are automatically collected into a data package.
The trip archive records all of the above data within the corresponding period, organized by time node, to form an archive.
The embodiment of the application also discloses a multi-party audio and video interaction system based on the IOT.
An IOT-based multi-party audio and video interaction system comprises a memory and a processor, wherein the memory stores a computer program that can be loaded by the processor to execute the above multi-party audio and video interaction method.
In this application, once the platform is built around a core server, the idea of edge computing can be borrowed: the platform server is configured with the above content, and the merchant terminals connected to the platform are put to use. For example, when a customer interacts with the staff of a merchant, the step of identifying the interaction information is handed over to the terminal of that merchant, which feeds the processing result back for the server to use, so that the above functions are still realized.
This arrangement, on the one hand, makes full use of the spare computing power of the merchant terminals and, on the other hand, reduces the load on the platform and lowers operating costs.
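A simplified sketch of this offloading decision is given below; the terminal interface and the recognition placeholder are hypothetical, since the patent leaves the concrete implementation open.

```python
from typing import Optional


def identify_interaction_info(audio_video: bytes) -> dict:
    """Placeholder for the recognition pipeline (speech-to-text, keyword
    extraction, identity and behavior recognition) run on the server."""
    return {"keywords": [], "behaviors": []}


class MerchantTerminal:
    """Stand-in for a networked merchant terminal with spare computing power."""
    def run(self, task, payload):
        # In practice this would be a remote call to the terminal; here it runs locally.
        return task(payload)


def process_interaction(audio_video: bytes,
                        terminal: Optional[MerchantTerminal] = None) -> dict:
    """Prefer the merchant terminal; fall back to the platform server."""
    if terminal is not None:
        return terminal.run(identify_interaction_info, audio_video)
    return identify_interaction_info(audio_video)


print(process_interaction(b"", MerchantTerminal()))  # {'keywords': [], 'behaviors': []}
```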
The above embodiments are preferred embodiments of the present application, and the protection scope of the present application is not limited by the above embodiments, so: all equivalent changes made according to the structure, shape and principle of the present application shall be covered by the protection scope of the present application.

Claims (8)

1. An IOT-based multi-party audio and video interaction method, comprising obtaining the identity information and service request information of an accessing user, identifying the identity type and service type, and executing the corresponding service mode, characterized in that executing the corresponding service mode comprises:
when the accessing user is a merchant, acquiring and managing service information through the merchant-side IOT facilities deployed at the merchant, the service information comprising service-related text, images and videos;
when the accessing user is a customer and the service type is online interaction, sending an interaction request to the client-side IOT facility pre-assigned to the interacting party, acquiring the request feedback and making an interaction response;
the interaction response comprising: after the interaction request is accepted, connecting the interacting parties through an audio/video interaction channel, identifying the interaction information, and executing interaction post-processing, the interaction post-processing comprising updating the service information according to the interaction information.
2. The IOT-based multi-party audio and video interaction method of claim 1, wherein identifying the interaction information comprises: transcribing the speech in the interaction information into text, and extracting keywords or sentences.
3. The IOT-based multi-party audio and video interaction method of claim 2, wherein the keywords or sentences are extracted based on an LDA algorithm.
4. The IOT-based multi-party audio and video interaction method of claim 3, wherein updating the service information according to the interaction information comprises: performing industry matching analysis on the obtained keywords; if words or sentences that match a service-hardware introduction exist in the interaction information, locating the time nodes at which the keywords or sentences appear, extracting frames from the audio/video of the interaction information at those time nodes to obtain candidate images, clipping the audio/video to form candidate audio/video segments, and updating the service information with the candidate images and candidate audio/video.
5. The IOT-based multi-party audio and video interaction method of claim 2, wherein identifying the interaction information comprises: identifying the identity and behavior of the merchant personnel in the interaction information, the behavior comprising speaking behavior and physical actions;
the interaction post-processing comprising: rating the service of the merchant personnel according to a preset rating standard, and storing the rating result in the service information.
6. The IOT-based multi-party audio and video interaction method of claim 2, wherein the interaction response further comprises in-interaction processing; the in-interaction processing comprises: judging, based on the behavior of the merchant personnel, whether a violation exists, and if so, sending prompt information to a preset merchant administrator.
7. The IOT-based multi-party audio and video interaction method of claim 1, wherein executing the corresponding service mode comprises: when the accessing user is a customer and the service type is trip recording, acquiring the time nodes of the start and end of the trip, the interaction information, the associated information of the merchants selected during the trip and the activity audio/video recorded during the trip, and generating a trip highlight collection, a trip bill package and a trip archive at the trip-end time node.
8. An IOT-based multi-party audio and video interaction system, characterized by comprising a memory and a processor, wherein the memory stores a computer program that can be loaded by the processor to execute the multi-party audio and video interaction method of any one of claims 1 to 7.
CN202111127329.0A 2021-09-26 2021-09-26 Multi-party audio and video interaction method and system based on IOT Active CN113573029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111127329.0A CN113573029B (en) 2021-09-26 2021-09-26 Multi-party audio and video interaction method and system based on IOT

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111127329.0A CN113573029B (en) 2021-09-26 2021-09-26 Multi-party audio and video interaction method and system based on IOT

Publications (2)

Publication Number Publication Date
CN113573029A (en) 2021-10-29
CN113573029B CN113573029B (en) 2022-01-04

Family

ID=78174585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111127329.0A Active CN113573029B (en) 2021-09-26 2021-09-26 Multi-party audio and video interaction method and system based on IOT

Country Status (1)

Country Link
CN (1) CN113573029B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115422514A (en) * 2022-09-22 2022-12-02 北京广知大为科技有限公司 Information interaction method, system, equipment and program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103260082A (en) * 2013-05-21 2013-08-21 王强 Video processing method and device
CN107833096A (en) * 2014-08-25 2018-03-23 张琴 Wisdom life range e-commerce system and its method of work based on cloud service
CN108305632A (en) * 2018-02-02 2018-07-20 深圳市鹰硕技术有限公司 A kind of the voice abstract forming method and system of meeting
CN111340596A (en) * 2020-03-05 2020-06-26 山西集目看看信息技术有限公司 Visual shopping method and system based on Internet of things
WO2020184750A1 (en) * 2019-03-12 2020-09-17 신새봄 Trip record generating server and method
CN112200697A (en) * 2020-12-04 2021-01-08 深圳市房多多网络科技有限公司 Remote video room watching method, device, equipment and computer storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115422514A (en) * 2022-09-22 2022-12-02 北京广知大为科技有限公司 Information interaction method, system, equipment and program product
CN115422514B (en) * 2022-09-22 2023-07-18 北京广知大为科技有限公司 Information interaction method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN113573029B (en) 2022-01-04

Similar Documents

Publication Publication Date Title
CN103346957B (en) A kind of system and method according to contact person's message alteration contact head image expression
CN102802114B (en) Method and system for screening seat by using voices
CN110072075A (en) Conference management method, system and readable storage medium based on face recognition
CN110610705B (en) Voice interaction prompter based on artificial intelligence
CN109145204A (en) The generation of portrait label and application method and system
JP5599409B2 (en) Automatic intention collection system and method
CN107784051A (en) Online customer service answering system and method
CN109684455A (en) The implementation method and device of intelligent customer service system, equipment, readable storage medium storing program for executing
CN112434501B (en) Method, device, electronic equipment and medium for intelligent generation of worksheet
CN109643314A (en) Information processing unit, information processing method and program
CN109671438A (en) It is a kind of to provide the device and method of ancillary service using voice
CN109493866A (en) Intelligent sound box and its operating method
CN107784033A (en) A kind of dialogue-based method and apparatus recommended
CN108933730A (en) Information-pushing method and device
CN111583023A (en) Service processing method, device and computer system
CN111599359A (en) Man-machine interaction method, server, client and storage medium
CN111027838A (en) Crowdsourcing task pushing method, device, equipment and storage medium thereof
CN112163155B (en) Information processing method, device, equipment and storage medium
CN111626061A (en) Conference record generation method, device, equipment and readable storage medium
CN110782341A (en) Business collection method, device, equipment and medium
CN113573029B (en) Multi-party audio and video interaction method and system based on IOT
US10701006B2 (en) Method and system for facilitating computer-generated communication with user
CN107948312B (en) Information classification and release method and system with position points as information access ports
CN112287092A (en) Intelligent elevator service query method and related products
CN117172795A (en) Intelligent technical service fee online consultation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant