CN110928521A - Intelligent voice communication method and intelligent voice communication system - Google Patents
Intelligent voice communication method and intelligent voice communication system
- Publication number
- CN110928521A (application CN202010095301.2A)
- Authority
- CN
- China
- Prior art keywords
- user
- virtual partner
- data
- virtual
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Geometry (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- General Engineering & Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses an intelligent voice communication method and a corresponding communication system. The method comprises the following steps: obtaining user identification information in response to a user start command; obtaining the virtual partner corresponding to the user according to the user identification information; collecting the user's voice and image information; and having the virtual partner communicate with the user according to the collected voice and image information. With the method and system, intelligent voice communication matched to the user's personalized characteristics can be provided.
Description
Technical Field
The present application relates to the field of computers, and in particular, to an intelligent voice communication method and system.
Background
With the development of computer technology, intelligent speech recognition has become a research hotspot. However, existing intelligent speech recognition merely recognizes the speech of the speaker; beyond configuring preset functions, it usually cannot interact with the user, let alone keep the user company.
Furthermore, because existing speech recognition technology demands high real-time performance in data capture, it requires a high-performance processing chip, whereas a simple set-top box suffers from low processing speed and poor real-time capability. Moreover, when existing speech processing technology is presented together with a virtual partner, the tone and expression are fixed, which makes the result stiff and monotonous; the user quickly loses any sense of freshness, and the technology cannot be properly integrated into the application scenario.
Disclosure of Invention
The method and system provided by the application achieve the technical effect of bidirectional interaction with a user by effectively recognizing the user's voice.
The application claims an intelligent voice communication method, comprising the following steps: obtaining user identification information in response to a user start command; obtaining the virtual partner corresponding to the user according to the user identification information; collecting the user's voice and image information; and having the virtual partner communicate with the user according to the collected voice and image information.
Preferably, the virtual partner is established in advance through the following sub-steps: collecting the user's initial data in advance; and analyzing the user's initial data and selecting a virtual partner for the user according to the analysis result.
Preferably, the virtual partner is selected for the user based on the user's initial data; further, the virtual partner is divided into a plurality of component parts.
Preferably, collecting the user's voice and image information comprises the following steps: collecting voice using near-field and far-field collection technologies; collecting the user's image information with a camera; performing noise reduction on the collected voice data; and denoising the acquired images.
Preferably, the virtual partner communicating with the user according to the collected user voice and image information comprises the following sub-steps: recognizing the collected voice information and converting it into recognizable command information one; recognizing the collected image information and converting it into recognizable command information two; and driving the component parts of the virtual partner to respond according to command information one and command information two.
The application also claims an intelligent voice communication system, comprising: a mobile device that receives a user start command, obtains user identification information, obtains the virtual partner corresponding to the user according to the user identification information, collects the user's voice and image information, and has the virtual partner communicate with the user according to the collected voice and image information; a client for receiving instructions sent by the mobile device and displaying the virtual partner accordingly; and a cloud server for storing the data uploaded by the mobile device and feeding the data back to the mobile device or the client.
Preferably, the cloud server establishes the virtual partner in advance through the following sub-steps: collecting the user's initial data in advance; and analyzing the user's initial data and selecting a virtual partner for the user according to the analysis result.
Preferably, the cloud server establishes a model for the user according to the initial data of the user, and selects a virtual partner for the user according to the established model; the virtual partner is divided into a plurality of component parts.
Preferably, the mobile device collecting the user's voice and image information comprises the following steps: collecting voice using near-field and far-field collection technologies; collecting the user's image information with a camera; performing noise reduction on the collected voice data; and denoising the acquired images.
Preferably, the virtual partner communicating with the user according to the collected user voice and image information comprises the following sub-steps: recognizing the collected voice information and converting it into recognizable command information one; recognizing the collected image information and converting it into recognizable command information two; and driving the component parts of the virtual partner to respond according to command information one and command information two.
With the method and system of the application, intelligent voice communication matched to the user's personalized characteristics can be provided.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a system block diagram of an intelligent voice communication system of the present application;
FIG. 2 is a flowchart of the intelligent voice communication method of the present application;
FIG. 3 is a flowchart of the virtual partner establishment method of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
The application claims an intelligent voice communication method and an intelligent voice communication system, through which barrier-free communication between a user and a virtual partner can be achieved.
Example 1
The present application provides an intelligent voice communication system 100, as shown in fig. 1, including a mobile device 110, a client 120 and a cloud server 130, wherein:
the mobile device 110 receives a user start command, obtains user identification information, and obtains a virtual partner corresponding to the user according to the user identification information; collecting user voice and image information; and according to the collected voice and image information of the user, the virtual partner communicates with the user.
The client 120 is configured to receive instructions sent by the mobile device and to display the virtual partner accordingly.
The cloud server 130 is configured to store the data uploaded by the mobile device and to feed data back to the mobile device or the client.
The intelligent voice communication method executed by the system is shown in fig. 2, and comprises the following steps:
step S210, responding to a user starting command, and acquiring user identification information;
the mobile device receives a user start command, and the start command may be a conductive signal or a start request sent by a client.
As an embodiment the mobile device has an activation key, by pressing which the mobile device is activated to enter the operational mode.
As another embodiment, the mobile device enters the operating mode by receiving a start request sent by the client.
After entering the operating mode, the mobile device obtains identification information of the user, which may be a user name entered by the user or identification information of the mobile device.
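As a minimal sketch of step S210 (not part of the patent text), the start-command handling can be modeled as follows; the StartCommand type and helper name are hypothetical, and the fallback to the device identifier mirrors the two identification options described above.

```python
# Minimal sketch of step S210. The StartCommand type and helper name are
# hypothetical; the two identification options (entered user name vs. device
# identification) follow the description above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StartCommand:
    source: str                      # "key_press" or "client_request"
    user_name: Optional[str] = None  # user name entered by the user, if any

def obtain_user_identification(cmd: StartCommand, device_id: str) -> str:
    """Return the user identifier for this session (step S210)."""
    # Prefer an explicitly entered user name; fall back to the device's ID.
    return cmd.user_name if cmd.user_name else device_id

print(obtain_user_identification(StartCommand("key_press", "alice"), "DEV-001"))
print(obtain_user_identification(StartCommand("client_request"), "DEV-001"))
```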
Step S220: obtaining the virtual partner corresponding to the user according to the user identification information.
The mobile device or the cloud server obtains the virtual partner corresponding to the user according to the user identification information. The virtual partner is a cartoon figure pre-established by the cloud server. When the mobile device connects to the cloud server, the cloud server may push the virtual partner data down to the mobile device; alternatively, the cloud server does not push the data on connection, and instead the mobile device, after obtaining the user identification information, sends it to the cloud server, which then retrieves the virtual partner corresponding to the user according to that information.
A fixed virtual partner may be assigned to the user, in which case the user obtains the fixed virtual partner directly from the user identification. The virtual partner may also be specified directly by the user, or by the system; the process by which the system specifies the virtual partner is as follows:
step S2201, according to the user identification of a user (marked as being used for one), obtaining historical use data of a user I, and according to the historical use data of the user I, calculating a similar user subset I which has similar historical use data with the user I in a user set;
step S2202, selecting a virtual buddy for the first user according to the virtual buddies of the users in the first subset of similar users.
For example, the most virtual buddies may be selected as their virtual buddies for a user one of the subset one of the selected similar users.
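A hedged sketch of steps S2201–S2202 follows. The Jaccard similarity measure, the 0.3 threshold, and the data shapes are assumptions made for illustration; the application does not fix a particular similarity metric.

```python
# Illustrative sketch of steps S2201-S2202: compute similar-user subset one,
# then pick the most common virtual partner within it. The Jaccard measure
# and threshold are assumptions; the application names no specific metric.
from collections import Counter

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def select_virtual_partner(user_id, history, partners, threshold=0.3):
    """history: user_id -> set of consumed items; partners: user_id -> partner."""
    target = history[user_id]
    # Step S2201: similar-user subset one.
    subset_one = [u for u, h in history.items()
                  if u != user_id and jaccard(target, h) >= threshold]
    if not subset_one:
        return None
    # Step S2202: the partner held by the largest number of similar users.
    counts = Counter(partners[u] for u in subset_one if u in partners)
    return counts.most_common(1)[0][0] if counts else None

history = {"u1": {"a", "b", "c"}, "u2": {"a", "b"}, "u3": {"x", "y"}}
partners = {"u2": "fox", "u3": "owl"}
print(select_virtual_partner("u1", history, partners))  # -> fox
```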
Step S230: collecting the user's voice and image information. This comprises the following sub-steps:
Step S2301: collecting voice using near-field and far-field collection technologies.
External collection devices, comprising near-field collection equipment and far-field collection equipment, are arranged in the mobile device. The number of threads is set according to the number of external collection devices; each thread is responsible for one voice collection channel, and all threads share one voice flag bit, which marks the overall state of the external collection devices. Each thread controls its near-field or far-field collection equipment to collect sound according to the voice flag bit.
The multi-channel sound data collected by the threads is saved in memory.
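The threading scheme can be sketched as below; a threading.Event stands in for the shared voice flag bit, and the actual device reads are stubbed out, since the audio I/O calls are hardware-specific.

```python
# Sketch of step S2301: one capture thread per external collection device,
# all sharing one voice flag bit. Real microphone reads are stubbed out.
import threading, queue, time

voice_flag = threading.Event()   # the shared voice flag bit
captured = queue.Queue()         # in-memory store for the multi-channel data

def capture_channel(device_name: str):
    while voice_flag.is_set():                # each thread polls the flag
        chunk = f"{device_name}-chunk"        # stand-in for a real mic read
        captured.put((device_name, chunk))
        time.sleep(0.01)

devices = ["near_field_mic", "far_field_mic"]  # one thread per device
voice_flag.set()
threads = [threading.Thread(target=capture_channel, args=(d,)) for d in devices]
for t in threads:
    t.start()
time.sleep(0.05)
voice_flag.clear()               # clearing the flag stops every channel
for t in threads:
    t.join()
print(captured.qsize(), "chunks buffered in memory")
```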
Step S2302: collecting the user's image information with a camera.
The camera collects the person's facial information, down to the positions of the facial features. This specifically comprises the following steps:
performing face detection on the picture captured by the camera using a face detection algorithm, and obtaining the set of face pixel points from the picture;
locating the facial contour within the set of face pixel points, and identifying the coordinates of the facial features.
Step S2303: performing noise reduction on the collected voice data.
Step S2304: denoising the acquired images.
Step S240: the virtual partner communicates with the user according to the collected user voice and image information. This specifically comprises the following steps:
Step S2401: recognizing the collected voice information and converting it into recognizable command information one.
The voice information is recognized, interpreted, and converted into recognizable command information. For example, the user says: "Excuse me, what day of the week is it today?" After the voice information is recognized, it is converted into command information instructing the virtual partner to answer "Today is Sunday."
Step S2402: recognizing the collected image information and converting it into recognizable command information two.
The image information is recognized, interpreted, and converted into recognizable command information. For example, if the user's facial expression is recognized as a smile, it is converted into recognizable command information instructing the virtual partner to make a smiling expression as well.
Step S2403: driving the component parts of the virtual partner to respond according to command information one and command information two.
The respective component parts of the virtual partner are driven to respond according to command information one and command information two, for example making the virtual partner smile while uttering the voice "Today is Sunday."
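A minimal dispatch sketch for step S2403 follows, assuming each component part exposes a respond() hook; the component names and the hook are hypothetical.

```python
# Sketch of step S2403: dispatch command information one and two to the
# component parts they drive. Component names and respond() are hypothetical.
class Component:
    def __init__(self, name: str):
        self.name = name
    def respond(self, command: str):
        print(f"{self.name} -> {command}")

class VirtualPartner:
    def __init__(self):
        self.parts = {"mouth": Component("mouth"), "face": Component("face")}
    def dispatch(self, command_one: dict, command_two: dict):
        if command_one.get("action") == "speak":
            self.parts["mouth"].respond(command_one["text"])       # speech
        if command_two.get("action") == "set_expression":
            self.parts["face"].respond(command_two["expression"])  # mimic user

VirtualPartner().dispatch({"action": "speak", "text": "Today is Sunday."},
                          {"action": "set_expression", "expression": "smile"})
```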
Example 2
The structure and working principle of the intelligent voice communication method and system are described above; the establishment process of the virtual partner is introduced below. As shown in FIG. 3, it comprises the following steps:
Step S310: collecting the user's initial data.
The user's initial data may be obtained from user input or from user usage data. For example, the user may input basic data such as age and gender; in addition, the mobile terminal or the client may obtain usage data such as the videos the user watches and the audio the user listens to. All of these serve as the user's initial data.
Step S320: selecting a virtual partner for the user according to the user's initial data.
The method specifically comprises the following steps:
step S3201, a plurality of initial virtual partners are established in advance, and keywords are distributed to the initial virtual partners;
step S3202, analyzing the initial data of the user, and extracting a keyword set from the initial data;
and matching with the keywords of the initial virtual partner according to the extracted keyword set to obtain the matched initial virtual partner.
Step 33202, personalize the initial virtual partner to obtain the user-specific initial virtual partner.
The initial virtual partner is composed of all component parts, and personalized processing can be carried out on all the component parts, so that a virtual partner which is more matched with a user is built. For example, the virtual partner comprises a head, a trunk, four limbs, and the like, the head further comprises eyes, ears, mouths and noses, and each component of the virtual partner is personalized according to initial data of the user, for example, the initial data of the user shows that the user is a girl, the virtual partner selected for the user is a female image, further, the initial data of the user shows that the user likes sports, the sports image is designed for the virtual partner, and the like.
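As a hedged sketch of steps S3201–S3203, the keyword matching and personalization can be combined as below; the partner inventory, keyword sets, and component fields are all illustrative assumptions.

```python
# Sketch of steps S3201-S3203: match the user's keyword set against the
# keyword tags of pre-established initial virtual partners, then personalize
# the best match. All data here is illustrative.
INITIAL_PARTNERS = [
    {"name": "sporty_fox", "keywords": {"sports", "outdoor"}, "gender": "female"},
    {"name": "bookish_owl", "keywords": {"reading", "music"}, "gender": "male"},
]

def build_partner(user_keywords: set, user_profile: dict) -> dict:
    # Steps S3201-S3202: pick the initial partner with most keyword overlap.
    best = max(INITIAL_PARTNERS,
               key=lambda p: len(p["keywords"] & user_keywords))
    partner = dict(best)
    # Step S3203: personalize component parts from the user's initial data.
    partner["gender"] = user_profile.get("gender", partner["gender"])
    if "sports" in user_keywords:
        partner["outfit"] = "sportswear"   # sporty look for a sports fan
    return partner

print(build_partner({"sports", "music"}, {"gender": "female", "age": 9}))
```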
In particular, a storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk, storing a computer program that, when executed, performs the method described above.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific embodiments of the present application, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope disclosed in the present application, still modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be covered by its protection scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. An intelligent voice communication method, comprising the following steps:
obtaining user identification information in response to a user start command;
obtaining the virtual partner corresponding to the user according to the user identification information;
collecting the user's voice and image information;
and having the virtual partner communicate with the user according to the collected voice and image information;
wherein obtaining the virtual partner corresponding to the user according to the user identification information comprises:
obtaining historical usage data of the user according to the user identification, and, according to the historical usage data of the user, calculating similar-user subset one, namely the subset of users in a user set whose historical usage data is similar to the user's;
and selecting a virtual partner for the identified user according to the virtual partners of the users in similar-user subset one.
2. The intelligent voice communication method according to claim 1, wherein the virtual partner is established in advance, comprising the sub-steps of:
collecting user initial data in advance;
analyzing initial data of a user and selecting a virtual partner for the user according to an analysis result, comprising:
establishing a plurality of initial virtual partners in advance, and distributing keywords for the initial virtual partners;
analyzing initial data of a user, and extracting a keyword set from the initial data;
matching with the keywords of the initial virtual partner according to the extracted keyword set to obtain a matched initial virtual partner;
carrying out personalized processing on the initial virtual partner to obtain a user-specific initial virtual partner;
the initial virtual partner is composed of component parts, each of which can be personalized so as to build a virtual partner better matched to the user.
3. The intelligent voice communication method of claim 2, wherein the virtual partner is divided into a plurality of component parts.
4. The intelligent voice communication method according to claim 1, wherein the collecting of the user voice and image information comprises the steps of:
collecting voice by using near-field collection and far-field collection technologies;
collecting image information of a user by using a camera;
carrying out noise reduction processing on the collected voice data;
and denoising the acquired image.
5. The intelligent voice communication method as claimed in claim 1, wherein the virtual partner communicates with the user based on the collected user voice and image information, comprising the sub-steps of:
recognizing the collected voice information and converting it into recognizable command information one;
recognizing the collected image information and converting it into recognizable command information two;
and driving the component parts of the virtual partner to respond according to command information one and command information two.
6. An intelligent voice communication system comprises the following components:
a mobile device that receives a user start command, obtains user identification information, obtains the virtual partner corresponding to the user according to the user identification information, collects the user's voice and image information, and has the virtual partner communicate with the user according to the collected voice and image information;
a client for receiving instructions sent by the mobile device and displaying the virtual partner according to the instructions;
and a cloud server for storing the data uploaded by the mobile device and feeding the data back to the mobile device or the client;
the mobile device obtaining the virtual partner corresponding to the user according to the user identification information includes:
obtaining historical usage data of the user according to the user identification, and, according to the historical usage data of the user, calculating similar-user subset one, namely the subset of users in a user set whose historical usage data is similar to the user's;
and selecting a virtual partner for the identified user according to the virtual partners of the users in similar-user subset one.
7. The intelligent voice communication system of claim 6, wherein the cloud server pre-establishes the virtual partner, comprising the sub-steps of:
collecting user initial data in advance;
analyzing initial data of a user and selecting a virtual partner for the user according to an analysis result, comprising:
establishing a plurality of initial virtual partners in advance, and distributing keywords for the initial virtual partners;
analyzing initial data of a user, and extracting a keyword set from the initial data;
matching with the keywords of the initial virtual partner according to the extracted keyword set to obtain a matched initial virtual partner;
carrying out personalized processing on the initial virtual partner to obtain a user-specific initial virtual partner;
the initial virtual partner is composed of component parts, each of which can be personalized so as to build a virtual partner better matched to the user.
8. The intelligent voice communication system of claim 6, wherein the virtual partner is divided into a plurality of component parts.
9. The intelligent voice communication system of claim 6, wherein the mobile device collecting user voice and image information comprises the steps of:
collecting voice by using near-field collection and far-field collection technologies;
collecting image information of a user by using a camera;
carrying out noise reduction processing on the collected voice data;
and denoising the acquired image.
10. The intelligent voice communication system of claim 6, wherein the virtual partner communicates with the user based on the collected user voice and image information, comprising the sub-steps of:
recognizing the collected voice information and converting it into recognizable command information one;
recognizing the collected image information and converting it into recognizable command information two;
and driving the component parts of the virtual partner to respond according to command information one and command information two.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010095301.2A | 2020-02-17 | 2020-02-17 | Intelligent voice communication method and intelligent voice communication system
Publications (1)

Publication Number | Publication Date
---|---
CN110928521A | 2020-03-27
Family
ID=69854861
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010095301.2A | Intelligent voice communication method and intelligent voice communication system | 2020-02-17 | 2020-02-17
Country Status (1)

Country | Link
---|---
CN | CN110928521A
Citations (6)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN102017649A | 2008-04-24 | 2011-04-13 | Samsung Electronics Co., Ltd. | Method and apparatus for recommending broadcast contents
CN102947844A | 2010-06-22 | 2013-02-27 | Microsoft Corp. | Social task lists
CN103500244A | 2013-09-06 | 2014-01-08 | 雷路德 | Virtual friend conversational system and method thereof
CN105830048A | 2013-12-16 | 2016-08-03 | Nuance Communications, Inc. | Systems and methods for providing a virtual assistant
US20160379107A1 | 2015-06-24 | 2016-12-29 | Baidu Online Network Technology (Beijing) Co., Ltd. | Human-computer interactive method based on artificial intelligence and terminal device
CN108664654A | 2018-05-18 | 2018-10-16 | Beijing QIYI Century Science and Technology Co., Ltd. | Anchor recommendation method and device based on user similarity
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200327