CN108170278A - Communication assistance method and device - Google Patents
Communication assistance method and device
- Publication number
- CN108170278A CN108170278A CN201810017513.1A CN201810017513A CN108170278A CN 108170278 A CN108170278 A CN 108170278A CN 201810017513 A CN201810017513 A CN 201810017513A CN 108170278 A CN108170278 A CN 108170278A
- Authority
- CN
- China
- Prior art keywords
- communication
- real-time
- video stream
- user
- audio stream
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Abstract
The present invention proposes a communication assistance method and device. The method includes: during communication, receiving in real time the video stream and audio stream of the communication partner captured in real time by an augmented reality (AR) device; analyzing the captured video stream and/or audio stream in real time to obtain key information from the video stream and audio stream; inputting the key information into a real-time communication guidance model for calculation to obtain real-time communication guidance; and providing the obtained real-time communication guidance, through the AR device, to the user communicating with the partner. The present invention achieves timely and efficient communication guidance.
Description
Technical field
The present invention relates to the field of VR (Virtual Reality) technology, and in particular to a communication assistance method and device.
Background technology
AR (Augmented Reality) is a technology that calculates the position and angle of the camera image in real time and overlays corresponding images, video, and 3D models. The goal of this technology is to fit the virtual world over the real world on a screen and allow the two to interact.
AR seamlessly integrates real-world information with virtual-world information. Entity information that is difficult to experience within a certain time and space of the real world (visual information, sound, taste, touch, etc.) is simulated by computers and related technologies and then superimposed, so that virtual information is applied to the real world and perceived by human senses, achieving a sensory experience that transcends reality. The real environment and virtual objects are superimposed onto the same picture or space in real time and coexist there. In visual augmented reality, the user wears a head-mounted display through which the real world and computer graphics are composited, so the real world can still be seen around the graphics.
An AR system has three prominent characteristics: it integrates the real world with virtual information; it is interactive in real time; and it registers virtual objects in three-dimensional space. AR is widely used not only in fields similar to VR, such as advanced weapons, aircraft development, data-model visualization, virtual training, entertainment, and art; because it can augment the display of the real environment, it also holds clearer advantages over VR in fields such as medical research and anatomy training, precision-instrument manufacturing and repair, military aircraft navigation, engineering design, and tele-robotic control.
AR technology encompasses new techniques and tools such as multimedia, 3D modeling, real-time video display and control, multi-sensor fusion, real-time tracking and registration, and scene fusion. It provides information beyond what humans can ordinarily perceive.
Three components are required for an AR system to work: 1. a head-mounted display; 2. a tracking system; 3. mobile computing capability. The goal of AR developers is to integrate these three components into a single unit placed in a belt-worn device that can wirelessly transmit information to a display resembling ordinary glasses.
Summary of the invention
The present invention provides a communication assistance method and device, so as to achieve timely and efficient communication guidance.
The technical solution of the present invention is realized as follows:
A communication assistance method, the method comprising:
during communication, receiving in real time the video stream and audio stream of the communication partner captured in real time by an augmented reality (AR) device;
analyzing the captured video stream and/or audio stream in real time to obtain key information from the video stream and audio stream; inputting the key information into a real-time communication guidance model for calculation to obtain real-time communication guidance; and providing the obtained real-time communication guidance, through the AR device, to the user communicating with the communication partner.
The analyzing the captured video stream and/or audio stream in real time to obtain key information from the video stream and audio stream comprises:
performing human-body edge detection and localization on each frame of the video stream to obtain the image of the clothing and/or accessory region, and performing color analysis to obtain the color of the clothing and/or accessories; and/or
performing face detection on each frame of the video stream, extracting features from the detected face, and analyzing the partner's expression from the extracted features; and/or
performing limb detection and localization on each frame of the video stream, extracting features from the detected limbs, analyzing the limb action from the extracted features, and determining the meaning of the current limb action according to predefined meanings of different limb actions; and/or
extracting the topic of the current conversation from the audio stream; and/or
extracting the partner's intonation and speaking rate from the audio stream, and analyzing them to obtain the partner's emotional state;
and the inputting the key information into the real-time communication guidance model for calculation comprises:
inputting the color of the clothing and/or accessories, and/or the partner's expression, and/or the meaning of the current limb action, and/or the topic of the current conversation, and/or the partner's emotional state into the real-time communication guidance model for calculation.
After the obtaining the key information from the video stream and audio stream, the method further comprises:
providing the color of the clothing and/or accessories, and/or the partner's expression, and/or the meaning of the current limb action, and/or the topic of the current conversation, and/or the partner's emotional state to the user through the AR device.
The method further comprises:
before the user communicates with the communication partner, according to a predefined range of planned-communication information, obtaining from the user and recording each item of planned-communication information for this communication that falls within that range, and searching a communication-association database for the information associated with each item of planned-communication information;
inputting the planned-communication information for this communication and the associated information found into a communication master-plan model for calculation to obtain the communication master plan for this communication, and providing the master plan to the user.
The method further includes:
In communication process, receive the collected user of AR equipment video flowing or/and audio stream, to the video flowing or/
It is analyzed with audio stream, frowns action if identifying in video streaming or/and identify questioning intonation and again from audio stream
Returning to customs keyword then learns being intended to current conversation content or/and not understood to the keyword for user, then obtains current conversation
The explanation of content or/and the keyword, and the explanation is supplied to user by AR equipment;Or/and
In communication process, the video flowing of the collected user of AR equipment is received, by eyeball tracking method, finds user
Sight focus on an object, then obtain the relevant information of the object, and the relevant information of the object is supplied to by AR equipment
User.
A communication assistance device, the device comprising:
an analysis module, configured to: during communication, receive in real time the video stream and audio stream of the communication partner captured in real time by an augmented reality (AR) device; and analyze the captured video stream and/or audio stream in real time to obtain key information from the video stream and audio stream;
a real-time communication guidance module, configured to: input the key information into a real-time communication guidance model for calculation to obtain real-time communication guidance, and provide the obtained real-time communication guidance, through the AR device, to the user communicating with the partner.
The analysis module analyzing the captured video stream and/or audio stream in real time to obtain the key information from the video stream and audio stream comprises:
performing human-body edge detection and localization on each frame of the video stream to obtain the image of the clothing and/or accessory region, and performing color analysis to obtain the color of the clothing and/or accessories; and/or
performing face detection on each frame of the video stream, extracting features from the detected face, and analyzing the partner's expression from the extracted features; and/or
performing limb detection and localization on each frame of the video stream, extracting features from the detected limbs, analyzing the limb action from the extracted features, and determining the meaning of the current limb action according to predefined meanings of different limb actions; and/or
extracting the topic of the current conversation from the audio stream; and/or
extracting the partner's intonation and speaking rate from the audio stream, and analyzing them to obtain the partner's emotional state;
and the real-time communication guidance module inputting the key information into the real-time communication guidance model for calculation comprises:
inputting the color of the clothing and/or accessories, and/or the partner's expression, and/or the meaning of the current limb action, and/or the topic of the current conversation, and/or the partner's emotional state into the real-time communication guidance model for calculation.
After the analysis module obtains the key information from the video stream and audio stream, the device further:
provides the color of the clothing and/or accessories, and/or the partner's expression, and/or the meaning of the current limb action, and/or the topic of the current conversation, and/or the partner's emotional state to the user through the AR device.
The device further comprises a communication master-plan module, configured to: before the user communicates with the communication partner, according to the predefined range of planned-communication information, obtain from the user and record each item of planned-communication information for this communication that falls within that range; search the communication-association database for the information associated with each item; input the planned-communication information for this communication and the associated information found into the communication master-plan model for calculation to obtain the communication master plan for this communication; and provide the master plan to the user.
The device further comprises a control response module, configured to: during communication, receive the user's video stream and/or audio stream captured by the AR device and analyze them; if a frowning action is recognized in the video stream, and/or a questioning intonation and a repeated keyword are recognized in the audio stream, determine that the user does not understand the current conversation content and/or the keyword, then obtain an explanation of the current conversation content and/or the keyword and provide the explanation to the user through the AR device; and/or
during communication, receive the user's video stream captured by the AR device, find by eye tracking that the user's gaze is focused on an object, then obtain information about that object and provide it to the user through the AR device.
In the present invention, during communication, the video stream and audio stream of the communication partner captured in real time by the augmented reality AR device are received in real time; the captured video stream and/or audio stream are analyzed in real time to obtain key information from the video stream and audio stream; the key information is input into a real-time communication guidance model for calculation to obtain real-time communication guidance; and the obtained real-time communication guidance is provided, through the AR device, to the user communicating with the partner, thereby achieving timely and efficient communication guidance.
Description of the drawings
Fig. 1 is a flowchart of the communication assistance process provided by one embodiment of the present invention;
Fig. 2 is a flowchart of the communication assistance process provided by another embodiment of the present invention;
Fig. 3 is a structural diagram of the communication assistance device provided by an embodiment of the present invention.
Detailed description
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of the communication assistance process provided by one embodiment of the present invention; the specific steps are as follows:
Step 101: during communication, receive in real time the video stream and audio stream of the communication partner captured in real time by the augmented reality AR device.
Step 102: analyze the captured video stream and/or audio stream in real time to obtain key information from the video stream and audio stream.
Step 103: input the obtained key information into a real-time communication guidance model for calculation to obtain real-time communication guidance, and provide the obtained real-time communication guidance, through the AR device, to the user communicating with the partner.
Step 102 may specifically include:
performing human-body edge detection and localization on each frame of the video stream to obtain the image of the clothing and/or accessory region, and performing color analysis to obtain the color of the clothing and/or accessories; and/or
performing face detection on each frame of the video stream, extracting features from the detected face, and analyzing the partner's expression from the extracted features; and/or
performing limb detection and localization on each frame of the video stream, extracting features from the detected limbs, analyzing the limb action from the extracted features, and determining the meaning of the current limb action according to predefined meanings of different limb actions; and/or
extracting the topic of the current conversation from the audio stream; and/or
extracting the partner's intonation and speaking rate from the audio stream, and analyzing them to obtain the partner's emotional state;
and in step 103, the color of the clothing and/or accessories, and/or the partner's expression, and/or the meaning of the current limb action, and/or the topic of the current conversation, and/or the partner's emotional state are input into the real-time communication guidance model for calculation.
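For illustration, the cues extracted in step 102 can be gathered into a single record before being fed to the guidance model in step 103. The sketch below assumes a simple record type; the field names (`clothing_color`, `expression`, etc.) are hypothetical, since the patent fixes no data format, and any subset of cues may be present ("and/or"):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for the "key information" of step 102.
@dataclass
class KeyInfo:
    clothing_color: Optional[str] = None   # from edge detection + color analysis
    expression: Optional[str] = None       # from face detection + feature analysis
    gesture_meaning: Optional[str] = None  # from limb detection + predefined mapping
    topic: Optional[str] = None            # from speech recognition on the audio stream
    emotional_state: Optional[str] = None  # from intonation and speaking rate

def to_model_input(info: KeyInfo) -> dict:
    """Keep only the cues that were actually extracted, since the patent
    allows any subset of them to be input to the guidance model."""
    return {k: v for k, v in vars(info).items() if v is not None}

record = to_model_input(KeyInfo(expression="happy", topic="sports"))
```

A record built this way would then be the "group of information" on which the guidance model calculates.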
In practical applications, the color of the clothing and/or accessories, and/or the partner's expression, and/or the meaning of the current limb action, and/or the topic of the current conversation, and/or the partner's emotional state may also be provided to the user through the AR device.
In practical applications, before the user communicates with the communication partner, according to the predefined range of planned-communication information, each item of planned-communication information for this communication that falls within that range is obtained from the user and recorded; the communication-association database is searched for the information associated with each item; the planned-communication information for this communication and the associated information found are input into a communication master-plan model for calculation to obtain the communication master plan for this communication, which is provided to the user.
In practical applications, during communication, the user's video stream and/or audio stream captured by the AR device are received and analyzed; if a frowning action is recognized in the video stream, and/or a questioning intonation and a repeated keyword are recognized in the audio stream, it is determined that the user does not understand the current conversation content and/or the keyword; an explanation of the current conversation content and/or the keyword is then obtained and provided to the user through the AR device. And/or, during communication, the user's video stream captured by the AR device is received; it is found by eye tracking that the user's gaze is focused on an object, and information about that object is obtained and provided to the user through the AR device.
Fig. 2 is a flowchart of the communication assistance process provided by another embodiment of the present invention; the specific steps are as follows:
Step 201: before the user communicates with the communication partner, the communication assistance device obtains from the user, according to the predefined range of planned-communication information, each item of planned-communication information for this communication that falls within that range, and records it.
The planned-communication information included in the range is, for example: communication time, communication place, communication type, the partner's background information, etc.
Communication types are, for example: blind date, business negotiation, psychological counseling, judicial interrogation, etc.
The partner's background information is, for example: one or any combination of name, age, height, gender, native place, hobbies, education, experience, phone number, WeChat ID, QQ number, Weibo account, ID-card number, etc.
Step 202: the communication assistance device searches the communication-association database, according to the recorded planned-communication information for this communication, for the information associated with each item.
The communication-association database stores various kinds of information associated with planned-communication information, for example: the registration information corresponding to a phone number (including name, ID-card number, home address, etc.), the registration information corresponding to a WeChat ID, and the name, address, photo, etc. corresponding to an ID-card number.
Step 203: the communication assistance device inputs the planned-communication information for this communication and the associated information found into the communication master-plan model for calculation, obtains the communication master plan for this communication, and provides the master plan to the user.
The communication master-plan model is trained in advance. Specifically, multiple communication training samples are collected in advance; each training sample is represented by planned-communication information and its associated information, and a corresponding communication master plan is provided for each training sample. The communication master-plan model is then trained from all the training samples and their corresponding master plans. Once trained, given any group of planned-communication information and its associated information as input, the model calculates on that group of information and outputs the corresponding communication master plan.
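The patent leaves the form of the master-plan model open. As one hedged illustration of "train from labelled samples, then calculate on a new group of information", the sketch below simply stores the training samples and returns the plan of the most-overlapping sample; the sample data and the overlap heuristic are invented for the example and are not the patent's method:

```python
# Each training sample pairs a set of planned-communication features with a plan.
def train(samples):
    """'Training' in this placeholder model just stores the labelled samples."""
    return list(samples)

def predict(model, info: set) -> str:
    """Return the master plan of the sample whose feature set overlaps most
    with the new planned-communication information."""
    return max(model, key=lambda s: len(s[0] & info))[1]

model = train([
    ({"blind_date", "outgoing"}, "casual topics, mild tone"),
    ({"negotiation", "deadline"}, "business topics, firm tone"),
])
plan = predict(model, {"negotiation", "contract"})  # "business topics, firm tone"
```

Any real implementation would replace this lookup with whatever trained model the deployment actually uses.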
The communication master plan includes: the communication strategy with the partner, etc. A communication strategy is, for example: the content to discuss with the partner (e.g. current events, sports, literature, art, which can be further refined, e.g. a specific news event or sports match), and the communication attitude and tone (e.g. firm, mild), etc.
Step 204: during communication, the communication assistance device receives in real time the video stream and audio stream of the communication partner captured in real time by the AR device.
Before communication begins, the user communicating with the partner puts on the wearable AR device.
Step 205: the communication assistance device performs human-body edge detection and localization in real time on each frame of the captured video stream to obtain the image of the clothing region, and performs color analysis (e.g. RGB analysis) to obtain the clothing color; meanwhile, it performs face detection on each frame in real time, extracts features from the detected face, and analyzes the facial expression from the extracted features; meanwhile, it performs limb detection and localization on the images in real time, extracts features from the detected limbs, analyzes the limb action from the extracted features, and determines the meaning of the current limb action according to the predefined meanings of different limb actions.
Facial expressions are, for example: happy, angry, sad, bored, disdainful. Limb actions are, for example: leaning forward, leaning back, shaking the head; leaning forward typically indicates closeness and interest, leaning back indicates distance and loss of interest, shaking the head indicates disagreement, etc.
Further, brand logos in the clothing region may also be recognized, as may information about accessories (e.g. hats, necklaces, watches), such as the accessory's color and brand logo.
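The "color analysis (e.g. RGB analysis)" of step 205 could, for instance, classify each pixel of the segmented clothing region against a small palette of named colors and report the most frequent name. The palette and squared-distance metric below are illustrative choices, not a method specified in the patent:

```python
# A tiny illustrative reference palette of named RGB colors.
PALETTE = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
           "black": (0, 0, 0), "white": (255, 255, 255)}

def nearest_color(px):
    """Name of the palette color closest to pixel px by squared RGB distance."""
    return min(PALETTE, key=lambda n: sum((a - b) ** 2 for a, b in zip(PALETTE[n], px)))

def dominant_color(region_pixels):
    """Most frequent palette name over all pixels of the clothing region."""
    counts = {}
    for px in region_pixels:
        name = nearest_color(px)
        counts[name] = counts.get(name, 0) + 1
    return max(counts, key=counts.get)

# e.g. a mostly-red clothing region with one dark pixel
print(dominant_color([(250, 10, 10), (240, 30, 20), (10, 10, 10)]))  # red
```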
Step 206: the communication assistance device extracts the topic of the current conversation and the partner's intonation and speaking rate from the audio stream, and analyzes the intonation and speaking rate to obtain the partner's emotional state.
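The patent does not specify how intonation and speaking rate map to an emotional state; a threshold rule such as the following is one minimal assumption. Pitch in Hz and rate in words per minute are hypothetical input features, and the thresholds and labels are invented for the example:

```python
# Illustrative thresholds only: high pitch plus fast speech is read as agitated,
# very slow speech as subdued, anything else as calm.
def emotional_state(pitch_hz: float, words_per_min: float) -> str:
    if pitch_hz > 220 and words_per_min > 180:
        return "agitated"
    if words_per_min < 100:
        return "subdued"
    return "calm"

state = emotional_state(250.0, 200.0)  # "agitated"
```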
Step 207: the communication assistance device provides the partner's clothing information, expression, the meaning of the limb actions, and the partner's emotional state to the user through the AR device.
The AR device may provide the partner's clothing information, expression, the meaning of the limb actions, and the partner's emotional state to the user in the form of voice, text, etc.
Step 208: the communication assistance device inputs the partner's clothing information, expression, the meaning of the limb actions, the topic of the current conversation, and the partner's emotional state into the real-time communication guidance model for calculation in real time, obtains real-time communication guidance, and provides the guidance to the user through the AR device.
The real-time communication guidance model is trained in advance. Specifically, multiple real-time communication training samples are collected in advance; each sample can be represented by one kind, or any combination, of the following real-time communication information: the partner's clothing information, expression, the meaning of limb actions, the topic of the current conversation, and the partner's emotional state; and corresponding real-time communication guidance is provided for each sample. The real-time communication guidance model is then trained from all the samples and their corresponding guidance. Once trained, given any group of real-time communication information as input, the model calculates on that group of information and outputs the corresponding real-time communication guidance.
Real-time communication guidance includes: an analysis of the partner, a real-time communication strategy for the partner, etc.
For example: if the partner wears brightly colored clothing, then by querying a corresponding personality-analysis database, the partner's likely personality traits can be obtained: extroverted, lively. The personality-analysis database defines the features corresponding to various personality types.
A real-time communication strategy is, for example: the content to discuss with the partner (e.g. current events, sports, literature, art, which can be further refined, e.g. a specific news event or sports match), and the communication attitude and tone (e.g. firm, mild), etc.
The AR device may provide the real-time communication guidance to the user in the form of voice, text, pictures, etc.
In practical applications, the user may also intervene in the communication process through predefined instructions. For example, the user may actively obtain communication assistance information by sending control information to the AR device.
For example: when the partner mentions a keyword the user does not understand, the user may frown as a preset action instruction, use a questioning intonation as a preset instruction, or repeat the keyword as a preset instruction. The AR device sends the captured video and audio of the user to the communication assistance device, which recognizes the frowning action in the video images, or the questioning intonation and repeated keyword in the audio, and thus determines that the user does not understand the current conversation content or the keyword. It then performs corresponding processing, e.g. searching a predefined database or the web for the keyword, so as to provide an explanation of the current conversation content or the keyword, and provides the explanation to the user in the form of voice, text, pictures, etc.
For another example: after the AR device sends the captured video and audio of the user to the communication assistance device, the device finds by eye tracking that the user's gaze has been focused on an object (e.g. the partner's watch) for longer than a preset duration; it then obtains information about the watch from a predefined database or the network, e.g. brand, origin, price, etc., and provides the watch's information to the user in the form of voice, text, pictures, etc.
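The eye-tracking trigger — gaze fixed on one object longer than a preset duration — can be sketched as a run-length check over gaze samples. The sampling period and dwell threshold are illustrative, and the object label for each sample is assumed to be supplied by the AR device's tracker:

```python
def dwell_target(gaze_samples, sample_dt=0.1, min_dwell=2.0):
    """gaze_samples: object labels, one per sample_dt seconds.
    Return the first object fixated for at least min_dwell seconds, else None."""
    run_obj, run_len = None, 0
    for obj in gaze_samples:
        if obj == run_obj:
            run_len += 1
        else:
            run_obj, run_len = obj, 1
        if run_len * sample_dt >= min_dwell:
            return run_obj
    return None

# 2.5 s of uninterrupted fixation on the watch exceeds the 2 s threshold
target = dwell_target(["watch"] * 25)  # "watch"
```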
An application example of the present invention is given below.
This application example is directed to an interrogation scenario.
Step 01: the communication assistance device obtains the planned-communication information for this interrogation, specifically as follows:
communication type: interrogation; the partner's background information: male, 32 years old, height 178 cm, ID-card number: ********.
Step 02: the communication assistance device searches the communication-association database, according to the obtained planned-communication information for this interrogation, for the information associated with each item.
Step 03: the communication assistance device inputs the planned-communication information for this interrogation and the associated information found into the communication master-plan model for calculation, and obtains the communication master plan.
Step 04: before communication begins, the primary communicating user puts on the wearable AR device and ensures that it is switched on, with the AR device's camera facing the partner; during communication, the AR device captures the partner's video stream in real time and transmits it to the communication assistance device.
Step 05: the communication assistance device performs edge detection, face and body localization, and feature recognition on the video stream in real time, analyzes in real time the meanings represented by the partner's current facial expression and body language, generates assistance information, and presents it to the user through the AR device.
Step 06: during communication, the AR device transmits the captured audio information to the communication assistance device in real time; the device starts speech recognition, recognizes the partner's conversation content, speaking rate, and intonation, analyzes the partner's emotional state, and displays it to the user through the AR device.
Step 07: the communication assistance device inputs the partner's current expression, body language, intonation, speaking rate, and the content and topic of the current conversation into the real-time communication guidance model for calculation, obtains real-time communication guidance, and provides it to the user through the AR device, so as to give the user online communication guidance.
Fig. 3 is a structural diagram of the communication assistance device provided by an embodiment of the present invention. The device mainly comprises an analysis module 31 and a real-time communication guidance module 32, wherein:
the analysis module 31: during communication, receives in real time the video stream and audio stream of the communication partner captured in real time by the augmented reality AR device; analyzes the captured video stream and/or audio stream in real time to obtain key information from the video stream and audio stream; and sends the obtained key information to the real-time communication guidance module 32;
the real-time communication guidance module 32: inputs the key information sent by the analysis module 31 into the real-time communication guidance model for calculation, obtains real-time communication guidance, and provides the obtained real-time communication guidance, through the AR device, to the user communicating with the partner.
Further, analysis module 31 analyzing the acquired video stream or/and audio stream in real time to obtain the key information in the video stream and audio stream includes:
performing human-body edge detection and localization on each frame image in the video stream to obtain the image of the clothing or/and accessory region, and performing color analysis to obtain the color of the clothing or/and accessories; or/and
performing face detection on each frame image in the video stream, extracting features from the detected face, and analyzing the communication partner's expression from the extracted features; or/and
performing limb detection and localization on each frame image in the video stream, extracting features from the detected limbs, analyzing the body movement from the extracted features, and determining the meaning of the current body movement according to pre-defined meanings of different body movements; or/and
extracting the topic of the current conversation from the audio stream; or/and
extracting the communication partner's intonation and speech rate from the audio stream, and analyzing the partner's emotional state from that intonation and speech rate;
and real-time communication guidance module 32 inputting the key information sent by analysis module 31 into the real-time communication guidance model for calculation includes:
inputting the color of the clothing or/and accessories, or/and the communication partner's expression, or/and the meaning of the current body movement, or/and the topic of the current conversation, or/and the partner's emotional state into the real-time communication guidance model for calculation.
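Because each extraction step is optional ("or/and"), the model input can be assembled from whichever key items were actually obtained. A sketch, with hypothetical field names, since the disclosure defines no concrete input schema:

```python
def build_model_input(clothing_color=None, expression=None,
                      gesture_meaning=None, topic=None, emotion=None):
    """Assemble only the key items that were extracted into one model input.

    All parameter names are illustrative assumptions; the disclosure does not
    define an input schema for the real-time communication guidance model.
    """
    fields = {
        "clothing_color": clothing_color,
        "expression": expression,
        "gesture_meaning": gesture_meaning,
        "topic": topic,
        "emotion": emotion,
    }
    # Drop items that were not extracted for this frame/window.
    return {k: v for k, v in fields.items() if v is not None}

print(build_model_input(expression="smile", topic="schedule"))
```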
Further, after analysis module 31 obtains the key information in the video stream and audio stream, the device further:
provides the color of the clothing or/and accessories, or/and the communication partner's expression, or/and the meaning of the current body movement, or/and the topic of the current conversation, or/and the partner's emotional state to the user through the AR device.
Further, the device includes a communication master-plan module, configured to, before the user communicates with the communication partner: according to a pre-defined range of planned communication information, obtain from the user and record each item of planned communication information for this communication that falls within that range; search a communication association database for the information associated with each item of planned communication information; input the planned communication information for this communication and the associated information found into a communication master-plan model for calculation, obtain the communication master plan for this communication, and provide the master plan to the user.
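The master-plan module can be sketched as a lookup over the communication association database followed by a stand-in planning model. The database contents and structure below are invented for illustration:

```python
# Hypothetical association database: planned topic -> related background facts.
comm_db = {
    "project budget": ["last quarter's overrun", "approved spending ceiling"],
    "delivery schedule": ["milestone slipped in May"],
}

def plan_communication(planned_items):
    """Stand-in for the communication master-plan model: attach the associated
    information found in the database to each planned communication item."""
    plan = []
    for item in planned_items:
        plan.append({"topic": item, "background": comm_db.get(item, [])})
    return plan  # would be provided to the user before the conversation

print(plan_communication(["project budget", "small talk"]))
```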
Further, the device includes a control-response module, configured to: during communication, receive the user's video stream or/and audio stream acquired by the AR device and analyze the video stream or/and audio stream; if a frowning action is identified in the video stream or/and a questioning intonation and a repeated keyword are identified in the audio stream, conclude that the user does not understand the current conversation content or/and the keyword, then obtain an explanation of the current conversation content or/and the keyword and provide the explanation to the user through the AR device; or/and, during communication, receive the user's video stream acquired by the AR device, and upon finding through eyeball tracking that the user's gaze is focused on an object, obtain the relevant information of that object and provide it to the user through the AR device.
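The control-response module's confusion test combines a visual cue with two audio cues. A sketch, with an assumed repetition threshold of three occurrences (the disclosure only says "repeated keyword"):

```python
from collections import Counter

def detect_confusion(video_events, transcript_words, questioning_intonation):
    """Return (confused, repeated_keywords) per the control-response rule:
    a frown in the video, or a questioning intonation together with a
    repeated keyword, indicates the user did not understand.

    The threshold of 3 repetitions is an illustrative assumption.
    """
    repeated = sorted(w for w, n in Counter(transcript_words).items() if n >= 3)
    if "frown" in video_events:
        return True, repeated
    if questioning_intonation and repeated:
        return True, repeated
    return False, repeated

print(detect_confusion(["nod"], ["apr", "apr", "apr", "rate"], True))
```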
The advantageous technical effects of the present invention are as follows:
First, during communication, the video stream and audio stream of the communication partner acquired in real time by the augmented reality (AR) device are received in real time; the acquired video stream or/and audio stream are analyzed in real time to obtain the key information in the video stream and audio stream; the key information is input into the real-time communication guidance model for calculation to obtain real-time communication guidance; and the obtained real-time communication guidance is provided through the AR device to the user communicating with the communication partner, realizing timely and efficient communication guidance.
Second, before communication, the communication master plan is calculated from the planned communication information and its associated information, thereby providing communication guidance to the user in advance.
The present invention can provide real-time online intelligent communication assistance to people under care or in conversation, improving the quality of communication; it can also provide more professional and powerful communication assistance for specialized occupational scenarios (psychological counseling, court hearings, etc.), improving professional communication effectiveness.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (10)
1. A communication assistance method, characterized in that the method includes:
during communication, receiving in real time the video stream and audio stream of a communication partner acquired in real time by an augmented reality (AR) device;
analyzing the acquired video stream or/and audio stream in real time to obtain key information in the video stream and audio stream; inputting the key information into a real-time communication guidance model for calculation to obtain real-time communication guidance; and providing the obtained real-time communication guidance through the AR device to a user communicating with the communication partner.
2. The method according to claim 1, characterized in that analyzing the acquired video stream or/and audio stream in real time to obtain the key information in the video stream and audio stream includes:
performing human-body edge detection and localization on each frame image in the video stream to obtain the image of the clothing or/and accessory region, and performing color analysis to obtain the color of the clothing or/and accessories; or/and
performing face detection on each frame image in the video stream, extracting features from the detected face, and analyzing the communication partner's expression from the extracted features; or/and
performing limb detection and localization on each frame image in the video stream, extracting features from the detected limbs, analyzing the body movement from the extracted features, and determining the meaning of the current body movement according to pre-defined meanings of different body movements; or/and
extracting the topic of the current conversation from the audio stream; or/and
extracting the communication partner's intonation and speech rate from the audio stream, and analyzing the partner's emotional state from that intonation and speech rate;
and inputting the key information into the real-time communication guidance model for calculation includes:
inputting the color of the clothing or/and accessories, or/and the communication partner's expression, or/and the meaning of the current body movement, or/and the topic of the current conversation, or/and the partner's emotional state into the real-time communication guidance model for calculation.
3. The method according to claim 2, characterized in that, after obtaining the key information in the video stream and audio stream, the method further includes:
providing the color of the clothing or/and accessories, or/and the communication partner's expression, or/and the meaning of the current body movement, or/and the topic of the current conversation, or/and the partner's emotional state to the user through the AR device.
4. The method according to any one of claims 1 to 3, characterized in that the method further includes:
before the user communicates with the communication partner, according to a pre-defined range of planned communication information, obtaining from the user and recording each item of planned communication information for this communication that falls within that range, and searching a communication association database for the information associated with each item of planned communication information;
inputting the planned communication information for this communication and the associated information found into a communication master-plan model for calculation, obtaining the communication master plan for this communication, and providing the master plan to the user.
5. The method according to any one of claims 1 to 3, characterized in that the method further includes:
during communication, receiving the user's video stream or/and audio stream acquired by the AR device and analyzing the video stream or/and audio stream; if a frowning action is identified in the video stream or/and a questioning intonation and a repeated keyword are identified in the audio stream, concluding that the user does not understand the current conversation content or/and the keyword, then obtaining an explanation of the current conversation content or/and the keyword, and providing the explanation to the user through the AR device; or/and
during communication, receiving the user's video stream acquired by the AR device, and upon finding through eyeball tracking that the user's gaze is focused on an object, obtaining the relevant information of that object and providing it to the user through the AR device.
6. A communication assistance device, characterized in that the device includes:
an analysis module configured to, during communication, receive in real time the video stream and audio stream of a communication partner acquired in real time by an augmented reality (AR) device, and analyze the acquired video stream or/and audio stream in real time to obtain key information in the video stream and audio stream;
a real-time communication guidance module configured to input the key information into a real-time communication guidance model for calculation, obtain real-time communication guidance, and provide the obtained real-time communication guidance through the AR device to a user communicating with the communication partner.
7. The device according to claim 6, characterized in that the analysis module analyzing the acquired video stream or/and audio stream in real time to obtain the key information in the video stream and audio stream includes:
performing human-body edge detection and localization on each frame image in the video stream to obtain the image of the clothing or/and accessory region, and performing color analysis to obtain the color of the clothing or/and accessories; or/and
performing face detection on each frame image in the video stream, extracting features from the detected face, and analyzing the communication partner's expression from the extracted features; or/and
performing limb detection and localization on each frame image in the video stream, extracting features from the detected limbs, analyzing the body movement from the extracted features, and determining the meaning of the current body movement according to pre-defined meanings of different body movements; or/and
extracting the topic of the current conversation from the audio stream; or/and
extracting the communication partner's intonation and speech rate from the audio stream, and analyzing the partner's emotional state from that intonation and speech rate;
and the real-time communication guidance module inputting the key information into the real-time communication guidance model for calculation includes:
inputting the color of the clothing or/and accessories, or/and the communication partner's expression, or/and the meaning of the current body movement, or/and the topic of the current conversation, or/and the partner's emotional state into the real-time communication guidance model for calculation.
8. The device according to claim 7, characterized in that, after the analysis module obtains the key information in the video stream and audio stream, the device further:
provides the color of the clothing or/and accessories, or/and the communication partner's expression, or/and the meaning of the current body movement, or/and the topic of the current conversation, or/and the partner's emotional state to the user through the AR device.
9. The device according to any one of claims 6 to 8, characterized in that the device further includes a communication master-plan module configured to, before the user communicates with the communication partner: according to a pre-defined range of planned communication information, obtain from the user and record each item of planned communication information for this communication that falls within that range; search a communication association database for the information associated with each item of planned communication information; input the planned communication information for this communication and the associated information found into a communication master-plan model for calculation, obtain the communication master plan for this communication, and provide the master plan to the user.
10. The device according to any one of claims 6 to 8, characterized in that the device further includes a control-response module configured to: during communication, receive the user's video stream or/and audio stream acquired by the AR device and analyze the video stream or/and audio stream; if a frowning action is identified in the video stream or/and a questioning intonation and a repeated keyword are identified in the audio stream, conclude that the user does not understand the current conversation content or/and the keyword, then obtain an explanation of the current conversation content or/and the keyword, and provide the explanation to the user through the AR device; or/and
during communication, receive the user's video stream acquired by the AR device, and upon finding through eyeball tracking that the user's gaze is focused on an object, obtain the relevant information of that object and provide it to the user through the AR device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810017513.1A CN108170278A (en) | 2018-01-09 | 2018-01-09 | Communication assistance method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810017513.1A CN108170278A (en) | 2018-01-09 | 2018-01-09 | Communication assistance method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108170278A true CN108170278A (en) | 2018-06-15 |
Family
ID=62517772
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810017513.1A Pending CN108170278A (en) | 2018-01-09 | 2018-01-09 | Link up householder method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108170278A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103064188A (en) * | 2011-11-30 | 2013-04-24 | 微软公司 | Head-mounted display based education and instruction |
US20140160157A1 (en) * | 2012-12-11 | 2014-06-12 | Adam G. Poulos | People-triggered holographic reminders |
CN102792320B (en) * | 2010-01-18 | 2016-02-24 | 苹果公司 | The individualized vocabulary of digital assistants |
CN105975622A (en) * | 2016-05-28 | 2016-09-28 | 蔡宏铭 | Multi-role intelligent chatting method and system |
CN106104512A (en) * | 2013-09-19 | 2016-11-09 | 西斯摩斯公司 | System and method for actively obtaining social data |
CN106683672A (en) * | 2016-12-21 | 2017-05-17 | 竹间智能科技(上海)有限公司 | Intelligent dialogue method and system based on emotion and semantics |
- 2018-01-09: CN application CN201810017513.1A filed; published as CN108170278A (en); status: Pending
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108958869A (en) * | 2018-07-02 | 2018-12-07 | 京东方科技集团股份有限公司 | A kind of intelligent wearable device and its information cuing method |
CN108986191A (en) * | 2018-07-03 | 2018-12-11 | 百度在线网络技术(北京)有限公司 | Generation method, device and the terminal device of figure action |
CN108986191B (en) * | 2018-07-03 | 2023-06-27 | 百度在线网络技术(北京)有限公司 | Character action generation method and device and terminal equipment |
CN111144287A (en) * | 2019-12-25 | 2020-05-12 | Oppo广东移动通信有限公司 | Audio-visual auxiliary communication method, device and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Park et al. | A metaverse: Taxonomy, components, applications, and open challenges | |
AU2021258005B2 (en) | System and method for augmented and virtual reality | |
US11736756B2 (en) | Producing realistic body movement using body images | |
Specht et al. | Dimensions of mobile augmented reality for learning: a first inventory | |
CN106110627B (en) | Sport and Wushu action correction device and method | |
CN107798932A (en) | A kind of early education training system based on AR technologies | |
Rakkolainen et al. | Technologies for multimodal interaction in extended reality—a scoping review | |
CN108170278A (en) | Communication assistance method and device | |
US20190045270A1 (en) | Intelligent Chatting on Digital Communication Network | |
US10955911B2 (en) | Gazed virtual object identification module, a system for implementing gaze translucency, and a related method | |
CN111414506A (en) | Emotion processing method and device based on artificial intelligence, electronic equipment and storage medium | |
Cowling et al. | Augmenting reality for augmented reality | |
Chen et al. | Virtual, Augmented and Mixed Reality: Interaction, Navigation, Visualization, Embodiment, and Simulation: 10th International Conference, VAMR 2018, Held as Part of HCI International 2018, Las Vegas, NV, USA, July 15-20, 2018, Proceedings, Part I | |
CN114063784A (en) | Simulated virtual XR BOX somatosensory interaction system and method | |
CN110780786B (en) | Method for carrying out personalized intelligent application on electronic map identification | |
Morillas-Espejo et al. | Sign4all: A low-cost application for deaf people communication | |
Moustakas et al. | Using modality replacement to facilitate communication between visually and hearing-impaired people | |
Kerdvibulvech | A novel integrated system of visual communication and touch technology for people with disabilities | |
Mathis | Everyday Life Challenges and Augmented Realities: Exploring Use Cases For, and User Perspectives on, an Augmented Everyday Life | |
CN115951787B (en) | Interaction method of near-eye display device, storage medium and near-eye display device | |
CN114666307B (en) | Conference interaction method, conference interaction device, equipment and storage medium | |
Nijholt | Social augmented reality: A multiperspective survey | |
Bari et al. | An Overview of the Emerging Technology: Sixth Sense Technology: A Review | |
Arasu et al. | A Review on Augmented Reality Technology | |
Kanel | Sixth sense technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180615 |