CN104769645A - Virtual companion - Google Patents


Info

Publication number
CN104769645A
Authority
CN
China
Prior art keywords
virtual
virtual companion
companion
user
device described
Prior art date
Legal status
Pending
Application number
CN201480002468.2A
Other languages
Chinese (zh)
Inventor
王·维克多
邓硕
Current Assignee
GeriJoy Inc
Zhe Rui Co Ltd
Original Assignee
Zhe Rui Co Ltd
Priority date
Filing date
Publication date
Priority claimed from US 13/939,172 (published as US 2014/0125678 A1)
Application filed by Zhe Rui Co Ltd
Publication of CN104769645A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads, the surface being also a display device, e.g. touch screens
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/825 Fostering virtual characters
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, for computer-aided diagnosis, e.g. based on medical expert systems

Abstract

The virtual companion described herein can respond realistically to tactile input and, through the use of a plurality of live human staff, can converse with true intelligence with anyone who interacts with it. The exemplary application is keeping older adults company and improving their mental health through companionship.

Description

Virtual companion
Cross-reference to related applications
Provisional patent application 61/774,591, filed March 8, 2013
Provisional patent application 61/670,154, filed July 11, 2012
The contents of the above applications are incorporated herein by reference, but none of them serves as prior art on which the present application is based.
Background of the invention
In China, the United States, and throughout the world, the elderly population is growing rapidly. Research on medical and health systems shows that society's level of attention to the elderly falls well below what is needed. At present, owing to a shortage of professional geriatric caregivers, the services provided by the elder-care industry cannot meet older adults' needs for physical and mental health.
Research shows that loneliness and isolation from society and from other people are significant contributing factors to dementia, depression, declining physical function, and even death. The severity of this problem grows more apparent by the day: in the United States, one in every eight adults over 65 suffers from Alzheimer's disease (senile dementia). Depression among the elderly is also common: 9.4% of seniors living alone show depressive symptoms, and among those living in nursing homes the rate of depression is as high as 42%.
In addition, research shows that the physical and mental health problems of the elderly also have harmful psychological and physiological effects on their family members. Because nursing labor in the United States is expensive, averaging as much as 21 dollars per hour, many families cannot afford to hire round-the-clock care for an elderly relative; they must either leave the elder at home alone for long periods, or sacrifice their own rest and working hours to provide care. In the United States, the total cost to families of hiring caregivers, or of reducing their own working hours to care for elderly relatives, is as high as 3 trillion dollars per year.
On the other hand, the technology products currently on the market that aim to promote social engagement among the elderly usually require the user to have some knowledge of computers and the internet, along with operating experience. They are therefore unsuitable for the elderly, particularly for seniors unfamiliar with computers or those suffering from dementia.
Summary of the invention
This invention comprises: a virtual companion; a multi-person collaborative workflow for remotely controlling the virtual companion; and a system that connects back-office staff to the virtual companion over a network.
These functions, and the practical advantages of each, are elaborated below.
Brief description of the drawings
Fig. 1: A 2-dimensional example of the virtual companion. The virtual companion is displayed to the user in the form of a pet; the user can interact with it, for example feeding or petting it, through finger taps, drags, and similar gestures.
Fig. 2: An example of the virtual companion's feeding interface: the user feeds a beverage to the virtual companion by touch.
Fig. 3: A 3-dimensional example of the virtual companion's variable appearance: the virtual companion can change its appearance according to the user's preferences. This figure shows a virtual companion displayed in the form of a dog. The foreleg the dog raises is its reaction to being touched by the user.
Fig. 4: A 3-dimensional example of the virtual companion's variable appearance: the virtual companion can change its appearance according to the user's preferences. This figure shows a virtual companion displayed as a cartoon character.
Fig. 5: An example of the virtual companion's picture display function: at the user's request, the virtual companion can fetch pictures from the internet and show them to the user.
Fig. 6: An example login interface for the virtual companion system's back office.
Fig. 7: The back-office multi-window monitoring interface of the virtual companion system, used by back-office staff for remote control. It displays: bandwidth-optimized live video streams sent from the clients (the eight picture windows shown in the figure); audio intensity bars (the strip regions below each picture window); scheduled-task lists (the areas below five of the eight windows); early-warning information (the areas marked with an X below the other three windows); current service period information (the line of text at the upper left); user and virtual companion information (the line of text above each video window); a chat window for back-office staff (the region with a scroll bar at the lower right); prompts showing which users other staff members are currently serving (the hourglass icon in the upper-right window, and the hand icon with accompanying username in the lower-right window); and a logout button (at the extreme upper right of the interface).
Fig. 8: The back-office one-on-one service interface, comprising: current service period information (the line of text at the upper left); the live video stream sent from the client (with a picture of a user shown as an example); the point on the user at which the virtual companion's eyes are currently fixed (the light circular area labeled "Look" between the user's eyes in the picture); the history of interactions (the two-column list to the right of the video stream); a pre-set schedule of exchanges between the user and the virtual companion (the two-column table below the interaction history); the image the virtual companion currently presents on the client (the penguin image below the video stream); a text input box for entering content the virtual companion should speak aloud (the rectangular region below the penguin image); action control buttons for the virtual companion (the 12 buttons flanking the penguin image); a state-setting area for the virtual companion (the three checkable items and three sliders to the left of the buttons on the penguin image's left side); back-office team announcements (the lower-left region of the interface); a chat window for back-office team members (the lower-right region of the interface); a tabbed area aggregating miscellaneous information (the region labeled "tab" with the empty area beneath it, to the left of the video window); and a button for returning to the multi-window monitoring interface (the button labeled "Monitor All" at the upper right of the interface).
Fig. 9: A system schematic based on the Unified Modeling Language (UML), illustrating several possible forms of exchange between front-end users and back-office staff. The functions within the wire frame labeled "Prototype 1" were implemented as illustrative functions to verify the feasibility of the system.
Fig. 10: A UML-based workflow diagram for back-office staff, describing the sequence of activities a staff member follows after logging into the system and operating the various interfaces of Figs. 6-8.
Fig. 11: A UML-based system deployment diagram, illustrating one feasible deployment. In this example, the system connects the front-end virtual companion to the back-end control interface through a central control server. The central control server connects to multiple front-end tablets to control the virtual companions running on them, and also connects to the computers being used by back-office staff. In this example, to reduce network latency, the video and audio streams are transmitted directly between the front-end tablets and the back-office computers using a dedicated communication protocol (for example, the RTMP protocol), without passing through the central control server.
Detailed description of the invention
Virtual companion system front end
The virtual companion front end is displayed to the user in the form of a pet, helping the user establish with it an emotional connection similar to that between people. This kind of emotional connection helps improve the physical and mental health of seniors living alone, and also provides the elderly with a very easy-to-operate means of obtaining information from the internet. Compared with using a desktop computer, a laptop, or a traditional tablet application, an elderly user of the virtual companion only needs to issue instructions in natural language, without having to master any computer skills. The implementation of this invention is detailed below.
Displaying the virtual companion
The virtual companion can be shown in 2-dimensional (Figs. 1, 2) or 3-dimensional (Figs. 3, 4) form on an LCD or OLED display, a projection, or a liquid crystal panel. Its appearance may be a cartoon image (Figs. 1, 2), a lifelike animal (Fig. 3), or something between a realistic image and a cartoon (Fig. 4). The image may be a human or humanoid cartoon character; a realistic animal such as a penguin (Figs. 1, 2) or a dog (Fig. 3); any imaginary figure, such as a dragon, a unicorn, or a ball; or even a combination of several creatures (the image in Fig. 4, for example, is synthesized from the images of a dog, a cat, and a seal). The advantage of an imaginary image is that the user holds no preconceived notion of the virtual companion in the subconscious, and so does not expect it to have, or not have, particular behaviors. Consequently, when the virtual companion cannot perform some action, the user is not disappointed by unmet prior expectations; moreover, the user can create a favorite virtual companion image according to his or her own tastes and imagination.
For a given user, the virtual companion's image may be fixed, randomly varied, or specified according to the user's preferences, either when first starting to use the virtual companion or during use. When the user wishes to choose a virtual companion, a selection can be made from a range of image types, including dog, cat, alien, and so on. After the user selects the type, the image's colors, size, and body-part proportions can be customized through an interactive interface. In another usage scenario, the virtual companion's image is set in advance by a different user, such as a family member or other guardian of the elderly person for whom the virtual companion is intended. Customization of the virtual companion can also include behavior settings unrelated to appearance, defined specifically below. Once the image has been set, the virtual companion can be shown to the user without any background introduction, or appear from the screen in some other intentional way: for example, hatching from an egg, being taken out of a gift box, or being introduced to the user in some other touching manner. Fig. 4 is an example of a virtual companion image: a dog between lifelike and cartoon, not yet customized in any way.
From a technical standpoint, the virtual companion may be presented through a pipeline of 2-dimensional or 3-dimensional image rendering and video processing; or its body parts may be shown as a series of 2-dimensional static images, each independently given displacement animations to simulate the virtual companion's body movements; or the body parts may be drawn as a series of vector graphics, with motion simulated by mathematical vector methods. In a 3-dimensional presentation, the virtual companion's body is defined by a set of points, lines, and surfaces together with material information, and is textured from a 2-dimensional point map. The virtual companion may also be a physical robot, with actuators controlling the robot's movements and built-in tactile sensors.
The definition of the virtual companion's appearance includes, in addition to the static image, a definition of its actions. An action definition may be a series of keyframe image data capturing the virtual companion in different body poses, or 3-dimensional vector weights applied to a 3-dimensional model. In one exemplary embodiment (shown in Fig. 4), the virtual companion is a 3-dimensional model built in 3D modeling software, plus a set of action definitions. First the skeleton of the virtual companion's image is defined in the 3D modeling software; the structure of this skeleton should be modeled on a real animal skeleton, so that the virtual companion's actions can be defined from the way the skeleton moves during a real animal's motion. In addition, extra virtual bones need to be added at the virtual companion's face, so that its facial expressions can subsequently be defined. Once the skeleton is defined, action keyframes can be defined by changing the placement of individual bones. The virtual companion's actions include a default idle posture and other postures, such as crouching, nodding, lifting the chin, raising the left or right foreleg, sad and happy facial expressions, breathing, blinking, looking around, wagging the tail, barking, and mouth movements while speaking. All model appearance and action information can be exported in the general FBX file format. When the virtual companion takes the form of a physical robot, its actions can be defined by predefining a series of actuator motor states.
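As a rough illustration of the kind of data such an action definition might carry, the sketch below (Python, with all names and numbers hypothetical; a real pipeline would export this information from the modeling tool as FBX) stores each posture as per-bone displacement and rotation offsets relative to the idle keyframe:

```python
from dataclasses import dataclass, field

@dataclass
class BoneOffset:
    """Displacement (x, y, z) and rotation (degrees about each axis)
    of one bone, relative to the idle keyframe."""
    translate: tuple = (0.0, 0.0, 0.0)
    rotate: tuple = (0.0, 0.0, 0.0)

@dataclass
class Posture:
    """An action keyframe: bone name -> offset from the idle posture."""
    name: str
    offsets: dict = field(default_factory=dict)

# A few of the postures named above, sketched with invented values.
idle = Posture("idle")  # all offsets zero by definition
nod = Posture("nod", {"neck": BoneOffset(rotate=(25.0, 0.0, 0.0))})
lift_left_foreleg = Posture(
    "lift_left_foreleg",
    {"left_foreleg": BoneOffset(translate=(0.0, 0.4, 0.0),
                                rotate=(-40.0, 0.0, 0.0))},
)
```

Because each posture is stored as a difference from the idle pose, postures defined this way can later be summed and weighted, which is what the animation blending described below requires.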
While the user is using the virtual companion, the companion's model settings (including skeleton size, skin texture, and so on) can be adjusted to show the virtual companion growing up like a living animal. The multi-touch interaction behaviors described below can also change over the course of this simulated growth, and the virtual companion's actions defined above can likewise change as it grows.
The virtual companion's response to multi-touch input
One of the key points of this invention is that the user can touch the virtual companion. After the virtual companion receives a touch signal, it performs a series of lifelike motion responses. The example below assumes the virtual companion software runs on a tablet that accepts multi-touch panel input (an iPad or Android tablet); when the software runs on other platforms, similar operation can be achieved from other types of input. For example, instead of a touch screen, the user can simulate different touch states by moving a computer mouse; clicking the left, middle, or right buttons together or separately; or clicking and dragging. When the virtual companion takes the form of a physical robot, the robot can obtain input through its built-in touch sensors. In the most basic example embodiment, the virtual pet is shown on the tablet's LCD display. Such touch displays learn which region is currently being touched by detecting capacitance or resistance changes in different regions. One innovation of this invention is that, when the display detects touches at multiple points on the screen, the virtual companion performs an independent action response for each touched location (for example the head, the left foreleg, and so on), and the 3D software merges all of these actions, presenting the user with the visual effect of smooth, simultaneous motion at multiple locations. This separate-then-merge treatment of multi-touch ultimately gives the user a lifelike experience, similar to interacting with a real pet. Furthermore, by analyzing touch intensity, track length, and similar information, the virtual pet can distinguish different types of touch (such as patting, poking, and hitting) and react differently to each.
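The separate-then-merge treatment described above can be sketched as follows (a minimal illustration only: the hit-test mapping, the posture names, and the screen layout are all invented for the example; each touched part reacts independently, and the per-part reactions are then collected into one frame for merging):

```python
def hit_test(point):
    """Hypothetical mapping from a 2-D touch point to a body part.
    Here the left half of a 100-unit-wide screen is the left foreleg
    and the right half is the head."""
    x, y = point
    return "left_foreleg" if x < 50 else "head"

def react(part):
    """Independent per-part reaction (placeholder posture names)."""
    return {"left_foreleg": "lift_left_foreleg",
            "head": "tilt_head"}.get(part, "idle")

def respond_to_touches(touch_points):
    """Separate: resolve each touch to a part and react independently.
    Merge: collect all reactions into a single frame for blending."""
    reactions = {}
    for point in touch_points:
        part = hit_test(point)
        reactions[part] = react(part)
    return reactions

# Two simultaneous touches yield two independent, merged reactions.
frame = respond_to_touches([(20, 30), (80, 10)])
```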
The first step in implementing the virtual companion's reaction to touch is monitoring touch events. When the virtual companion runs in a game engine or other software runtime environment, the touch sensor state is queried in the program's main loop. Alternatively, receipt of touch signals can be implemented through triggers, callbacks, or similar mechanisms.
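The queried-in-the-main-loop approach might look like the following sketch (the touch source here is a stand-in for whatever the platform's touch API actually provides; all names are hypothetical):

```python
class FakeTouchSource:
    """Stand-in for a platform touch API, replaying scripted frames."""
    def __init__(self, frames):
        self.frames = iter(frames)

    def poll(self):
        """Return the list of active touch points this frame."""
        return next(self.frames, [])

def run_main_loop(source, n_frames):
    """Per-frame polling, as in a game engine's update loop."""
    log = []
    for _ in range(n_frames):
        touches = source.poll()   # query the touch sensor state
        log.append(len(touches))  # hand off to reaction logic here
    return log

# Three frames: one touch, then two simultaneous touches, then none.
counts = run_main_loop(FakeTouchSource([[(1, 2)], [(1, 2), (3, 4)], []]), 3)
```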
The second step in implementing touch interaction is determining which part of the virtual companion's body corresponds to a touch location. On a tablet or other 2-dimensional touch device, each time a touch occurs the device reports to the software the coordinates of the touched point relative to the whole screen. When the virtual companion is presented as a 2-dimensional image, the software can compare the coordinates of the touched position with the sets of points contained in the geometric envelope curves representing each body part of the virtual companion, and thereby learn which body part was touched. When the virtual companion is presented in 3-dimensional graphics, a 3-dimensional geometric envelope volume must be built around each bone of the 3D model. For example, for a model's leg bone, a capsule-shaped envelope (a cylinder capped with domes at both ends) can be built in the modeling software. The envelope's length is set to ensure the whole leg bone is completely enclosed inside it, and the envelope is defined so that it cannot move relative to the bone in the model; that is, the envelope follows the bone to the corresponding position whenever the bone moves. Different body parts can be given differently shaped envelopes; for a short, thick body part, for example, the envelope can be defined as a sphere. In theory, an envelope's geometry could coincide exactly with the visible geometry of the body part defined in the model. However, because body-part geometry is defined with aesthetics in mind and contains a great number of points, lines, and surfaces, making the envelopes identical to it would burden the subsequent software with a heavy computational load. A further benefit of defining envelopes as simple shapes whose total volume is slightly larger than the actual body part is that, when the position reported by a touch signal differs somewhat from the true value, the program can still judge that the user touched a particular body part of the virtual companion. To tolerate error, an alternative is to define an envelope similar in shape to the body part but somewhat larger in size, though this solution still suffers from the heavy computational load. When defining envelopes, one can define an envelope for every bone of the 3D model, or only for the bones of the body parts the program allows to move. To better receive multi-touch signals, multiple envelopes can also be defined for a single bone, each reacting to one of the touch points; thus for a virtual companion, touchable parts and body parts stand in a many-to-one rather than a one-to-one relation. If the virtual companion's runtime environment does not support multiple envelopes per bone, multi-touch handling can instead be achieved by creating multiple virtual bones, each carrying one envelope. To keep the virtual companion software running smoothly, all envelopes are preferably set up in advance during modeling, so that at run time the 3D engine presenting the virtual companion only needs to record each envelope's rotation and displacement frame by frame. When the virtual companion is presented in three dimensions, the 2-dimensional touch coordinates are projected onto the three-dimensional virtual companion, and the body part whose envelope is the first to intersect the projection ray is the part the user wanted to touch. In addition, envelopes can be defined for other objects on the screen, along with interaction relations between those objects and the virtual companion; when the user touches another object, the body parts of the virtual companion that have an interaction relation with it also react accordingly. Beyond being mapped to a body part of the pet, touch input can be further subdivided into "touch began" (no touch signal in the previous frame), "touch ended" (touch information in the previous frame but not the current one), and "touch continuing" (touch information present in both the current and the previous frame). In the "touch continuing" state, the virtual companion software can record the touch signal's 2-dimensional coordinates in the previous frame, the corresponding projected 3-dimensional body position, and so on.
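The projection-and-first-intersection test for the 3-dimensional case might be sketched like this (spherical envelopes only, for brevity; a real implementation would also use capsule envelopes, and all coordinates here are invented):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Distance along the ray to a spherical envelope, or None.
    Standard quadratic ray-sphere intersection."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

def pick_body_part(origin, direction, envelopes):
    """The envelope nearest along the ray wins, i.e. the 'first'
    envelope the projection ray intersects."""
    best = None
    for part, (center, radius) in envelopes.items():
        t = ray_hits_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[1]):
            best = (part, t)
    return best[0] if best else None

# The head envelope sits in front of the body along the view axis.
envelopes = {"head": ((0, 0, 5), 1.0), "body": ((0, 0, 9), 2.0)}
# Ray cast straight into the screen from the 2-D touch point (0, 0).
touched = pick_body_part((0, 0, 0), (0, 0, 1), envelopes)
```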
To cache and analyze short-term touch states, further characteristics of a touch can be extracted. In the virtual companion software implementation, the envelope of each body part defines a "touch buffer" used to save multiple frames of the user's touch information for that body part. In one exemplary embodiment, the saved information in each touch buffer can be analyzed to compute three quantitative indices: "persistence", "intermittence count", and "displacement". The persistence index counts up from 0: if a body part's touch buffer received a touch signal in the previous frame and receives another in the current frame, the persistence index increments by one; if this condition is not met, the persistence index decrements by 1 each frame until it reaches 0. Thus, after the user's touch on a body part ends, the persistence index eventually falls back to 0, and the persistence index is proportional to how long the user has been continuously touching that body part. The intermittence count also starts from 0: if there is no touch signal in the current frame but the previous frame contained a touch-end signal, the intermittence count increases by 1; if this condition is not met, the intermittence count decreases by a fixed value x (x < 1) each frame until it reaches 0. For a given x, when the user taps (repeatedly touches and releases) the screen above a certain frequency, the intermittence count keeps growing; the intermittence count therefore measures how many times the user has tapped a particular body part of the virtual companion in quick succession. In a practical software implementation, a fixed upper limit should be set on the intermittence count. Displacement can be defined either as a multi-dimensional vector recording the length of movement along each dimension, or as a scalar recording the track length of a sustained touch. In both the vector and the scalar case it can be computed as follows: displacement counts up from 0; if a touch exists in both the current and the previous frame, then in the current frame the distance between the currently touched coordinate point and the previous frame's touched coordinate point is computed and accumulated. After the touch ends, the displacement decreases frame by frame until it reaches 0, either by a fixed value or by a value related to the current persistence index, for example a multiple of the persistence index. Displacement thus describes the length of the touch track while the user continuously strokes a body part of the virtual companion. In summary, persistence, intermittence count, and displacement quantitatively describe touch events spanning multiple frames.
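One way to realize the per-frame update rules just described is sketched below (the decrement constants are illustrative, and the touch-end bookkeeping is simplified to exactly what the three rules need):

```python
class TouchBuffer:
    """Per-body-part touch state across frames, yielding the three
    quantitative indices: persistence, intermittence, displacement."""
    def __init__(self, x=0.5, cap=10):
        self.x = x                # per-frame intermittence decay, x < 1
        self.cap = cap            # fixed upper limit on intermittence
        self.persistence = 0.0
        self.intermittence = 0.0
        self.displacement = 0.0
        self.prev_point = None    # touch point in the previous frame
        self.prev_ended = False   # previous frame contained a touch-end

    def update(self, point):
        """Call once per frame; `point` is the (x, y) touch on this
        body part, or None when the part is untouched this frame."""
        touched_prev = self.prev_point is not None
        if point is not None and touched_prev:
            # touched in the previous AND current frame
            self.persistence += 1
            dx = point[0] - self.prev_point[0]
            dy = point[1] - self.prev_point[1]
            self.displacement += (dx * dx + dy * dy) ** 0.5
        else:
            self.persistence = max(0.0, self.persistence - 1)
            self.displacement = max(0.0, self.displacement - 1)
        if point is None and self.prev_ended:
            # a tap (touch, then release) just completed
            self.intermittence = min(self.cap, self.intermittence + 1)
        else:
            self.intermittence = max(0.0, self.intermittence - self.x)
        self.prev_ended = touched_prev and point is None
        self.prev_point = point
```

With this scheme, a sustained stroke drives persistence and displacement up while intermittence stays near zero, and rapid tapping drives intermittence up instead, matching the behavior described above.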
Quantitative touch-state indices other than persistence, intermittence count, and displacement can also be extracted by analyzing the low-level touch input signal, and the computation of these three indices is not limited to the methods described above; they can also be calculated by other methods. For example, each index could grow and decay exponentially over time. Or, when an index increases, each increment could shrink as the current index value grows, so that no upper limit needs to be set for each index, because growth becomes slower and slower as the value rises. To simplify computation, multi-touch can also be reduced to single-touch: for example, when two fingers touch the screen, the program receives only the touch information of the first finger to make contact and ignores the second. In addition, random noise can be added to the index values in each touch buffer, or to each index's increments. Adding a certain amount of random noise to the touch buffers lets the virtual companion spontaneously make twitches or other random movements; and when multiple body parts react to touches simultaneously, added noise also makes the motion transitions between body parts more natural, avoiding jumps.
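The alternatives mentioned here, exponential decay, increments that shrink as the index grows (making an explicit cap unnecessary), and additive noise, might be sketched as follows (the rates and amplitudes are invented for illustration):

```python
import random

def exp_decay(value, rate=0.8):
    """Exponential decrease: the index loses a fixed fraction per frame."""
    return value * rate

def saturating_increment(value, step=1.0, scale=5.0):
    """Each increment shrinks as the current value grows, so the index
    approaches a ceiling without any hard-coded upper limit."""
    return value + step * scale / (scale + value)

def noisy(value, amplitude=0.05, rng=random):
    """Add a little random noise, producing spontaneous twitches."""
    return max(0.0, value + rng.uniform(-amplitude, amplitude))

v = 0.0
for _ in range(3):
    v = saturating_increment(v)  # 1.0, then ~1.833, then ~2.565
```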
Multi-touch information is converted into dynamic, lifelike motion responses of the virtual companion through animation blending, according to the quantitative index values computed from each touch buffer in each iteration of the program's main loop. Animation blending refers generally to a family of existing, mature techniques for merging multiple independent animations of a 3-dimensional animated character into one continuous set of motion. For example, the virtual companion's bowing action corresponds to a set of displacement and rotation animations of the neck bones over a period of time, while its head-tilt-right action corresponds to another set of neck-bone displacement and rotation animations. Blending these two independent actions yields a mixed action in which the virtual companion bows while tilting its head to the right, with the amplitudes of the bone displacements and rotations being the mean of those in the two independent actions. Another implementation of animation blending records, when defining a single action, each bone's relative displacement and rotation rather than absolute values; blending then sums the multiple relative quantities, so the blended motion amplitude is larger than under the former implementation. In both implementations, different weights can be assigned to each blended independent action, so that the final mixed action is visually biased toward the actions with larger weights.
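A weighted relative-quantity blend of independent actions, as just described, might look like this sketch (rotations are kept as simple per-axis offsets; a production engine would typically blend rotations as quaternions):

```python
def blend_relative(actions, weights):
    """Sum each bone's relative offsets across actions, scaled by the
    action's weight (the relative-quantity blending described above)."""
    out = {}
    for action, w in zip(actions, weights):
        for bone, offset in action.items():
            acc = out.setdefault(bone, [0.0, 0.0, 0.0])
            for i in range(3):
                acc[i] += w * offset[i]
    return out

# "Bow" rotates the neck downward; "tilt right" rotates it about
# another axis. Bone names and angles are invented for the example.
bow = {"neck": (30.0, 0.0, 0.0)}
tilt_right = {"neck": (0.0, 0.0, 20.0)}
pose = blend_relative([bow, tilt_right], [1.0, 0.5])
```

Because the offsets are relative, blending the two actions at full weight produces both movements at once; lowering one weight visually biases the result toward the other action.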
An innovation of the present invention is using the multi-touch signals as the input control signals for animation blending. In one exemplary embodiment, the software predefines a series of postures for the virtual companion (the idle posture in the default state being one of them). Each posture definition comprises two keyframes: the first keyframe is the virtual companion in the idle posture, and the second is the virtual companion in the posture currently being defined. The differences in displacement and rotation of each bone of the virtual companion's model between these two keyframes can be used for the relative-quantity animation blending. (For blending based on absolute quantities, only one keyframe is needed to define each bone's state in the posture.) Each defined posture corresponds to the virtual companion's reaction state to a sustained single-point touch. For example, a raised-left-foreleg posture corresponds to the virtual companion's reaction when its left foreleg is being continuously touched; likewise, a head-tilted-right posture corresponds to its reaction when the left side of its face is being continuously touched. Both of these examples are reactions to sustained touches. Further postures can be defined for the virtual companion's reactions to intermittent touches; for example, a head-drawn-back action corresponds to the virtual companion's reaction to intermittent touching of its nose. Similarly, a series of postures for displacement touches can be defined.
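Driving the blend weights from the touch indices themselves, persistence for sustained-touch postures and intermittence for tap postures, might be sketched as follows (the posture names and index values are invented; the resulting weight map would feed the blending step described above):

```python
def pose_weights(indices, sustained_poses, intermittent_poses):
    """Map each body part's touch indices to posture blend weights.
    `indices` is part -> (persistence, intermittence); untouched parts
    (both indices zero) contribute nothing to the blend."""
    weights = {}
    for part, (persistence, intermittence) in indices.items():
        if persistence > 0 and part in sustained_poses:
            weights[sustained_poses[part]] = persistence
        if intermittence > 0 and part in intermittent_poses:
            weights[intermittent_poses[part]] = intermittence
    return weights

sustained = {"left_foreleg": "lift_left_foreleg"}
intermittent = {"left_foreleg": "retract_left_foreleg"}
# Rapid tapping: low persistence, high intermittence, so the
# retract posture dominates the blend.
w = pose_weights({"left_foreleg": (0.5, 4.0)}, sustained, intermittent)
```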
While the program runs, all postures corresponding to persistent touches are blended with weights, the weight of each posture being the persistence index value of the touch corresponding to that posture. That is, when a body part has not been touched for some time, its weight is 0, and the blended action does not include the reaction posture predefined for touching that body part. If a series of persistent-touch reaction postures and a reasonable persistence-index update behavior are predefined in the program, then simply processing touch persistence presents the user with a lifelike visual effect similar to a real pet reacting to being stroked. For example, when the user slides one finger continuously across several body parts of the virtual companion, the reaction posture for each body part is triggered in turn, and when the finger leaves a body part its reaction posture's weight decays over time, so the user sees a coherent series of reactions across the body parts. On this basis, a series of intermittent-touch reaction postures can be defined to present lifelike emotional changes. For example, a retract-left-foreleg posture can be defined for the virtual companion, corresponding to intermittent touches of its left foreleg. The weight of this posture is proportional to the intermittence index of those touches, so that when the user taps the left foreleg rapidly and repeatedly, the intermittence index rises, the weight of the retraction posture in the overall blended animation increases, and the virtual companion retracts its left foreleg. When the user taps the left foreleg slowly, the intermittence index is low and the persistence index is high, so the weight of the raised-left-foreleg posture dominates and the virtual companion raises its left foreleg instead. In another way of handling intermittent touches, the intermittent-touch indices for all body parts of the virtual companion over a period of time are not kept separate but summed; the sum corresponds to the number of times the user has randomly poked the virtual companion. A "sad" facial-expression posture is defined whose weight rises as this poke count grows. In addition, other mood postures can be defined for intermittent touches of particular body parts; for example, a "happy" facial-expression posture can be defined for intermittent touches of the head, and when the head receives an intermittent touch it is not counted toward the poke count. Similarly, expression postures can be defined for displacement touches; for example, a virtual companion with the image of a pet dog can be given a happy expression posture corresponding to a displacement touch under its chin. Through the definition of such action and expression postures and their animation blending, the virtual companion can present the user with a lifelike visual effect similar to interacting with a real pet. Without being told in advance, the user can also explore the virtual companion's different touch reactions by trying various possible touch locations and manners.
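The persistence and intermittence indices that drive the posture weights could be tracked per body part roughly as follows; this is a minimal sketch, and all decay rates and increments are illustrative assumptions rather than values from the disclosure:

```python
class TouchIndex:
    """Persistence and intermittence indices for one body part.

    The persistence index grows while a touch is held and decays when it
    ends; the intermittence index grows with rapid repeated taps and
    decays continuously. Each index can serve directly as the blend
    weight of the corresponding reaction posture.
    """
    def __init__(self):
        self.persistence = 0.0
        self.intermittence = 0.0

    def update(self, touching, dt):
        """Advance by dt seconds; `touching` is the current contact state."""
        if touching:
            self.persistence = min(1.0, self.persistence + dt)
        else:
            self.persistence = max(0.0, self.persistence - 2.0 * dt)
        self.intermittence = max(0.0, self.intermittence - 0.25 * dt)

    def tap(self):
        """Register one brief tap on this body part."""
        self.intermittence = min(1.0, self.intermittence + 0.25)
```

Fast tapping drives the intermittence index (and the retract posture's weight) up, while a held touch drives the persistence index (and the raise posture's weight) up, matching the two behaviors described above.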
In addition to controlling the action behavior of the virtual companion by multi-touch as described above, the following optional approaches can also be adopted. For example, the weight of a posture can be determined jointly by the persistence, intermittence, and displacement indices of the corresponding touch. Random variables can be added to the touch reactions to make them less deterministic and more natural. The touch reactions associated with different body parts can be randomly exchanged over time; or the weights of the different touch reactions of the same body part can vary over time according to a predefined stochastic process, or vary with the current mood information of the virtual companion. More complex touch reactions can also be defined. For example, a bone position of the virtual companion can follow the location of a displacement touch, producing effects such as the virtual companion's palm tracking the user's fingertip, or something resembling the virtual companion shaking hands with the user. Besides static postures, animations with multiple key frames can also be defined as reactions to touch; for example, a continuous rolling action of the virtual companion's head can be defined as the reaction to the user touching its head. When performing animation blending, specific constraints on the blend computation are needed to keep the blended action lifelike. For example, when the blended animation includes the virtual pet's raised-left-foreleg posture, the weight of the raised-right-foreleg posture must be kept at 0, ensuring that the right foreleg remains on the ground and the result stays visually plausible. The program also needs to limit the amplitude of the blended action so that the 3D model of the virtual pet displays correctly throughout the motion. When the virtual companion runs on a tablet or another device with an acceleration sensor, the readings of the acceleration sensor can be used to judge the device's current orientation, and the direction of gravity can serve as one of the input control parameters for the virtual companion's movement responses. In addition, the camera on the device can capture images of the user for gesture analysis, making gesture input another way of interacting with the virtual companion; and the microphone on the device can pick up environmental sounds and use them as inputs controlling the virtual companion's behavior, for example displaying a back-and-forth flapping action of the virtual companion's ears when the sound is loud.
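Two of the sensor-driven inputs just mentioned could be derived as in the following sketch. The accelerometer axis convention (x right, y up the screen, z out of the screen) and the loudness threshold are assumptions; real devices and the actual disclosure may differ:

```python
import math

def device_tilt_degrees(ax, ay, az):
    """Angle between the screen plane and horizontal, from raw
    accelerometer readings (gravity direction as a control input)."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0.0:
        return 0.0
    # When the device lies flat, gravity is entirely along z.
    return math.degrees(math.acos(max(-1.0, min(1.0, abs(az) / g))))

def ear_flap_weight(mic_level, threshold=0.6):
    """Map microphone loudness in [0, 1] above a threshold to an
    ear-flapping animation weight in [0, 1]."""
    return max(0.0, min(1.0, (mic_level - threshold) / (1.0 - threshold)))
```

A loud environment pushes the ear-flap weight toward 1, which the blending stage described earlier can mix into the current action.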
In addition to the behaviors and postures described above that are defined as reactions to touch, the virtual companion also has a series of predefined autonomous actions, such as blinking, breathing, tail wagging, barking, jumping, and speaking. These actions can be performed automatically when no touch event is occurring.
When the virtual companion is realized as a physical robot, touch input can be captured by the robot's touch sensors, and the action blending algorithm can then be used to control the robot's motors.
Emotion control of the virtual companion
In addition to movement responses to the user's touch, the virtual companion can also produce movement responses driven by an internal mood model defined in software. In an exemplary embodiment, the mood model is based on the PAD model proposed by Albert Mehrabian and James A. Russell. PAD is the abbreviation of Pleasure-Arousal-Dominance; the model uses the three quantified dimensions of pleasure, arousal, and dominance to represent all emotion types. The PAD model has previously been applied in research on generating expressions for virtual characters.
In an exemplary embodiment of the present invention, the virtual companion's main program maintains two sets of PAD values: a long-term PAD and a short-term PAD. The long-term PAD corresponds to the virtual companion's long-term character traits, while the short-term PAD represents its current, temporary emotional state. The initial values of the short-term and long-term PAD can each be 1) set to neutral defaults, 2) selected by the user, or 3) selected by the user's guardian. At any given moment the short-term PAD may differ from the long-term PAD, but as a general trend the short-term PAD converges toward the long-term PAD; the convergence can follow a functional relationship of any complexity, or simply be linear in the current difference between the short-term and long-term PAD values. Similarly, the long-term PAD also tends to converge toward the short-term PAD, though less markedly; this convergence allows the virtual companion's character traits to change as it continually interacts with the user. Combined with the correspondence between multi-touch input and actions and emotional reactions described earlier, the short-term PAD value of the virtual companion can be changed in the following ways:
When the number of pokes received by the virtual companion exceeds a certain threshold, the Pleasure value of the PAD decreases.
When the user intermittently touches a body part for which a "happy" expression reaction is defined for the virtual companion, Pleasure rises.
When the user applies a displacement or persistent touch to a body part for which a "happy" expression reaction is defined for the virtual companion, Pleasure rises.
Any touch input from the user can raise the Arousal value of the PAD and lower the Dominance value; the index increments caused by touches of different body parts can differ.
Particular body parts of the virtual companion can be given PAD influences different from those described above. For example, Pleasure can be reduced when the virtual companion's eyes receive a touch input, or raised markedly when its chin is touched.
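The per-body-part PAD update rules above could be tabulated as in this sketch; the specific delta magnitudes and part names are illustrative assumptions, not values from the disclosure:

```python
# PAD deltas (Pleasure, Arousal, Dominance) applied when a body part
# is touched. Any touch raises Arousal and lowers Dominance; special
# parts additionally move Pleasure up or down.
TOUCH_PAD_DELTAS = {
    "head": (0.10, 0.05, -0.02),   # "happy" part: Pleasure rises
    "chin": (0.20, 0.05, -0.02),   # markedly raises Pleasure
    "eyes": (-0.10, 0.05, -0.02),  # sensitive part: Pleasure falls
}
DEFAULT_DELTA = (0.0, 0.05, -0.02)

def apply_touch(pad, part):
    """Return the short-term PAD triple after a touch on `part`,
    with each dimension clamped to [-1, 1]."""
    delta = TOUCH_PAD_DELTAS.get(part, DEFAULT_DELTA)
    return tuple(max(-1.0, min(1.0, p + d)) for p, d in zip(pad, delta))
```

A poke-count threshold rule reducing Pleasure, as described above, would sit alongside this table in the main loop.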
In addition to changing when user input is received, the long-term PAD values can also change gradually over time. For example, Arousal can slowly decrease with the passage of time: at night, when there is no touch input from the user, Arousal does not increase through touch reactions, so it reaches its minimum in the early morning, consistent with the emotional rhythms of real living beings.
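The mutual convergence of the two PAD sets described above admits many functional forms; a minimal linear sketch, with illustrative rate constants chosen by assumption, might be:

```python
def step_pad(short, long_, dt, k_short=0.5, k_long=0.01):
    """One mood-model update step over dt seconds.

    The short-term PAD relaxes quickly toward the long-term PAD
    (temporary moods fade), while the long-term PAD drifts slowly
    toward the short-term PAD (character traits change with sustained
    interaction). Each argument is a (P, A, D) triple in [-1, 1].
    """
    new_short = tuple(s + k_short * dt * (l - s) for s, l in zip(short, long_))
    new_long = tuple(l + k_long * dt * (s - l) for s, l in zip(short, long_))
    return new_short, new_long

short_pad = (0.8, -0.2, 0.0)   # currently happy and slightly calm
long_pad = (0.0, 0.0, 0.0)     # neutral underlying personality
short_pad, long_pad = step_pad(short_pad, long_pad, dt=1.0)
```

The much smaller `k_long` realizes the "less marked" long-term trend the text describes; a nonlinear convergence function could be substituted without changing the loop structure.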
When the virtual companion's program includes a speech-analysis function, the short-term PAD values can also change with the intonation of the user's speech. For example, when the user speaks to the virtual companion in a harsh or authoritative tone, the Pleasure and Dominance values of the short-term PAD decrease. In addition, the user's current respiration rate, facial emotion, and so on can be analyzed from the voice and video inputs to obtain the user's current arousal information, and the virtual companion's Arousal value can be adjusted to correspond to it.
When the short-term PAD values of the virtual companion change, they can affect its behavior as follows:
When the Pleasure value rises above or falls below a certain level, the weight of the corresponding "happy" or "unhappy" facial-expression posture in all blended actions increases, so that the virtual companion visibly shows a happy or unhappy expression while performing other behaviors. Similarly, when the Arousal and Dominance values change, or when Pleasure, Arousal, and Dominance together move beyond certain limits, the weights of other expressions can be raised accordingly. For example, when Pleasure is below a certain threshold, Arousal is above a certain threshold, and Dominance is at a high level, the weight of the "angry" expression can be increased.
The Arousal value can affect the speed of the virtual companion's breathing motion: the higher the Arousal, the faster the breathing. The amplitude of the breathing motion can also increase with the Arousal value, as can the amplitude of the virtual companion's tail-wagging and other actions.
When the Pleasure, Arousal, and Dominance values increase, the weights of the corresponding actions of the virtual companion in the action blend increase accordingly. For example, for a virtual companion with the image of a pet dog, the tail hangs down in the default idle posture; but when the virtual companion's Pleasure, Arousal, and Dominance values increase, the weight of the "tail raised" posture grows, and after blending with the idle posture the pet dog presents an upturned tail.
As the virtual companion's software runs, the virtual companion's age also increases, accompanied by changes in the long-term PAD values; for example, the long-term Arousal value can decline year by year. Conversely, the long-term PAD values can also affect the rate at which the virtual companion ages. For example, when the virtual companion's long-term Pleasure value is high, its aging can slow, or its appearance can look younger than that of a virtual companion with a lower Pleasure value, for example by having more vivid fur colors.
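The mappings from short-term PAD to expression weights and breathing rate described above could be sketched as follows; the thresholds, piecewise rules, and breathing-rate range are illustrative assumptions:

```python
def expression_weights(pad):
    """Map a short-term (P, A, D) triple to facial-expression blend
    weights. Pleasure above neutral drives 'happy', below neutral
    drives 'unhappy'; a low-Pleasure, high-Arousal, high-Dominance
    combination enables 'angry'."""
    p, a, d = pad
    return {
        "happy": max(0.0, p),
        "unhappy": max(0.0, -p),
        "angry": 1.0 if (p < -0.3 and a > 0.5 and d > 0.5) else 0.0,
    }

def breaths_per_minute(arousal, base=12.0, span=8.0):
    """Breathing rate grows linearly with Arousal in [-1, 1]."""
    return base + span * (arousal + 1.0) / 2.0
```

The returned expression weights feed the same blending stage as the touch-reaction postures, so emotional expressions overlay whatever action the virtual companion is currently performing.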
Care needs of the virtual companion
The virtual companion can present virtual physiological needs to the user, with the intensity of each need increasing over time. The virtual physiological needs include the need for food, for drinking water, for excreting waste, for bathing, and for entertainment. In addition, the spontaneous behaviors mentioned above, such as sleeping, breathing, and blinking, can also be defined as needs. The intensity of each need can be defined as a numerical variable in the main program whose value grows continuously in the main loop; or a timer can be created for each need that increases its value by a certain amount at fixed intervals. The increment can differ at different times of day, and can also be influenced by the current short-term PAD values.
The intensity of some needs can be presented intuitively to the user by changing the weights of the corresponding postures and actions of the virtual companion. For example, as the need for sleep gradually strengthens, the weight of the virtual companion's drooping-eyelid action rises continually. The intensity of a need can also affect the short-term PAD values.
Each need corresponds to a variable threshold. The threshold can change with the time of day, with the current short-term PAD values, or according to a predefined stochastic process. When the intensity of a need reaches its current threshold, the virtual companion presents the corresponding behavior. For a simple need such as blinking, when the intensity exceeds the threshold the virtual companion performs a blink and the need's intensity is reset to 0. Similarly, when the breathing need exceeds its threshold, the virtual companion performs a breathing action and sets the breathing-need value to 0. When the sleep need exceeds its threshold, the virtual companion enters a sustained sleep state, from which it can be woken by external sounds or touch signals.
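The intensity-timer-and-threshold mechanism for a simple need could be sketched as below; the growth rate and threshold values are illustrative assumptions:

```python
class Need:
    """One physiological need whose intensity grows over time and
    triggers a behavior when it crosses a threshold.

    For simple needs (blinking, breathing) the intensity resets to 0
    once the behavior fires, as described in the text; the threshold
    could additionally vary with time of day or short-term PAD.
    """
    def __init__(self, name, rate, threshold):
        self.name = name
        self.intensity = 0.0
        self.rate = rate            # intensity gained per second
        self.threshold = threshold

    def tick(self, dt):
        """Advance by dt seconds; return the triggered behavior name,
        or None if the threshold has not been reached."""
        self.intensity += self.rate * dt
        if self.intensity >= self.threshold:
            self.intensity = 0.0
            return self.name
        return None

blink = Need("blink", rate=0.25, threshold=1.0)
```

More complex needs (food, water, entertainment) would keep their intensity until the user performs the satisfying interaction instead of resetting automatically.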
In an exemplary embodiment, other, more complex needs must be satisfied through interaction with the user, thereby helping to establish between the virtual companion and the user an emotional connection of needing and being needed, similar to the relationship between a gardener and plants, or between an owner and a pet. Research shows that being needed has a positive effect in promoting mental health for users, particularly the elderly. When a complex need exceeds a certain threshold, the virtual companion can prompt the user to carry out the corresponding interaction by exhibiting specific behaviors; for example, the need for entertainment can be shown by the virtual companion jumping continuously. The virtual companion can also prompt the user with specific sounds; for example, the need for food can correspond to the virtual companion emitting a stomach-rumbling sound.
Specific exemplary interactive behaviors corresponding to the different care needs are listed below:
When the virtual companion's need for food exceeds its threshold, it can prompt the user by emitting a stomach-rumbling sound; meanwhile, the virtual companion can show an unhappy or famished expression, and a container holding food can be displayed in the program interface. Alternatively, the food container can always be present in the interface, and when the virtual companion has a food need the container changes from closed to partially open, or flashes, to attract the user's attention. When the user touches the container, it opens fully and displays a series of selectable food pictures. The user can choose a particular food by touching and sliding. As shown in Fig. 1, the user can perform a virtual feeding action by dragging any food picture to the position of the virtual companion. After the virtual companion receives the food, its body posture changes to a state of eating. When feeding ends, the virtual companion's food-need value decreases, and the amount of the decrease can vary with the kind of food the user selected. Different food choices can also have different effects on the virtual companion's long-term PAD values; for example, when the user selects meat, the virtual companion's Arousal increases, while vegetables reduce its Arousal. Food choices can also influence the virtual companion's growth process; for example, meat can make the virtual companion's appearance become stronger.
When the virtual companion's need for drinking water exceeds its threshold, it can prompt the user by panting coarsely while showing an unhappy or thirsty expression, and a container of water can appear in the program interface. When the user taps the container, the interface can display several different beverages for the user to choose from. The user can perform a watering action with a touch gesture similar to feeding, as shown in Fig. 2. As with feeding, giving water can change the long-term PAD values accordingly and affect the development of the virtual companion. For example, unhealthy beverages can make the virtual companion's figure become fat and lower its long-term Arousal, while sugary drinks can raise the Arousal of the short-term PAD.
When the virtual companion's excretion need exceeds its threshold, the virtual companion can enter an excretion state on its own, in which excreta are shown on the screen. While excreta are present in the interface, the virtual companion's Pleasure decreases, and it performs a series of actions showing that it dislikes the bad smell. The user can touch the picture representing the excreta and slide it off the screen, or drag a cleaning tool over the excreta picture, to perform the cleaning behavior.
When the virtual companion's need for cleaning exceeds its threshold, the virtual companion can remind the user with a scratching action, by showing stains on its body or a 3D smoke effect emanating from its body, or by showing a bathtub on the screen. The user can clean the virtual companion by dragging it into the bathtub with a touch, or by touching the stains on the virtual companion's body one by one, each touched stain disappearing in turn.
When the virtual companion's need for entertainment exceeds its threshold, the virtual companion can raise the Arousal index of the short-term PAD and remind the user by barking, jumping, or picking up a toy in the on-screen scene. The user can satisfy the entertainment need by playing an interactive game with the virtual companion. The game's interaction can include the multi-touch interaction described above; for example, the game can have the user pull a toy out of the virtual companion's mouth via multi-touch. Interactive games can markedly raise the Pleasure and Arousal of the virtual companion's short-term PAD and can directly increase the Pleasure and Arousal of the long-term PAD.
Intelligent dialogue function of the virtual companion and the back-end support system
A key technique of the present invention is the addition of an intelligent dialogue function to the virtual companion. The intelligent dialogue function can be realized by artificial intelligence, or controlled remotely by a human through a networked system. When a human participates, the virtual companion serves as the avatar through which the remote controller is represented in the user interface. Compared with realizing the dialogue entirely with artificial intelligence, the advantage of having conversation content controlled by a remote human is that it provides the user with an experience closer to conversing with a real person.
A human assistant can remotely control the virtual companion from a location geographically far from the user; for example, the user may be in the United States and the human assistant in the Philippines or India. The human assistant runs a remote-control interface on a computer connected over the Internet to the tablet or other device running the virtual companion. In an exemplary embodiment, the human assistant can log into the assistant software system through an interface such as that shown in Figure 6. After logging in, the human assistant can monitor the states of multiple virtual companions through an interface such as that shown in Figure 7. Likewise, multiple human assistants can log into their respective software systems and monitor the same virtual companion, or the same group of virtual companions, simultaneously. When a virtual companion needs human assistance, the assistant software can enter the one-to-one control interface shown in Figure 8, either automatically or at the assistant's initiative. In this interface, the human assistant can control the behavior of the virtual companion by clicking buttons on the interface or entering text. The series of interactions described above corresponds to the interactive functions contained in the Prototype 1 wireframes in Figure 9. Figure 10 shows the workflow of the human assistant, and Figure 11 shows the deployment relationships among the assistant software, the virtual companion software, and the central control server.
When the virtual companion converses with the user using computer-generated artificial intelligence, its AI process can hand interactive tasks with higher uncertainty over to the remote human assistant. The assistant software can remind the remote assistant to intervene in the virtual companion's current interaction with the user by displaying prompt information in the corresponding window of the multi-window monitoring interface (similar to the information display in Figure 7); or, for any assistant not currently controlling any virtual companion one-to-one, the assistant program can jump directly to the one-to-one control interface shown in Figure 8. When the assistant software is in the multi-window monitoring state, it can indicate through a series of displayed cues which virtual companions are currently interacting with their users; these cues include the audio stream content of each window in the multi-window interface, changes in the sound volume picked up by the microphone of the tablet running the virtual companion, and so on. Using these cues, the human assistant can choose to enter the one-to-one control interface of a virtual companion that is interacting, and provide the user with a better interactive experience by controlling the virtual companion's behavior. When the assistant intervenes midway through an exchange between the virtual companion and the user, the one-to-one interface can also display the conversation history between the virtual companion and the user, obtained by speech recognition, so that the assistant can promptly grasp the context of the current conversation. The control information from each assistant intervention in an interaction can serve as input for the subsequent improvement of the artificial intelligence model.
When the human assistant controls the virtual companion through the one-to-one control interface, the assistant can learn the user's current state and speech content from the video and audio streams of the virtual companion displayed in the interface. The assistant types the verbal reply the virtual companion should make into the chat window of the interface. On the virtual companion's side, when the text message is received, text-to-speech technology can be used to read the received text aloud. The 3D model of the virtual companion can simultaneously perform speaking actions, such as continuously opening and closing its mouth. The action can play at a constant or random speed, or at a variable speed according to the content of the speech; alternatively, the frequency and amplitude of the speaking action can be controlled by the lip-synchronization information output by the speech engine. While the virtual companion reads the received text aloud, the screen can also display the text as captions, so that users with hearing problems can converse with the virtual companion.
There is a certain delay between the assistant hearing the user's speech and completing the typed reply. To reduce this delay, assistants can be specially trained to click the send key as soon as part of a reply has been entered, continuing to type the rest while the virtual companion reads out the content already received. Alternatively, the chat window of the one-to-one control interface can automatically send content to the virtual companion as each word is completed (for example, after a series of letters followed by the first space). An automatic reply content generated by computer, combining speech recognition with artificial intelligence, can also be displayed in the input window of the dialog box; when the assistant judges that this reply is satisfactory, it can be sent directly to the virtual companion. All interaction content entered under the assistant's supervision, whether generated by machine or typed manually by the assistant, can later serve as training-set content for improving the artificial intelligence module through machine learning. The one-to-one control interface can also display a series of candidate replies for the assistant to choose from, so that the assistant only needs to click the most fitting reply. When the assistant types input, an autocomplete function can be used: from the few letters typed so far, common phrases or words beginning with that string are displayed, reducing the amount of content the assistant needs to type. The assistant can also achieve fast input using information in the customer relationship management service (the client logs and memo information mentioned in the appendix). For example, clicking a user's name in the customer relationship management interface can insert the customer's name into the dialog box. Or input can be made through aliases: when the assistant types "/owner", the alias in the chat input box is automatically converted to the name of the client using the corresponding virtual companion. The data of the customer relationship management service can serve as input to the autocomplete function mentioned above.
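The alias-expansion shortcut described above could be implemented roughly as follows; the alias names, CRM field names, and example record are hypothetical, introduced only for illustration:

```python
def expand_aliases(text, crm_record):
    """Replace '/alias' tokens in the assistant's chat input with
    fields looked up from the customer relationship management record."""
    aliases = {
        "/owner": crm_record.get("client_name", ""),
        "/pet": crm_record.get("companion_name", ""),
    }
    for alias, value in aliases.items():
        text = text.replace(alias, value)
    return text

record = {"client_name": "Betty", "companion_name": "Buddy"}
line = expand_aliases("Good morning /owner!", record)
```

A deployed system would pull the record from the CRM service for the virtual companion currently being controlled, so the same shortcut works across all of an assistant's sessions.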
The conversation content between the virtual companion and the user can also be generated by an expert system, or by an artificial intelligence system containing expert knowledge. The expertise here includes knowledge of psychology, psychiatry, geriatric care, sociology, and related fields. Such an expert system can generate optimized reply content based on the user's current condition and speech, for example question-and-answer content designed to stimulate brain activity in patients suffering from senile dementia. A limitation of such expert systems, however, is that it is difficult to use speech recognition to convert arbitrary user speech into content matching the expert system's input format. For example, when the user is asked "How have you been lately?", the expert system can make its next judgment from the three standardized user responses "I'm fine", "So-so", and "Not well"; but when the user answers something like "Eh, I'm not too sure, all right I suppose", the expert system cannot proceed. In that case the human assistant must make a judgment by listening to the audio from the user and select the closest standardized input (in this instance, "So-so"). The assistant's intervention allows the user to continue conversing with the expert system in natural language, while reducing the errors the expert system might produce from mismatched input. The assistant can also choose to end the expert system's assistance at any time during the conversation, modify the expert system's output to make it more colloquial, or revise at any time the input parameters the expert system needs according to the user's current condition. For example, one input parameter of the expert system can be the user's current pain index: the assistant can observe the user's facial expression through the video stream and adjust the pain index with a parameter-control slider in the one-to-one interface. In addition, the expert system can also be given PAD values similar to the virtual companion's mood index, so that it can respond appropriately to the user's speech under different moods.
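Mapping a free-form utterance to the nearest standardized expert-system input could be assisted by a similarity suggestion that the human assistant confirms or overrides. The following is a naive string-similarity sketch; a deployed system would need semantic matching, and the candidate set and cutoff are assumptions:

```python
import difflib

STANDARD_INPUTS = ["I'm fine", "So-so", "Not well"]

def closest_standard_input(utterance, cutoff=0.4):
    """Suggest the standardized input closest to the user's free-form
    answer, or None when nothing is close enough and the human
    assistant must choose manually."""
    matches = difflib.get_close_matches(
        utterance, STANDARD_INPUTS, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

Presenting such a suggestion next to the audio stream would let the assistant select a standardized input with one click rather than deciding from scratch each time.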
In an exemplary embodiment, as in Figure 8, the human assistant must press the "Alt" key together with the "Enter" key to send content entered for the virtual companion, while in the chat interface among multiple human assistants, pressing "Enter" alone sends the message. Adopting different send gestures in this way prevents the assistant from mistakenly sending to the virtual companion content that was intended for other assistants.
When the virtual companion reads the content entered by the human assistant using text-to-speech technology, the voice can be set to a lovable child's voice with a slow speaking rate, so that hard-of-hearing users can better understand what the virtual companion says. The virtual companion's voice, intonation, and speaking rate can change with its PAD mood values, or with its virtual age and appearance, or can be set remotely by the human assistant through the assistant interface.
The human assistant can mark up intonation while entering reply content in the dialog box; after receiving the message, the virtual companion adjusts its own intonation settings while reading the content aloud. The virtual companion can also adjust its intonation on its own according to its current PAD mood values; for example, when the virtual companion's Arousal is high, its speaking rate and volume increase, and it more often uses a rising tone at the ends of sentences.
As shown in Figure 8, the one-to-one interface displays the current virtual companion's emotional state (PAD) values, along with the interaction history with the user, the user's schedule, and so on. This information lets the human assistant understand the user's state and the topics that can be raised in the interaction. For example, in Figure 8, according to the virtual companion's PAD values, the assistant can have the virtual companion converse with the user in a happy tone, ask during the exchange about the recent situation of Bob, a friend of the user Betty, or remind Betty that she will have lunch shortly.
Besides typing, the human assistant can reply through other input modes. For example, the assistant may speak the reply aloud; the assistant software converts the speech to text via speech recognition and then sends it to the virtual companion. Alternatively, the assistant's audio can be captured directly and sent to the virtual companion, where voice-conversion technology transforms it into the preset voice and intonation at the companion's end. These input-processing modes ensure that when different human assistants control the same virtual companion in interaction with a user, the voice the user hears remains consistent and does not change from assistant to assistant.
As shown in Fig. 8, in addition to controlling the virtual companion's voice dialogue, the human assistant can control the companion's psychological needs, PAD emotional values, facial expressions, and autonomous behaviors (such as breathing and blinking), and can trigger specific actions (such as dancing or rolling over). The assistant can even record custom actions for the companion by clicking and dragging different body parts in the companion's display box. When the user touches the virtual companion, the companion sends the position of the current touch back over the network for display in the assistant interface. The assistant can also control where the companion's eyes focus. In Fig. 8, the "Look" marker indicates that the companion's eyes are gazing at the region of the user's nose. The assistant can change this position by dragging the "Look" marker; the position information is passed to the companion program, which steers the eyes of the companion's three-dimensional model toward the corresponding location.
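Steering the model's eyes from the dragged "Look" marker reduces to converting a position in the camera frame into eye rotation angles. A minimal sketch under the assumption of a pinhole-style linear mapping and illustrative field-of-view values (the patent does not specify the camera geometry):

```python
def gaze_angles(look_x, look_y, fov_h_deg=60.0, fov_v_deg=45.0):
    """Convert a normalized "Look" marker position (0..1 in the camera
    frame, with (0.5, 0.5) at the frame center) into yaw/pitch angles
    for the companion model's eyes."""
    yaw = (look_x - 0.5) * fov_h_deg     # degrees, positive = look toward user's left side of frame
    pitch = (0.5 - look_y) * fov_v_deg   # degrees, positive = look up
    return yaw, pitch
```

The companion-side renderer would then rotate the eye bones of the three-dimensional model by these angles each frame.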
Virtual companion monitoring system
A key feature of the present invention is that remote monitoring can improve the quality and efficiency of the work of elder-care institutions and home-service agencies. Because the virtual companion can satisfy the user's need for companionship and provide remote safety monitoring, the elder-care institution only needs to dispatch a small number of staff for work that requires a physical presence, such as cleaning and cooking for the elderly.
Human assistants monitor the state of virtual companions through the multi-window monitoring interface shown in Fig. 7. Several human assistants, along with one or more supervisors, can watch their respective interfaces simultaneously and exchange information through a staff chat window. The eight virtual companions shown in Fig. 7 are deployed in the same nursing home, and the human assistants and supervisor monitoring this group of companions can be designated to serve that nursing home exclusively. An alternative allocation scheme makes the pairing between all virtual companions and human assistants dynamic: after a human assistant logs into the system and starts a work shift, the assistant can be dynamically assigned a set of similar virtual companions. Here, "similar" refers to similar user backgrounds, similar companion states (for example, all in pet-dog form), and so on. Under dynamic allocation, each virtual companion can be assigned to at least two human assistants for monitoring, and the sets of companions monitored by any two assistants may partially overlap. In Fig. 7, each multi-window interface can monitor eight virtual companions. In practice, the number of monitoring windows can be increased or decreased dynamically for each assistant, based on the number of video streams the assistant's changing network conditions can sustain, the size of the assistant's display, the assistant's skill at monitoring multiple windows, and so on. To further increase the number of companions each assistant can monitor, the video windows in the multi-window monitoring interface can be replaced with smaller abstract data windows, which use a series of dynamic icons to show changes in the audio and video input the companion receives, touch input, and the like. In parallel, the monitoring interface can alert the assistant to each kind of icon change with a sound effect. For example, one alert tone sounds when a companion's speech input changes abruptly, and another when a companion is touched. Another basis for dynamically assigning human assistants is past usage records: companions that have required human assistance more frequently are assigned greater redundancy, that is, they are shown simultaneously on the monitoring interfaces of more assistants. A central control system in the assistant software dynamically matches the currently connected virtual companions with human assistants. The central control system can assign each assistant a mix of companions that need human assistance more often and companions that request help less often, so that workload is spread evenly across assistants. When, as described earlier, the system automatically switches an arbitrary available assistant to the one-on-one control interface of a virtual companion, a newly assigned companion may face a long wait if its assigned assistant is interacting with another companion; when the central control system detects an excessive wait, it can reallocate automatically. The one-on-one monitoring time between each assistant and each companion can be recorded in a database as the basis for time-based service billing of users.
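The dynamic allocation described above, higher-need companions get more redundant coverage while workload stays balanced, can be sketched as a greedy assignment. The data shapes and the redundancy policy below are illustrative assumptions; the patent specifies the goals (redundancy proportional to historical assistance frequency, even workload) but not an algorithm:

```python
def allocate(companions, assistants, redundancy):
    """Greedy dynamic-allocation sketch. `companions` maps companion id ->
    historical assistance frequency; `redundancy(freq)` returns how many
    assistants should monitor a companion with that frequency at once.
    Each companion is assigned to the currently least-loaded assistants."""
    load = {a: [] for a in assistants}
    # Assign the most demanding companions first so their redundancy is honored.
    for cid, freq in sorted(companions.items(), key=lambda kv: -kv[1]):
        copies = redundancy(freq)
        # pick the `copies` assistants with the fewest assigned companions
        for a in sorted(load, key=lambda a: len(load[a]))[:copies]:
            load[a].append(cid)
    return load
```

A production allocator would also react to wait-time signals and reassign on the fly, as the text describes.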
The exact timing can be based on the following information: the times at which a one-on-one monitoring interface is opened or closed; the start or end times of the audio and video streams; the time at which the human assistant begins keyboard input in the one-on-one interface; start times recorded manually by the assistant; or a combination of the above.
Human assistants can be divided into different tiers, such as paid assistants, assistant supervisors, and volunteer assistants; even a user's family members can serve as human assistants.
When several human assistants monitor one virtual companion or one group of virtual companions at the same time, the team-member status indicators shown in Fig. 7 are needed to coordinate their cooperation. Status can be indicated by showing the mouse positions of the other assistants on each assistant's monitoring interface, with different cursor shapes representing the other assistants' current working states. For example, an hourglass icon indicates that an assistant is busy or away, and a pointing-finger icon indicates that an assistant is observing the indicated location through the multi-window monitoring interface. When an assistant enters a one-on-one monitoring interface, that assistant's icon disappears from the other assistants' multi-window interfaces, but a line of text beneath the window of the companion being monitored one-on-one shows who is monitoring it (for example, "BillHelper9" in Fig. 7). Before using this system, assistants are specially trained to keep the mouse on the screen position they are viewing. These team-status cues prevent several assistants from concentrating on the same virtual companion simultaneously, maximizing monitoring coverage while preserving redundancy. The system can also adjust the number of companions each assistant monitors, maximizing the number of companions per assistant while guaranteeing a minimum response speed.
Another critical function is that when a new virtual companion connects to the system, the system dynamically assigns human assistants to it, ensuring that at any time every virtual companion is monitored by no fewer than one human assistant. This dynamic allocation process comprises two stages:
1. Learning stage. When a new virtual companion joins the system, the system first assigns it a fixed number of human assistants whose monitoring periods do not overlap; that is, at any moment exactly one assistant monitors this companion. During the learning stage, the central control server records all interactions between this companion and its assistants. Each record includes a timestamp, the assistant's ID, and an interaction score (generated automatically from the user's satisfaction, or assigned by the supervisor responsible for that assistant or by the user). After a learning period of, say, two weeks, the central control server analyzes all interaction data for this companion and ranks the assistants who have interacted with it by average score. Alternatively, the learning stage can be skipped: the matching stage begins directly, and assistants are ranked dynamically during matching.
2. Matching stage. After the learning stage ends, the central control server assigns human assistants to the virtual companion according to the following rules. When the companion is in an inactive period (the user is unlikely to interact with the companion during this time), the central control server assigns it the minimum number (for example, one) of assistants satisfying these conditions: 1) the assistant currently monitors the fewest companions; and/or 2) over the past hour, the assistant's total one-on-one monitoring time across all monitored companions is the shortest; and/or 3) the assistant had the highest score during this companion's learning stage. These three conditions can carry different weights in the matching decision. When the companion is in an active period (the user is more likely to interact with it), the central control server preferentially assigns assistants satisfying: 1) the assistant currently monitors the fewest companions; and/or 2) over the past hour, the assistant's total one-on-one monitoring time across all monitored companions is the shortest; and/or 3) the assistant's cumulative one-on-one monitoring time with this companion is the longest; and/or 4) the assistant's score during this companion's learning and matching stages is the highest. During the matching stage, assistants continue to be scored as they monitor each companion, so that scores stay up to date.
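The weighted matching rules above can be sketched as a single scoring function. The field names, weight values, and linear combination are illustrative assumptions; the patent names the conditions and says they may be weighted, but leaves the formula open:

```python
def match_score(assistant, weights, active):
    """Weighted score for assigning an assistant to a companion:
    fewer currently monitored companions, less one-on-one time in the
    past hour, and a higher learning-stage score all raise the score;
    in the active period, cumulative one-on-one familiarity with this
    companion also counts."""
    score = (
        -weights["load"] * assistant["monitored_now"]
        - weights["recent"] * assistant["minutes_last_hour"]
        + weights["rating"] * assistant["stage_score"]
    )
    if active:
        score += weights["familiarity"] * assistant["minutes_with_companion"]
    return score

def best_assistant(candidates, weights, active):
    """Pick the highest-scoring candidate assistant."""
    return max(candidates, key=lambda a: match_score(a, weights, active))
```

Note how the same pool can yield different winners: during inactive periods the least-loaded assistant tends to win, while during active periods an assistant the user already knows can outrank a fresher one.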
Scoring of human assistants can also be automated by software analysis of the user's speech intonation: intonation indicating higher arousal and pleasure reflects greater user satisfaction with the current interaction. Scores can additionally draw on other sensor information, such as the user's skin conductance, skin color, or changes in pupil size, or on quantitative measures derived from analyzing how the user touches the screen.
Because the virtual companion can capture video and audio on the user's side through the camera and microphone of the device it runs on, and in order to ease the user's privacy concerns, the companion should give the user a clear indication whenever high-resolution input is enabled, showing that remote personnel may currently be able to hear and see them. This information can be conveyed to the user in a natural way through changes in the companion's appearance: for example, a glowing collar around the companion's neck, a change in its eye color, or a wide-eyed expression. In an exemplary embodiment, the companion's sleeping and awake postures indicate whether the video and audio streams are closed or open. This presentation lets users without a technical background understand intuitively whether audio or video is currently being collected. Other information that does not involve privacy, such as changes in ambient volume, changes in lighting, and screen-touch input, can be collected and uploaded to the back end at any time, regardless of whether the video and audio inputs are open.
Another critical service function of the remote human assistant is to contact third parties promptly when the user has an emergency or needs in-person help. The third parties may be staff of the nursing home where the virtual companion is deployed, the user's family members, and so on. Their contact information is stored in the system database together with the other log records related to each virtual companion. In Fig. 8, the contact information can be displayed in the region labeled "tab". When the human assistant clicks a contact entry, the system can automatically dial the corresponding telephone number or open an e-mail window.
The remote human-assistance system can also provide remote technical support for the virtual companion. One purpose of technical support is to keep the companion's hardware and software running normally. The companion periodically sends status information to the server over the network, and the assistant monitoring that companion receives the status information in real time. The assistant can also send specific instructions to the companion to obtain more information, such as a current screenshot of the companion's device, and can send instructions for the companion-side software to execute, such as "restart the companion software", "change the speaker volume", or "reboot the device". When the companion software runs abnormally, or during routine daily maintenance, executing remote instructions can restore the companion to its initial operating state. In the program design, a simple, stable daemon process can be used to guard and control the operation of the main program; the main program handles the companion's visual presentation, interaction control, and other functions that are more complex and more failure-prone. On receiving remote instructions, the daemon can close or restart the main program or perform other diagnostic operations on it. The daemon can be invoked periodically by the main program or the operating system to send and receive status information and pending instructions.
Other functions of the virtual companion
In addition to the functions described above, the virtual companion can provide other functions that enrich the user's experience and improve the user's quality of life.
On request, the virtual companion can read aloud news, weather, or any other text-based information from the network. The user can state a need to the companion by voice, such as "news about the general election" or "the weather in Tokyo", and the companion can understand the request through speech recognition or with the help of a human assistant. Once the companion recognizes the request, it can perform an animation of taking out a newspaper or document while the companion software searches for the corresponding content over its background network connection. After the content is retrieved, the companion reads it aloud using text-to-speech. Besides retrieving and reading information on demand, the user's family can supply content the user may find interesting in advance, and the companion can proactively read such content to the user on a regular schedule. For example, the companion can tell the user, "Your son posted a new update online today", and then read the update aloud.
The virtual companion can provide pictorial information to the user in a similar way. For example, the companion can show the user a picture frame or photo album, presenting the requested pictures inside the frame (as shown in Fig. 5). The companion can also download photos that family members have uploaded to a preset download location and show them to the user. Similarly, the companion can present audio (played through a virtual radio or phonograph) and video for the user; the material can be downloaded from a website such as YouTube according to the user's request, or uploaded to a designated location in advance by the user's family. In addition, the user can instruct the companion by voice to start a video chat with family; on receiving the instruction, the companion can display a picture frame or television on screen and show the family member's video image in it. The family members' video-chat account information is stored in advance in the companion's memo information.
The virtual companion can detect the user's breathing rate through the camera or microphone and synchronize its own breathing rate with it. The companion can then help stabilize the user's mood by gradually slowing its own breathing. This function can be used to calm agitated dementia patients or restless children with autism.
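The calming sequence, match the user's breathing, then slow down gradually, can be sketched as a schedule of breathing rates for the companion's animation. The target rate and step size below are illustrative assumptions; the patent specifies only the match-then-slow behavior:

```python
def entrainment_schedule(user_bpm, target_bpm=6.0, step=0.5):
    """Return the sequence of breathing rates (breaths per minute) the
    companion's breathing animation should follow: start by matching the
    user's detected rate, then step down toward a calm target rate."""
    rates = [user_bpm]
    while rates[-1] - step > target_bpm:
        rates.append(rates[-1] - step)
    rates.append(target_bpm)
    return rates
```

Each rate in the schedule would be held for a few breath cycles before moving to the next, so the slowdown stays gentle enough for the user to follow.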
Through techniques similar to virtual reality, physical objects can be made to interact with the virtual companion. For example, during prototype development we found that users like sharing common experiences with the companion; a user may wish, for instance, to nap together with it. The companion can offer other shared experiences, such as taking medication, through such techniques. In an exemplary embodiment, the user places the container of the medicine to be taken in front of the camera. The companion identifies the object as a medicine container using image recognition or with the help of the remote human assistant, and after checking the user's preset calendar information to confirm that the user should take this medicine now, displays a piece of virtual food on the companion's screen. The virtual food can take the shape of the user's medicine or another shape, such as a bone. When the user begins to take the medicine, the companion detects this behavior by similar techniques and plays an eating animation with a happy expression. By linking virtual food to real medicine, the user feeds the virtual pet by taking medicine on schedule. This strengthens the user's sense of personal responsibility while building a regular medication habit, and the companion's visible delight at receiving food gives the user positive feedback that improves the medication experience.
In addition, the user can interact with the virtual companion by showing cards bearing specific patterns to the companion's camera. When the companion program detects the content on a card, it displays a corresponding virtual object in the on-screen virtual environment. By moving the card in the physical environment, the user moves the corresponding object in the virtual environment, giving the user a new way to interact with the companion.
Some tablet computers have near-field communication (NFC) or RFID capability; the user can place a communication tag near the tablet, and the companion software, on receiving the tag information, displays the corresponding object on screen for interaction with the companion. The tablet hosting the companion can be fitted with a tag-collection slot for receiving NFC tags. The slot is positioned near the tablet's NFC module, so that when the user drops a tag into the slot, the NFC module can read it. When the companion software detects an NFC read event, it shows the corresponding object being tossed into the on-screen environment. On tablets without NFC capability, a similar effect of tossing objects into the companion's virtual environment can be achieved by detecting card-drop events with the camera through image recognition.
The virtual companion can also be implemented as a web page, with a human assistant controlling the web version one-on-one only for a limited period; this implementation mainly lets users encountering the virtual companion for the first time try it out without dedicated hardware. A user without a registered companion account can open the trial web page in a browser and click the start button. The user then enters a waiting state. If only one web-version user is waiting, the system assigns a currently idle human assistant and the trial begins. If several web-version users are waiting, the system places them in a queue ordered by the time their start-button clicks were received; after the user at the head of the queue has tried the service for a set time, the system automatically ends that trial and begins the trial period of the next user in the queue. During a trial, the companion captures video and audio through the camera and microphone of the computer the user is currently using. If the computer does not support touch input, mouse clicks can simulate the user touching the companion. When a user's trial period begins, a countdown timer of fixed duration (for example, 2 minutes) starts, and the trial ends automatically when the countdown expires. The human assistant can also end the current trial early through the assistant interface. When the current trial ends, if users are still waiting in the queue, the system automatically connects the next user to the assistant service.
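The trial flow above is a first-come-first-served queue with a fixed countdown and an early-exit path for the assistant. A minimal sketch, with times as plain numbers (e.g. seconds) and the 2-minute duration taken from the text's example; the class and method names are assumptions:

```python
from collections import deque

class TrialQueue:
    """Web-trial scheduler sketch: FIFO queue, fixed trial duration,
    automatic handover to the next waiting user."""
    def __init__(self, trial_seconds=120):
        self.trial_seconds = trial_seconds
        self.waiting = deque()
        self.current = None          # (user, end_time) of the active trial

    def press_start(self, user, now):
        """User clicks the start button and joins the queue."""
        self.waiting.append(user)
        self._advance(now)

    def tick(self, now):
        """Call periodically; ends an expired trial and starts the next one."""
        if self.current and now >= self.current[1]:
            self.current = None
        self._advance(now)

    def end_early(self, now):
        """Human assistant ends the current trial through the assistant interface."""
        self.current = None
        self._advance(now)

    def _advance(self, now):
        if self.current is None and self.waiting:
            self.current = (self.waiting.popleft(), now + self.trial_seconds)
```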
The virtual companion system also includes automatic training for human assistants. An assistant undergoing automatic training uses a special version of the assistant software that includes simulated companion video and audio input and simulated user touches on the companion. All operations of the trainee, including keyboard input, mouse movement, and clicks, are recorded by the software, and the training results are later used for evaluation and scoring.
Improvements over existing inventions
Institutions such as nursing homes and retirement communities have long suffered from shortages of care staff and high turnover; at some American nursing homes, nurse turnover approaches 100%. As a result, nursing-home residents spend the overwhelming majority of their time without human companionship. The resulting loneliness and social isolation negatively affect the mood and mental state of the elderly and can accelerate the progression of age-related conditions such as dementia. Because nursing care is expensive and time-limited, and because keeping pets to accompany the elderly requires additional labor to care for the pets, a series of solutions has emerged that use artificial intelligence to provide companionship for the elderly. Several feasible approaches are listed below, with a discussion of how they differ from the present invention.
Paro (http://www.parorobots.com) is a physical robot designed specifically to relieve loneliness in the elderly. Because its manufacturing cost is high, it is expensive and difficult to deploy at scale. Moreover, its actions are limited to simple crawling and limb movement, its facial expressions are limited, and it lacks human-level language ability.
Children's virtual-pet toys (for example, U.S. Patent Application No. 2011/0086702) are electronic games developed for children. Their functions are complicated and their operation complex, making them unsuitable for the elderly. Some inventions in this class can receive the user's touches and mouse clicks (for example, U.S. Patent Application No. 2009/0204909; Talking Tom: http://outfit7.com/apps/talking-tom-cat), but their reactions to input are mostly a limited set of pre-generated responses or simple imitations of the user's input, so after a period of use the user sees repeated content and grows bored. Compared with these inventions, the present invention generates its reactions to touch and mouse-click input dynamically, with a visually more natural result.
Existing virtual-assistant systems that can process speech input either simply repeat the user's input as output (for example, Talking Tom) or generate output content by artificial-intelligence techniques after recognizing the user's speech (for example, U.S. Patent No. 6,772,989; U.S. Patent Application No. 2006/0074831; Siri: U.S. Patent Application No. 2012/0016678). Because speech recognition and artificial-intelligence technology are not yet mature, the above patents find it difficult to achieve the real-person conversational effect that this patent achieves through the intervention of human assistants.
Human-assisted intelligent systems (for example, U.S. Patent Application No. 2011/0191681) have been used for retail customer-service systems, remote monitoring, and the like, but have not been applied to scenarios resembling a virtual companion.
Other applications of the present invention
When a user interacts with the virtual companion, usage statistics can be collected as a data source for analyzing and diagnosing the user's physical and mental health (for example, functional decline or the progression of dementia). For instance, when an elderly person shows symptoms of depression or of withdrawal from social activity, the frequency of interaction with the companion may decrease. Companion usage statistics can therefore serve as a more accurate, non-intrusive source of clinical diagnostic data.
The present invention can also be applied to other groups, such as young people. Thanks to the intervention of human assistants, the invention has strong entertainment value; it can also serve as the user's life assistant, managing schedules or completing other network-related tasks.
The present invention can also be applied to the care and treatment of children with autism. Autistic children tend to communicate with non-human subjects. Because the virtual companion can be displayed in the form of an animal, while the presence of the human assistant endows it with a human-like mode of communication, autistic children can readily accept the virtual companion and, with its assistance, gradually adapt to communicating with people.
The present invention can also be used as a children's toy. In that case, more game and interaction functions should be added, along with richer three-dimensional model poses for the companion's different states when treated in different ways by the corresponding user.
The present invention can be used by orthodontists to provide their patients with daily dental-care guidance and reminders. Over an orthodontic treatment cycle, multiple virtual companions can be preset to cover the different phases of the cycle, each configured in advance with the content to communicate to the patient by voice or other means during its phase, thereby providing fully automatic companionship and reminders across the whole cycle.
The present invention can define multi-touch reaction modes for a variety of three-dimensional models, not only human or animal models. For example, multi-touch reactions can be defined for a three-dimensional flower model.
The present invention can be used to control a physical robot. For example, the tablet running the virtual companion can be connected to mechanical parts, and instructions sent by the companion can control the motion of those parts.
Other physical attachments can be added to the tablet running the virtual companion to improve the user's visual experience. For example, when the companion is displayed as a pet dog, an external doghouse-shaped frame can be added around the tablet.
In addition, physical attachments with particular tactile effects can be added to the tablet to improve the user's tactile experience. For example, a soft, furry shell can be added around the tablet, with an opening through which the user can insert fingers.
By analyzing the characteristics of touch input, the virtual companion can also determine whether a touch input was caused by liquid on the screen, since touch input triggered by liquid typically fluctuates rapidly in intensity and position. After detecting a liquid touch, the companion can display an interaction between itself and the liquid (for example, rain) in the virtual scene.
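The liquid-versus-finger distinction rests on rapid fluctuation in position and intensity between consecutive touch samples. A heuristic sketch; the sample format and the jitter thresholds are illustrative assumptions, as the patent names the signal but not concrete values:

```python
def is_liquid_touch(samples, pos_jitter=25.0, intensity_jitter=0.3):
    """Classify a touch stream as liquid-triggered when its position or
    intensity fluctuates rapidly. `samples` is a list of (x, y, intensity)
    tuples from consecutive touch events on the capacitive screen."""
    if len(samples) < 2:
        return False
    pos_moves, intensity_moves = [], []
    for (x0, y0, i0), (x1, y1, i1) in zip(samples, samples[1:]):
        pos_moves.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
        intensity_moves.append(abs(i1 - i0))
    mean_pos = sum(pos_moves) / len(pos_moves)
    mean_int = sum(intensity_moves) / len(intensity_moves)
    return mean_pos > pos_jitter or mean_int > intensity_jitter
```

A deliberate finger drag moves smoothly with near-constant pressure, so its per-event deltas stay well under both thresholds, while droplets register erratic jumps.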

Claims (20)

1. A virtual companion device, comprising:
means for displaying a virtual companion;
means for detecting user input;
means for changing how the virtual companion is displayed according to the detected user input.
2. The device of claim 1, wherein:
the detected user input is a touch;
the means for changing the display of the virtual companion reads the position of the user input and determines which body part of the virtual companion it corresponds to;
the means for changing the display of the virtual companion includes moving that body part of the virtual companion.
3. The device of claim 1, wherein:
the virtual companion is an animated figure capable of presenting one or more actions, each action representing the virtual companion's reaction to a particular stimulus;
when multiple stimuli are received, the multiple corresponding actions can be blended according to weights.
4. The device of claim 3, wherein:
the virtual companion can display a virtual object containing particular content, the content being obtained from a remote database.
5. The device of claim 2, further comprising:
means for remotely controlling the virtual companion;
means for making the virtual companion produce voice output under remote control.
6. The device of claim 5, wherein:
the virtual companion can be presented in non-human form.
7. The device of claim 1, further comprising:
means for stimulating psychological needs of the virtual companion;
means for expressing the virtual companion's psychological needs through blended animation.
8. the device described in claim 1, also comprises:
A kind of by image recognition identification physical item, and transformed the method being shown as the virtual objects that virtual companion can exchange.
9. the device described in claim 8, also comprises:
Physical item is placed on ad-hoc location so that identify for guiding user by an entity physical unit; This physical arrangement can also be used for supporting fixing virtual companion's exhibiting device.
10. the device described in claim 1, also comprises:
Entity physical unit invest virtual companion's exhibiting device surrounding or near, show a part for visual effect as virtual companion.
Device described in 11. claims 1, wherein:
The mode that detection user touches input adopts capacitive touch screen;
Virtual companion can carry out corresponding to the touch signal triggered due to screen existing liquid.
12. A method of controlling one or more virtual companions, comprising:
each virtual companion transmitting data to a server;
the server forwarding the data received from the virtual companions to a network computer;
the network computer displaying the state of each virtual companion;
a user of the network computer selecting any one virtual companion and opening a detailed view containing its detailed status;
the virtual companion selected for the detailed view sending video and audio stream data to the network computer user;
the network computer user sending instructions to the selected virtual companion.
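The relay described in claim 12 can be sketched as a minimal in-memory server. This is a hypothetical illustration of the data flow only: a real deployment would use network sockets or a message broker, and the claim's video/audio streaming is omitted.

```python
class CompanionServer:
    """Relays state between virtual companions and network-computer
    users: companions push state; a user lists all companions,
    selects one for a detailed view, and sends instructions back."""

    def __init__(self):
        self.states = {}        # companion_id -> latest state dict
        self.instructions = {}  # companion_id -> queued instructions

    def push_state(self, companion_id, state):
        # Called by each virtual companion when it transmits data.
        self.states[companion_id] = state

    def list_states(self):
        # Overview shown on the network computer for every companion.
        return dict(self.states)

    def detailed_view(self, companion_id):
        # Selecting a companion in the patent additionally starts
        # video/audio streaming; here we just return its full state.
        return self.states[companion_id]

    def send_instruction(self, companion_id, instruction):
        # Instruction from the network-computer user to one companion.
        self.instructions.setdefault(companion_id, []).append(instruction)
```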
13. The method of claim 12, wherein:
a virtual companion not in the detailed-view state sends information to the network computer at a lower fidelity than a virtual companion in the detailed view;
when a virtual companion is selected for the detailed view on the network computer, the virtual companion's three-dimensional model appears more attentive on the display device.
14. The method of claim 12, wherein:
each controlled virtual companion is the device of claim 5.
15. The method of claim 12, wherein:
commands sent to a virtual companion are generated by artificial intelligence;
commands generated by the artificial intelligence can be modified and approved by the user of the network computer before being sent.
16. A system for controlling multiple virtual characters by multiple people, comprising:
multiple virtual characters, each having its own record of historical events and character traits;
multiple people, each of whom can remotely control each virtual character through a network computer;
a means by which a person can enter events into a virtual character's event history;
a means by which a person can read each virtual character's event history.
17. The system of claim 16, wherein:
each virtual character is the device of claim 5.
18. The system of claim 16, wherein:
each person controls the virtual characters by the method of claim 12.
19. The system of claim 16, further comprising:
a means for dynamically assigning the correspondence between controllers and controlled virtual characters so as to maximize labor efficiency.
20. The system of claim 19, further comprising:
a means for recording each person's performance when controlling each virtual character;
wherein the dynamic assignment allocates controllers to virtual characters according to historical performance so as to maximize overall performance.
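The performance-based assignment of claims 19 and 20 is an instance of the classic assignment problem. Below is a brute-force sketch under assumed scoring data; the scores themselves are hypothetical, and a real system with larger controller pools would use the Hungarian algorithm or similar rather than enumerating permutations.

```python
from itertools import permutations

def assign_controllers(scores):
    """Given scores[i][j] = historical performance of controller i
    when controlling virtual character j, return the assignment
    (character index for each controller, as a tuple) that maximizes
    total performance. Brute force: O(n!) in the number of controllers,
    so only suitable for small pools."""
    n = len(scores)
    return max(permutations(range(n)),
               key=lambda perm: sum(scores[i][perm[i]] for i in range(n)))
```

With two controllers who each perform far better with one particular character, the maximizing assignment pairs each controller with their stronger match.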
CN201480002468.2A 2013-07-10 2014-07-09 Virtual companion Pending CN104769645A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US13/939,172 US20140125678A1 (en) 2012-07-11 2013-07-10 Virtual Companion
US13/939,172 2013-07-10
PCT/IB2014/062986 WO2015004620A2 (en) 2013-07-10 2014-07-09 Virtual companion

Publications (1)

Publication Number Publication Date
CN104769645A true CN104769645A (en) 2015-07-08

Family

ID=52280769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201480002468.2A Pending CN104769645A (en) 2013-07-10 2014-07-09 Virtual companion

Country Status (2)

Country Link
CN (1) CN104769645A (en)
WO (1) WO2015004620A2 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109891357A (en) * 2016-10-20 2019-06-14 阿恩齐达卡士技术私人有限公司 Emotion intelligently accompanies device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6545682B1 (en) * 2000-05-24 2003-04-08 There, Inc. Method and apparatus for creating and customizing avatars using genetic paradigm
US20090055019A1 (en) * 2007-05-08 2009-02-26 Massachusetts Institute Of Technology Interactive systems employing robotic companions
CN101828161A (en) * 2007-10-18 2010-09-08 微软公司 Three-dimensional object simulation using audio, visual, and tactile feedback
CN201611889U (en) * 2010-02-10 2010-10-20 深圳先进技术研究院 Instant messaging partner robot

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6239785B1 (en) * 1992-10-08 2001-05-29 Science & Technology Corporation Tactile computer input device
WO2000066239A1 (en) * 1999-04-30 2000-11-09 Sony Corporation Electronic pet system, network system, robot, and storage medium
US8795072B2 (en) * 2009-10-13 2014-08-05 Ganz Method and system for providing a virtual presentation including a virtual companion and virtual photography
JP5812665B2 (en) * 2011-04-22 2015-11-17 任天堂株式会社 Information processing system, information processing apparatus, information processing method, and information processing program

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104959985A (en) * 2015-07-16 2015-10-07 深圳狗尾草智能科技有限公司 Robot control system and robot control method thereof
CN105141587A (en) * 2015-08-04 2015-12-09 广东小天才科技有限公司 Virtual doll interaction method and device
CN105141587B (en) * 2015-08-04 2019-01-01 广东小天才科技有限公司 A kind of virtual puppet interactive approach and device
CN108886532A (en) * 2016-01-14 2018-11-23 三星电子株式会社 Device and method for operating personal agent
CN108886532B (en) * 2016-01-14 2021-12-17 三星电子株式会社 Apparatus and method for operating personal agent
CN105957140A (en) * 2016-05-31 2016-09-21 成都九十度工业产品设计有限公司 Pet dog interaction system based on technology of augmented reality, and analysis method
CN106375774A (en) * 2016-08-31 2017-02-01 广州酷狗计算机科技有限公司 Live broadcast room display content control method, apparatus and system
CN106375774B (en) * 2016-08-31 2019-12-27 广州酷狗计算机科技有限公司 Method, device and system for controlling display content of live broadcast room
CN106850824A (en) * 2017-02-22 2017-06-13 北京爱惠家网络有限公司 A kind of intelligent service system and implementation method
CN107168174B (en) * 2017-06-15 2019-08-09 重庆柚瓣科技有限公司 A method of family endowment is done using robot
CN107322593A (en) * 2017-06-15 2017-11-07 重庆柚瓣家科技有限公司 Can outdoor moving company family endowment robot
CN107168174A (en) * 2017-06-15 2017-09-15 重庆柚瓣科技有限公司 A kind of method that use robot does family endowment
WO2019037076A1 (en) * 2017-08-25 2019-02-28 深圳市得道健康管理有限公司 Artificial intelligence terminal system, server and behavior control method thereof
CN107808191A (en) * 2017-09-13 2018-03-16 北京光年无限科技有限公司 The output intent and system of the multi-modal interaction of visual human
CN108536386A (en) * 2018-03-30 2018-09-14 联想(北京)有限公司 Data processing method, equipment and system
CN108874123A (en) * 2018-05-07 2018-11-23 北京理工大学 A kind of general modular virtual reality is by active haptic feedback system
CN109965466A (en) * 2018-05-29 2019-07-05 北京心有灵犀科技有限公司 AR virtual role intelligence jewelry
CN110653813A (en) * 2018-06-29 2020-01-07 深圳市优必选科技有限公司 Robot control method, robot and computer storage medium
CN109521878A (en) * 2018-11-08 2019-03-26 歌尔科技有限公司 Exchange method, device and computer readable storage medium
CN113313836A (en) * 2021-04-26 2021-08-27 广景视睿科技(深圳)有限公司 Method for controlling virtual pet and intelligent projection equipment
WO2022227290A1 (en) * 2021-04-26 2022-11-03 广景视睿科技(深圳)有限公司 Method for controlling virtual pet and intelligent projection device

Also Published As

Publication number Publication date
WO2015004620A3 (en) 2015-05-14
WO2015004620A2 (en) 2015-01-15

Similar Documents

Publication Publication Date Title
CN104769645A (en) Virtual companion
Lillard Montessori: The science behind the genius
US20140125678A1 (en) Virtual Companion
McColl et al. Brian 2.1: A socially assistive robot for the elderly and cognitively impaired
Bijou et al. Methodology for experimental studies of young children in natural settings
CN108460707B (en) Intelligent supervision method and system for homework of students
US20120178065A1 (en) Advanced Button Application for Individual Self-Activating and Monitored Control System in Weight Loss Program
CN101648079B (en) Emotional doll
CN112199002A (en) Interaction method and device based on virtual role, storage medium and computer equipment
US11393357B2 (en) Systems and methods to measure and enhance human engagement and cognition
Williams Promoting independent learning in the primary classroom
Antony et al. Co-designing with older adults, for older adults: Robots to promote physical activity
Khosla et al. Socially assistive robot enabled home-based care for supporting people with autism
US20220319713A1 (en) Atmospheric mirroring and dynamically varying three-dimensional assistant addison interface for interior environments
US20220319714A1 (en) Atmospheric mirroring and dynamically varying three-dimensional assistant addison interface for behavioral environments
EP4109461A1 (en) Atmospheric mirroring and dynamically varying three-dimensional assistant addison interface for external environments
Koutsouris et al. InLife: a platform enabling the exploitation of IoT and gamification in healthcare
Khosravi et al. Learning enhancement in higher education with wearable technology
Kory-Westlund et al. Long-term interaction with relational SIAs
Rébola Designed technologies for healthy aging
Iseli Deaf Ni-Vanuatu and their signs: A sociolinguistic study
Saurio et al. Design Thinking and Welfare: A Focus on Information Design
Strandbech Humanoid robots for health and welfare: on humanoid robots as a welfare technology used in interaction with persons with dementia
Koushik Designing Customizable Smart Interfaces to Support People with Cognitive Disabilities in Daily Activities
Qiu et al. Innovative Strategies for Generative Art in the NFT Market: A Case Study of the Art Blocks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150708