CN105807929A - Virtual person as well as control system and device therefor - Google Patents


Info

Publication number
CN105807929A
CN105807929A (application CN201610135719.5A)
Authority
CN
China
Prior art keywords
module
control
virtual person
instruction
controlling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610135719.5A
Other languages
Chinese (zh)
Inventor
沈愉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201610135719.5A
Publication of CN105807929A
Legal status: Pending

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a virtual person as well as a control system and device therefor. The virtual person comprises a communication module, a processing module, a control module, a moving module, an information acquisition module, a display module and a start module. The communication module receives, over a first wireless communication network, control commands and/or first interaction information sent by the control system, sends a connection request over a second wireless communication network, or sends second interaction information over the first wireless communication network. The control module directs the corresponding module to execute the corresponding action according to the processed control command; the moving module drives the virtual person to move under the control of the control module; the information acquisition module gathers the second interaction information under the control of the control module; the display module shows the processed first interaction information; and the start module issues the connection request. A user can remotely control the virtual person with a portable control device or system, and can thereby not only obtain any required information from venues the user cannot reach in person, but also exchange information with the people present there.

Description

A virtual person and a control system and device therefor
Technical field
The invention belongs to the technical field of electronic communication and relates to an intelligent robot, and in particular to a virtual person and a control system and device therefor.
Background technology
With the development of high technology, the pace of life keeps accelerating and people have ever more things to do, so scheduling conflicts arise frequently. For example, Mr. Li is a middle manager in an enterprise; every day he has to make many decisions and also attend various meetings, exhibitions and so on. When two meetings are scheduled in different places at the same time, or only a short interval apart, he cannot attend both and has to choose one. Yet whichever meeting he gives up, he inevitably loses the chance to learn a great deal of information. Even today's highly developed transportation cannot solve this problem, because a person's time is limited after all, and one cannot be in two places at once.
Summary of the invention
In view of the above shortcomings of the prior art, it is an object of the present invention to provide a virtual person as well as a control system and device therefor, so as to solve the problem that a member of staff needs to be present at some venue but cannot attend because of a scheduling conflict.
To achieve the above and other related objects, the present invention provides a virtual person. The virtual person includes: a communication module, which receives, over a first wireless communication network, a start-up order, control commands and/or first interaction information sent by the control system, sends a connection request to the control system over a second wireless communication network, or sends second interaction information to the control system over the first wireless communication network; a processing module, connected to the communication module, which processes the control commands, the first interaction information, the connection request and/or the second interaction information, where the first interaction information and the second interaction information each include audio, video and/or picture information; a control module, connected to the processing module, which directs the corresponding module to execute the corresponding action according to the processed control command; a moving module, connected to the control module, which drives the virtual person to move under the control of the control module; an information acquisition module, connected to both the processing module and the control module, which gathers the second interaction information under the control of the control module; a display module, connected to both the processing module and the control module, which shows the processed first interaction information under the control of the control module, the display module being arranged at a local position of the virtual person or wrapped around the virtual person's whole body; and a start module, connected to both the processing module and the communication module, which sends the connection request, or receives the start-up order and automatically switches the virtual person into its working mode.
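The division of labour claimed above, where the control module routes each processed command to the module that carries it out, can be sketched as a simple dispatcher. All class and method names below are illustrative assumptions; the patent does not prescribe any implementation.

```python
# Hypothetical sketch of the control module's dispatch role; names are
# illustrative only, not from the patent.

class Module:
    """Base class for the virtual person's functional modules."""
    def perform(self, action: str) -> str:
        return f"{type(self).__name__}: {action}"

class MovingModule(Module): pass            # drives the virtual person
class InfoAcquisitionModule(Module): pass   # gathers second interaction info
class DisplayModule(Module): pass           # shows first interaction info

class ControlModule:
    """Routes a processed control command to the corresponding module."""
    def __init__(self) -> None:
        self.targets = {
            "move": MovingModule(),
            "acquire": InfoAcquisitionModule(),
            "display": DisplayModule(),
        }

    def execute(self, kind: str, action: str) -> str:
        return self.targets[kind].perform(action)

ctrl = ControlModule()
result = ctrl.execute("move", "forward 2m")  # "MovingModule: forward 2m"
```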
Optionally, the information acquisition module includes: a hearing module, representing the person's ears, which receives speech signals, the hearing module including a microphone; and a vision module, representing the person's eyes, which captures the scene within the line of sight, the vision module including a camera capable of 360-degree rotation and focusing.
Optionally, the moving module includes a walking module, representing the person's legs, which drives the virtual person to a desired location; the walking module includes rollers.
Optionally, the virtual person further includes: a voice module, representing the person's mouth, which outputs speech signals, the voice module including a loudspeaker; a face module, representing the person's face, which outputs facial expression information, with the vision module arranged at the eye positions of the face module, the voice module arranged at the mouth position of the face module, and the hearing module arranged at the ear positions of the face module; a neck submodule, representing the person's neck, connected to the face module, which drives the face module to rotate; an upper body module, representing the person's upper body, connected to the neck submodule and the walking module respectively; and/or an upper limb module, representing the person's arms, arranged at the arm positions of the upper body module.
Optionally, the control commands include movement control commands, acquisition control commands and display control commands. A movement control command includes the direction and distance of movement; an acquisition control command includes picture capture, audio capture and/or video capture; a display control command includes showing audio, showing video and/or showing pictures.
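The three command categories can be encoded along the following lines; the field names and the dictionary payload are assumptions made purely for illustration.

```python
# Hypothetical encoding of the claimed command taxonomy (illustrative only).
from dataclasses import dataclass
from enum import Enum

class CommandKind(Enum):
    MOVE = "move"        # carries a direction and a distance
    ACQUIRE = "acquire"  # picture, audio and/or video capture
    DISPLAY = "display"  # show audio, video and/or pictures

@dataclass
class ControlCommand:
    kind: CommandKind
    params: dict

# A movement control command with a direction and a distance, as claimed:
cmd = ControlCommand(CommandKind.MOVE, {"direction": "forward", "distance_m": 1.5})
```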
The present invention also provides a control system for a virtual person. The control system includes: a communication unit, which receives over the second wireless communication network the connection request sent by the virtual person, receives over the first wireless communication network the second interaction information sent by the virtual person, or sends control commands and/or the first interaction information to the virtual person over the first wireless communication network; a processing unit, connected to the communication unit, which processes the control commands, the first interaction information, the connection request and/or the second interaction information, where the first interaction information and the second interaction information each include audio, video and/or picture information; a display unit, connected to the processing unit; an input unit, connected to the processing unit, for entering the control commands; an information acquisition unit, connected to the processing unit, which gathers the first interaction information; and a switch unit, connected to both the processing unit and the communication unit, which responds to the processed connection request to establish a communication connection, or initiates a start-up order through the communication unit to switch on the virtual person automatically.
Optionally, the control commands include movement control commands, acquisition control commands and display control commands. A movement control command includes the direction and distance of movement; an acquisition control command includes picture capture, audio capture and/or video capture; a display control command includes showing audio, showing video and/or showing pictures.
Optionally, the display unit shows a control interface in the shape of a human figure. The humanoid control interface includes: a vision module control unit, which displays a control sub-interface shaped like a person's eyes, captures the operator's click on that sub-interface, generates a corresponding vision control instruction, and directs the vision module of the virtual person to execute the action indicated by the vision control instruction; a voice module control unit, which displays a control sub-interface shaped like a person's mouth, captures the operator's click on that sub-interface, generates a corresponding voice control instruction, and directs the voice module of the virtual person to execute the action indicated by the voice control instruction; a hearing module control unit, which displays a control sub-interface shaped like a person's ears, captures the operator's click on that sub-interface, generates a corresponding hearing control instruction, and directs the hearing module of the virtual person to execute the action indicated by the hearing control instruction; a walking module control unit, which displays a control sub-interface shaped like a person's legs, captures the operator's click on that sub-interface, generates a corresponding walking control instruction, and directs the walking module of the virtual person to execute the action indicated by the walking control instruction; a communication control unit, connected to the vision, voice, hearing and walking module control units as well as the communication unit, which wirelessly transmits the vision, voice, hearing and walking control instructions to the remote virtual person; a face module control unit, which displays a control sub-interface shaped like a person's facial expression, captures the operator's click on that sub-interface, generates a corresponding expression control instruction, and directs the face module of the virtual person to execute the action indicated by the expression control instruction; a neck module control unit, which displays a control sub-interface shaped like a person's neck, captures the operator's click on that sub-interface, generates a corresponding neck control instruction, and directs the neck submodule of the virtual person to execute the action indicated by the neck control instruction; an upper body module control unit, which displays a control sub-interface shaped like a person's upper body, captures the operator's click on that sub-interface, generates a corresponding upper body control instruction, and directs the upper body module of the virtual person to execute the action indicated by the upper body control instruction; and/or an upper limb module control unit, which displays a control sub-interface shaped like a person's arms, captures the operator's click on that sub-interface, generates a corresponding arm control instruction, and directs the upper limb module of the virtual person to execute the action indicated by the arm control instruction.
The present invention also provides a control device for a virtual person, which runs the control system for a virtual person according to any one of claims 6 to 8.
Optionally, the control device of the virtual person is a smartphone, a tablet (Pad) and/or a smartwatch.
As described above, the virtual person and its control system and device according to the present invention have the following beneficial effects:
The virtual person of the present invention can be placed at any venue where it is needed. A user can remotely control the virtual person with a portable control device or system, and can thereby not only obtain any required information from venues the user cannot reach in person, but also exchange information with the people present there, solving the problem that staff cannot be in two places at once.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of an implementation of the virtual person according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an outline structure of an implementation of the virtual person according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an implementation of the control system for the virtual person according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an implementation of the humanoid control interface of the control system for the virtual person according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of an implementation of the control device for the virtual person according to an embodiment of the present invention.
Description of reference numerals
100 Virtual person
110 Communication module
120 Processing module
130 Control module
140 Moving module
150 Information acquisition module
160 Display module
170 Start module
201 Vision module
202 Voice module
203 Hearing module
204 Walking module
205 Face module
206 Neck submodule
207 Upper body module
208 Upper limb module
300 Control system of the virtual person
310 Communication unit
320 Processing unit
330 Display unit
340 Input unit
350 Information acquisition unit
360 Switch unit
400 Control interface
401 Vision module control unit
402 Voice module control unit
403 Hearing module control unit
404 Walking module control unit
405 Communication control unit
406 Face module control unit
407 Neck module control unit
408 Upper body module control unit
409 Upper limb module control unit
Detailed description of the invention
The embodiments of the present invention are described below by way of specific examples, and those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention may also be implemented or applied through other different embodiments, and the details in this specification may be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present invention. It should be noted that, where no conflict arises, the following embodiments and the features in the embodiments may be combined with one another.
It should be noted that the drawings provided in the following embodiments only illustrate the basic concept of the present invention schematically; the drawings show only the components relevant to the present invention and are not drawn according to the number, shapes and sizes of the components in an actual implementation. In an actual implementation, the form, number and proportion of each component may vary freely, and the component layout may be considerably more complex.
Referring to Fig. 1, the present invention provides a virtual person. The virtual person 100 includes: a communication module 110, a processing module 120, a control module 130, a moving module 140, an information acquisition module 150, a display module 160 and a start module 170.
The communication module 110 receives, over a first wireless communication network, a start-up order, control commands and/or first interaction information sent by the control system; sends a connection request to the control system over a second wireless communication network; or sends second interaction information to the control system over the first wireless communication network. The communication module 110 performs long-range wireless communication with external devices. The scope of protection of the present invention is not limited to implementing the communication module 110 with wireless communication devices such as WiFi or 2G/3G/4G; any device or equipment capable of realizing the functions of the communication module 110 falls within the scope of the communication module.
The first wireless communication network includes a mobile communication network and WiFi; the second wireless communication network includes a mobile communication network. For example, after the virtual person sends a connection request to the control system over the mobile communication network, it receives the control system's response to the connection request fed back over the mobile communication network or WiFi, thereby establishing the communication connection between the virtual person and the control system. Alternatively, the control system directly sends a start-up order over the mobile communication network or WiFi, switching the virtual person on directly and establishing the communication connection. Further, the control system sends control commands and/or the first interaction information to the virtual person over the mobile communication network or WiFi, so that the virtual person executes the actions corresponding to the control commands and receives and/or displays the first interaction information. The first interaction information is the information the control system gathers at the controlling end, i.e. the audio, video and picture information in the environment of the control system, including that of the control system's user. The virtual person sends the second interaction information to the control system over the mobile communication network or WiFi; the second interaction information is the information the virtual person gathers at its own end, i.e. the audio, video and picture information in the environment of the virtual person, including that of the persons around the virtual person. The first interaction information and the second interaction information each include audio, video and/or picture information.
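The two set-up paths described above, a connection request answered with a response, or a direct start-up order, can be pictured as a small state machine; the state names and methods below are assumptions made for illustration only.

```python
# Illustrative state machine for the two connection set-up paths described
# above; state names are assumptions, not part of the patent.

class Link:
    def __init__(self) -> None:
        self.state = "idle"

    def request_connection(self) -> None:
        # Virtual person -> control system over the second network (mobile).
        if self.state == "idle":
            self.state = "requested"

    def respond(self) -> None:
        # Control system answers over the mobile network or WiFi.
        if self.state == "requested":
            self.state = "connected"

    def startup_order(self) -> None:
        # Control system switches the virtual person on directly.
        self.state = "connected"

# Path 1: connection request, then response.
link = Link()
link.request_connection()
link.respond()

# Path 2: direct start-up order.
direct = Link()
direct.startup_order()
```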
Further, the control commands include movement control commands, acquisition control commands and display control commands. A movement control command includes the direction and distance of movement; an acquisition control command includes picture capture, audio capture and/or video capture; a display control command includes showing audio, showing video and/or showing pictures.
The processing module 120 is connected to the communication module 110 and processes the control commands, the first interaction information, the connection request and/or the second interaction information. Processing a control command includes resolving the content of the control command and outputting the corresponding control instruction according to the resolved content, e.g. how the moving module should move and where it should move to. Processing the first interaction information includes decompressing and decoding it so that the display module can display it directly. Processing the second interaction information includes encoding and compressing it so that it can be sent. Processing the connection request includes packaging it so that it can be sent.
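The resolution step, turning a received control command into a concrete instruction for a module, might look roughly as follows; the JSON wire format and every field name are assumptions made purely for illustration.

```python
# Illustrative command resolution for the processing module. The wire format
# (a small JSON object) and all field names are assumptions.
import json

def parse_control_command(raw: bytes) -> dict:
    """Resolve a control command's content and emit a control instruction."""
    cmd = json.loads(raw.decode("utf-8"))
    if cmd["kind"] == "move":
        # e.g. how the moving module should move, and how far.
        return {"target": "moving_module",
                "direction": cmd["direction"],
                "distance_m": cmd["distance_m"]}
    raise ValueError(f"unknown command kind: {cmd['kind']}")

instr = parse_control_command(b'{"kind": "move", "direction": "left", "distance_m": 0.5}')
```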
The control module 130 is connected to the processing module 120 and directs the corresponding module to execute the corresponding action according to the processed control command.
The moving module 140 is connected to the control module 130 and drives the virtual person to move under the control of the control module. The moving module may be implemented by a variety of structures, such as a roller-based sliding device.
The information acquisition module 150 is connected to both the processing module 120 and the control module 130 and gathers the second interaction information under the control of the control module. The information acquisition module includes any device that can gather information, such as cameras, microphones and sensors.
The display module 160 is connected to both the processing module 120 and the control module 130 and shows the processed first interaction information under the control of the control module. The display module 160 can show the operator's image in real time. The display module 160 may be arranged at a local position of the virtual person, or may wrap around the virtual person's whole body. When the display module 160 wraps around the whole body, it can display a full-body image of the operator, so that to the people on site the virtual person looks just like the operator in person.
The start module 170 is connected to the processing module 120 and the communication module 110; it sends the connection request, or receives the start-up order and automatically switches the virtual person into its working mode. The start module may be a power button or a remote start device.
Further, as shown in Fig. 2, an exemplary outline structure of the virtual person includes: a vision module 201, a voice module 202, a hearing module 203, a walking module 204, a face module 205, a neck submodule 206, an upper body module 207, and/or an upper limb module 208.
The vision module 201 represents the person's eyes and captures the scene within the line of sight. The vision module 201 includes a camera capable of 360-degree rotation and focusing. The scope of protection of the present invention is not limited to implementing the vision module 201 with a camera; any device or equipment capable of realizing the functions of the vision module 201 falls within the scope of the vision module.
The voice module 202 represents the person's mouth and outputs speech signals. The voice module 202 includes a loudspeaker. The scope of protection of the present invention is not limited to implementing the voice module 202 with a loudspeaker; any device or equipment capable of realizing the functions of the voice module 202 falls within the scope of the voice module.
The hearing module 203 represents the person's ears and receives speech signals. The hearing module 203 includes a microphone. The scope of protection of the present invention is not limited to implementing the hearing module 203 with a microphone; any device or equipment capable of realizing the functions of the hearing module 203 falls within the scope of the hearing module.
The walking module 204 represents the person's legs and drives the virtual person to a desired location. The walking module 204 includes rollers. The scope of protection of the present invention is not limited to implementing the walking module 204 with rollers; any device or equipment capable of realizing the functions of the walking module 204 falls within the scope of the walking module.
The face module 205 represents the person's face and outputs facial expression information. The vision module is arranged at the eye positions of the face module; the voice module is arranged at the mouth position of the face module; the hearing module is arranged at the ear positions of the face module.
The neck submodule 206 represents the person's neck, is connected to the face module, and drives the face module to rotate.
The upper body module 207 represents the person's upper body and is connected to the neck submodule and the walking module respectively.
The upper limb module 208 represents the person's arms and is arranged at the arm positions of the upper body module.
The control module 130 is also connected to the vision module 201, voice module 202, hearing module 203, walking module 204, face module 205, neck submodule 206, upper body module 207 and/or upper limb module 208, and coordinates the work of all these modules.
The scope of protection of the virtual person of the present invention is not limited to the module structure enumerated in this embodiment; any scheme realized by adding, removing or replacing modules of the prior art according to the principles of the present invention falls within the scope of protection of the present invention.
The present invention also provides a control system for a virtual person. The control system can realize the remote control of the virtual person of the present invention, but the devices implementing the control system of the present invention are not limited to the structure enumerated in this embodiment; any structural variation or replacement of the prior art made according to the principles of the present invention falls within the scope of protection of the present invention.
As shown in Fig. 3, the control system 300 of the virtual person includes: a communication unit 310, a processing unit 320, a display unit 330, an input unit 340, an information acquisition unit 350 and a switch unit 360.
The communication unit 310 receives over the second wireless communication network the connection request sent by the virtual person, receives over the first wireless communication network the second interaction information sent by the virtual person, or sends control commands and/or the first interaction information to the virtual person over the first wireless communication network.
The first wireless communication network includes a mobile communication network and WiFi; the second wireless communication network includes a mobile communication network. For example, after the control system receives over the mobile communication network the connection request sent by the virtual person, the control system feeds back a response to the connection request over the mobile communication network or WiFi, thereby establishing the communication connection between the virtual person and the control system. Alternatively, the control system directly sends a start-up order over the mobile communication network or WiFi, switching the virtual person on directly and establishing the communication connection. Further, the control system sends control commands and/or the first interaction information to the virtual person over the mobile communication network or WiFi, so that the virtual person executes the actions corresponding to the control commands and receives and/or displays the first interaction information. The first interaction information is the information the control system gathers at the controlling end, i.e. the audio, video and picture information in the environment of the control system, including that of the control system's user. The virtual person sends the second interaction information to the control system over the mobile communication network or WiFi; the second interaction information is the information the virtual person gathers at its own end, i.e. the audio, video and picture information in the environment of the virtual person, including that of the persons around the virtual person. The first interaction information and the second interaction information each include audio, video and/or picture information.
Further, the control commands include movement control commands, acquisition control commands, and display control commands. A movement control command includes the direction and distance of movement; an acquisition control command includes picture acquisition, audio acquisition and/or video acquisition; a display control command includes displaying audio, displaying video and/or displaying pictures.
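The command taxonomy above can be modeled with a small data structure; the field and enum names are illustrative assumptions, not terms defined by the patent.

```python
# A minimal data model for the three command families described above.
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    MOVE = "move"        # carries direction and distance
    ACQUIRE = "acquire"  # picture / audio / video acquisition
    DISPLAY = "display"  # display audio / video / picture

@dataclass
class ControlCommand:
    kind: Kind
    params: dict

# Example commands of each family:
move = ControlCommand(Kind.MOVE, {"direction": "forward", "distance_m": 2.0})
acquire = ControlCommand(Kind.ACQUIRE, {"media": "video"})
display = ControlCommand(Kind.DISPLAY, {"media": "picture"})
```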
The processing unit 320 is connected to the communication unit 310 and processes the control commands, the first interactive information, the connection request and/or the second interactive information. Processing a control command includes compressing and encoding its content so that it can be sent; processing the first interactive information likewise includes compressing and encoding it for sending; processing the second interactive information includes decompressing and decoding it so that it can be displayed.
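The symmetric encode-then-compress / decompress-then-decode pipeline described above can be sketched as follows; `json` and `zlib` here are stand-ins for whatever codec an actual implementation would use.

```python
# Outbound data (control commands, first interactive information) is
# encoded and compressed; inbound data (second interactive information)
# is decompressed and decoded for display.
import json
import zlib

def encode_for_send(payload: dict) -> bytes:
    """Encode and compress a payload before transmission."""
    return zlib.compress(json.dumps(payload).encode("utf-8"))

def decode_received(blob: bytes) -> dict:
    """Decompress and decode a received payload for display."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))

cmd = {"kind": "move", "direction": "forward", "distance_m": 2.0}
assert decode_received(encode_for_send(cmd)) == cmd  # round-trip is lossless
```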
The display unit 330 is connected to the processing unit 320 and can be displayed as a control interface in the form of a human figure.
As shown in Figure 4, one implementation of the humanoid control interface 400 includes: a vision module control unit 401, a speech module control unit 402, a hearing module control unit 403, a walking module control unit 404, a communication control unit 405, a face module control unit 406, a neck module control unit 407, an upper-body module control unit 408, and an upper-limb module control unit 409. In practical applications, the humanoid control interface 400 is not limited to the units enumerated in this embodiment; it may include only some of these control units, or all of them.
The vision module control unit 401 is displayed as a control sub-interface in the form of human eyes. It captures the operator's click on this sub-interface, generates a corresponding vision control instruction, and controls the vision module of the virtual person to perform the action the instruction indicates.
The speech module control unit 402 is displayed as a control sub-interface in the form of a human mouth. It captures the operator's click on this sub-interface, generates a corresponding speech control instruction, and controls the speech module of the virtual person to perform the action the instruction indicates.
The hearing module control unit 403 is displayed as a control sub-interface in the form of human ears. It captures the operator's click on this sub-interface, generates a corresponding hearing control instruction, and controls the hearing module of the virtual person to perform the action the instruction indicates.
The walking module control unit 404 is displayed as a control sub-interface in the form of human legs. It captures the operator's click on this sub-interface, generates a corresponding walking control instruction, and controls the walking module of the virtual person to perform the action the instruction indicates.
The communication control unit 405 is connected to the vision module control unit 401, the speech module control unit 402, the hearing module control unit 403, the walking module control unit 404, and so on; it wirelessly transmits the vision, speech, hearing, and walking control instructions to the remote virtual person.
The face module control unit 406 is displayed as a control sub-interface in the form of a human facial expression. It captures the operator's click on this sub-interface, generates a corresponding expression control instruction, and controls the face module of the virtual person to perform the action the instruction indicates.
The neck module control unit 407 is displayed as a control sub-interface in the form of a human neck. It captures the operator's click on this sub-interface, generates a corresponding neck control instruction, and controls the neck sub-module of the virtual person to perform the action the instruction indicates.
The upper-body module control unit 408 is displayed as a control sub-interface in the form of a human upper body. It captures the operator's click on this sub-interface, generates a corresponding upper-body control instruction, and controls the upper-body module of the virtual person to perform the action the instruction indicates.
The upper-limb module control unit 409 is displayed as a control sub-interface in the form of human arms. It captures the operator's click on this sub-interface, generates a corresponding arm control instruction, and controls the upper-limb module of the virtual person to perform the action the instruction indicates.
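The units above all follow one pattern: a click on a body-part sub-interface generates an instruction for the matching module of the virtual person. A dispatch table makes this explicit; the region and instruction names below are illustrative assumptions mirroring units 401 through 409.

```python
# Map each clickable body-part region of the humanoid interface to the
# instruction the communication control unit would transmit wirelessly
# to the remote virtual person.
REGION_TO_INSTRUCTION = {
    "eyes": "vision_instruction",
    "mouth": "speech_instruction",
    "ears": "hearing_instruction",
    "legs": "walking_instruction",
    "face": "expression_instruction",
    "neck": "neck_instruction",
    "upper_body": "upper_body_instruction",
    "arms": "arm_instruction",
}

def on_click(region: str) -> str:
    """Generate the control instruction for a clicked sub-interface."""
    try:
        return REGION_TO_INSTRUCTION[region]
    except KeyError:
        raise ValueError(f"no control sub-interface for region {region!r}")

assert on_click("eyes") == "vision_instruction"
```

A table-driven dispatcher like this also matches the patent's note that an interface may implement only a subset of the control units: unsupported regions are simply absent from the table.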
The input unit 340 is connected to the processing unit 320 and is used to input the control commands.
The information acquisition unit 350 is connected to the processing unit 320 and collects the first interactive information. The information acquisition unit may include any device capable of collecting information, such as a camera, a microphone, or sensors.
The switch unit 360 is connected to the processing unit 320. It responds to the processed connection request to establish a communication connection, or initiates a startup command through the communication unit 310 to switch on the virtual person automatically.
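The switch unit's two roles can be sketched as a small state holder; the class and method names are illustrative assumptions.

```python
# The switch unit either responds to an already-processed connection
# request, or delegates a startup command to the communication unit,
# which switches the remote virtual person on.
class SwitchUnit:
    def __init__(self):
        self.link_up = False

    def respond_to_request(self, request_ok: bool) -> bool:
        # Establish the communication connection only if the processed
        # connection request is valid.
        self.link_up = bool(request_ok)
        return self.link_up

    def initiate_startup(self, comm_send) -> None:
        # comm_send stands in for the communication unit's transmit path.
        comm_send("startup")
        self.link_up = True

sent = []
su = SwitchUnit()
su.initiate_startup(sent.append)
assert sent == ["startup"] and su.link_up
```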
The virtual person described in this embodiment of the invention can be controlled remotely by the control system, enabling it to carry out various social activities just as an ordinary person would, such as greeting, chatting, watching, and listening.
This embodiment also provides a control device for the virtual person. As shown in Figure 5, the control device 500 runs the control system 300 of the virtual person. The control device may be a smartphone, a tablet (Pad), and/or a smartwatch.
The virtual person of the present invention can be placed in any setting where it is needed. Using a control device or control system carried on their person, a user can remotely operate the virtual person, not only obtaining whatever information they need from a place they cannot reach themselves, but also exchanging information with the people there, thus solving the problem of one person being unable to be in two places at once.
Acting as the controller's stand-in, the virtual person of the present invention can take part in meetings or occasions of all kinds, obtain the required information in a timely manner, communicate with the other participants, and act on the controller's behalf, again solving the problem of one person being unable to be in two places at once.
In summary, the present invention effectively overcomes various shortcomings of the prior art and therefore has high value for industrial application.
The above embodiments merely illustrate the principles of the present invention and its effects; they are not intended to limit the invention. Anyone skilled in the art may modify or alter the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or alterations completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (10)

1. A virtual person, characterized in that the virtual person comprises:
a communication module, which receives a startup command, control commands and/or first interactive information sent by a control system via a first wireless communication network, sends a connection request to the control system via a second wireless communication network, or sends second interactive information to the control system via the first wireless communication network;
a processing module, connected to the communication module, which processes the control commands, the first interactive information, the connection request and/or the second interactive information, the first interactive information and the second interactive information each comprising audio, video and/or picture information;
a control module, connected to the processing module, which, according to a processed control command, controls the corresponding module to perform the corresponding action;
a mobile module, connected to the control module, which drives the virtual person to move under the control of the control module;
an information acquisition module, connected to the processing module and the control module respectively, which collects the second interactive information under the control of the control module;
a display module, connected to the processing module and the control module respectively, which displays the processed first interactive information under the control of the control module, the display module being arranged at a local position of the virtual person, or surrounding the virtual person's whole body;
a switch-on module, connected to the processing module and the communication module respectively, which sends the connection request, or receives the startup command and automatically switches the virtual person into its working mode.
2. The virtual person according to claim 1, characterized in that the information acquisition module comprises:
a hearing module, representing a person's ears, for inputting speech signals, the hearing module comprising a microphone;
a vision module, representing a person's eyes, for capturing the scene within the line of sight, the vision module comprising a camera capable of 360-degree rotation and focusing.
3. The virtual person according to claim 1, characterized in that the mobile module comprises:
a walking module, representing a person's legs, which drives the virtual person to a desired location, the walking module comprising rollers.
4. The virtual person according to claim 3, characterized in that the virtual person further comprises:
a speech module, representing a person's mouth, for outputting speech signals, the speech module comprising a loudspeaker;
a face module, representing a person's face, for outputting facial expression information, the vision module being arranged at the eye position of the face module, the speech module at the mouth position of the face module, and the hearing module at the ear position of the face module;
a neck sub-module, representing a person's neck, connected to the face module and driving the face module to rotate;
an upper-body module, representing a person's upper body, connected to the neck sub-module and the walking module respectively; and/or
an upper-limb module, representing a person's arms, arranged at the arm positions of the upper-body module.
5. The virtual person according to claim 1, characterized in that the control commands include movement control commands, acquisition control commands, and display control commands; a movement control command includes the direction and distance of movement; an acquisition control command includes picture acquisition, audio acquisition and/or video acquisition; and a display control command includes displaying audio, displaying video and/or displaying pictures.
6. A control system for a virtual person, characterized in that the control system comprises:
a communication unit, which receives, via a second wireless communication network, the connection request sent by the virtual person, receives, via a first wireless communication network, the second interactive information sent by the virtual person, or transmits, via the first wireless communication network, control commands and/or first interactive information to the virtual person;
a processing unit, connected to the communication unit, which processes the control commands, the first interactive information, the connection request and/or the second interactive information, the first interactive information and the second interactive information each comprising audio, video and/or picture information;
a display unit, connected to the processing unit;
an input unit, connected to the processing unit, for inputting the control commands;
an information acquisition unit, connected to the processing unit, which collects the first interactive information;
a switch unit, connected to the processing unit and the communication unit respectively, which responds to the processed connection request to establish a communication connection, or initiates a startup command through the communication unit to switch on the virtual person automatically.
7. The control system for a virtual person according to claim 6, characterized in that the control commands include movement control commands, acquisition control commands, and display control commands; a movement control command includes the direction and distance of movement; an acquisition control command includes picture acquisition, audio acquisition and/or video acquisition; and a display control command includes displaying audio, displaying video and/or displaying pictures.
8. The control system for a virtual person according to claim 6, characterized in that the display unit is displayed as a control interface in the form of a human figure, the humanoid control interface comprising:
a vision module control unit, displayed as a control sub-interface in the form of human eyes, which captures the operator's click on this sub-interface, generates a corresponding vision control instruction, and controls the vision module of the virtual person to perform the action the instruction indicates;
a speech module control unit, displayed as a control sub-interface in the form of a human mouth, which captures the operator's click on this sub-interface, generates a corresponding speech control instruction, and controls the speech module of the virtual person to perform the action the instruction indicates;
a hearing module control unit, displayed as a control sub-interface in the form of human ears, which captures the operator's click on this sub-interface, generates a corresponding hearing control instruction, and controls the hearing module of the virtual person to perform the action the instruction indicates;
a walking module control unit, displayed as a control sub-interface in the form of human legs, which captures the operator's click on this sub-interface, generates a corresponding walking control instruction, and controls the walking module of the virtual person to perform the action the instruction indicates;
a communication control unit, connected to the vision module control unit, the speech module control unit, the hearing module control unit, the walking module control unit and the communication unit respectively, which wirelessly transmits the vision, speech, hearing and walking control instructions to the remote virtual person;
a face module control unit, displayed as a control sub-interface in the form of a human facial expression, which captures the operator's click on this sub-interface, generates a corresponding expression control instruction, and controls the face module of the virtual person to perform the action the instruction indicates;
a neck module control unit, displayed as a control sub-interface in the form of a human neck, which captures the operator's click on this sub-interface, generates a corresponding neck control instruction, and controls the neck sub-module of the virtual person to perform the action the instruction indicates;
an upper-body module control unit, displayed as a control sub-interface in the form of a human upper body, which captures the operator's click on this sub-interface, generates a corresponding upper-body control instruction, and controls the upper-body module of the virtual person to perform the action the instruction indicates; or
an upper-limb module control unit, displayed as a control sub-interface in the form of human arms, which captures the operator's click on this sub-interface, generates a corresponding arm control instruction, and controls the upper-limb module of the virtual person to perform the action the instruction indicates.
9. A control device for a virtual person, characterized in that the control device runs the control system for a virtual person according to any one of claims 6 to 8.
10. The control device for a virtual person according to claim 9, characterized in that the control device is a smartphone, a tablet (Pad) and/or a smartwatch.
CN201610135719.5A 2016-03-10 2016-03-10 Virtual person as well as control system and device therefor Pending CN105807929A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610135719.5A CN105807929A (en) 2016-03-10 2016-03-10 Virtual person as well as control system and device therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610135719.5A CN105807929A (en) 2016-03-10 2016-03-10 Virtual person as well as control system and device therefor

Publications (1)

Publication Number Publication Date
CN105807929A true CN105807929A (en) 2016-07-27

Family

ID=56468067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610135719.5A Pending CN105807929A (en) 2016-03-10 2016-03-10 Virtual person as well as control system and device therefor

Country Status (1)

Country Link
CN (1) CN105807929A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106502390A (en) * 2016-10-08 2017-03-15 华南理工大学 A kind of visual human's interactive system and method based on dynamic 3D Handwritten Digit Recognitions
CN106940594A (en) * 2017-02-28 2017-07-11 深圳信息职业技术学院 A kind of visual human and its operation method
CN111443853A (en) * 2020-03-25 2020-07-24 北京百度网讯科技有限公司 Digital human control method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002046088A (en) * 2000-08-03 2002-02-12 Matsushita Electric Ind Co Ltd Robot device
CN103631221A (en) * 2013-11-20 2014-03-12 华南理工大学广州学院 Teleoperated service robot system
CN104656653A (en) * 2015-01-15 2015-05-27 长源动力(北京)科技有限公司 Interactive system and method based on robot

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002046088A (en) * 2000-08-03 2002-02-12 Matsushita Electric Ind Co Ltd Robot device
CN103631221A (en) * 2013-11-20 2014-03-12 华南理工大学广州学院 Teleoperated service robot system
CN104656653A (en) * 2015-01-15 2015-05-27 长源动力(北京)科技有限公司 Interactive system and method based on robot

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106502390A (en) * 2016-10-08 2017-03-15 华南理工大学 A kind of visual human's interactive system and method based on dynamic 3D Handwritten Digit Recognitions
CN106502390B (en) * 2016-10-08 2019-05-14 华南理工大学 A kind of visual human's interactive system and method based on dynamic 3D Handwritten Digit Recognition
CN106940594A (en) * 2017-02-28 2017-07-11 深圳信息职业技术学院 A kind of visual human and its operation method
CN106940594B (en) * 2017-02-28 2019-11-22 深圳信息职业技术学院 A kind of visual human and its operation method
CN111443853A (en) * 2020-03-25 2020-07-24 北京百度网讯科技有限公司 Digital human control method and device
CN111443853B (en) * 2020-03-25 2021-07-20 北京百度网讯科技有限公司 Digital human control method and device

Similar Documents

Publication Publication Date Title
CN106302427B (en) Sharing method and device in reality environment
CN105306868B (en) Video conferencing system and method
US20200322506A1 (en) Image processing system, non-transitory recording medium, and image processing method
CN109976690A (en) AR glasses remote interaction method, device and computer-readable medium
CN101049017A Tele-robotic videoconferencing in a corporate environment
KR20150003711A (en) Spectacles having a built-in computer
JP2017511615A (en) Video interaction between physical locations
CN205068298U (en) Interaction system is wandered to three -dimensional scene
CN107452119A (en) virtual reality real-time navigation method and system
US20220301270A1 (en) Systems and methods for immersive and collaborative video surveillance
CN105807929A (en) Virtual person as well as control system and device therefor
CN108431872A (en) A kind of method and apparatus of shared virtual reality data
WO2018216355A1 (en) Information processing apparatus, information processing method, and program
CN107329268A (en) A kind of utilization AR glasses realize the shared method in sight spot
KR102512855B1 (en) Information processing device and information processing method
CN105472358A (en) Intelligent terminal about video image processing
US20120284652A1 (en) Human-environment interactive system and portable device using the same
CN106127534A (en) A kind of exhibition system based on virtual reality
CN105892627A (en) Virtual augmented reality method and apparatus, and eyeglass or helmet using same
CN214097971U (en) AR glasses system based on wireless Bluetooth transmission protocol
CN110133852A (en) Intelligence beautification glasses
CN104777899A (en) Method for video stream redirection among networked computer devices
Soujanya et al. Virtual reality meets IoT through telepresence
CN104023005A (en) Virtual conference system
Cochrane et al. Telepresence-visual telecommunications into the next century

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160727