CN105759650A - Method used for intelligent robot system to achieve real-time face tracking - Google Patents


Info

Publication number: CN105759650A
Application number: CN201610159030.6A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 贾梓筠, 韩冬
Assignee: Beijing Guangnian Wuxian Technology Co Ltd (original and current)
Application filed by Beijing Guangnian Wuxian Technology Co Ltd
Legal status: Pending

Classifications

    • G — Physics
    • G05 — Controlling; Regulating
    • G05B — Control or regulating systems in general; functional elements of such systems; monitoring or testing arrangements for such systems or elements
    • G05B 19/00 — Programme-control systems
    • G05B 19/02 — Programme-control systems, electric
    • G05B 19/04 — Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06V — Image or video recognition or understanding
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 — Detection; Localisation; Normalisation


Abstract

The invention discloses a method for an intelligent robot system to track a face in real time, comprising the steps of: receiving a multimodal input instruction, and calling a camera according to the multimodal input instruction to obtain a detection image containing a face; using a processor on an Android board to obtain position information of the face in the detection image, and making a judgment based on the position information and a preset face position; when the position of the face in the detection image is inconsistent with the preset face position, using a processor on a main control board to control the robot to move while simultaneously producing the multimodal output corresponding to the multimodal input instruction; and re-acquiring a detection image containing the face, obtaining the position information of the face in the detection image, and making the judgment again until the position of the face in the detection image is consistent with the preset face position. The method simplifies the tracking algorithm, reduces system cost, and achieves real-time, continuous face tracking.

Description

A method for an intelligent robot system to track a face in real time
Technical field
The present invention relates to the field of intelligent robotics, and in particular to a method for an intelligent robot system to track a face in real time.
Background technology
With the development of robotics, intelligent robot products have penetrated ever more aspects of daily life. Robots are not only used to help users complete specified work efficiently; increasingly they are designed to be partners that interact with users through language, action and emotion.
In human-to-human interaction, a common mode is face-to-face exchange, because it makes it easier to understand the other party's intention and respond to their emotional expression. Likewise, in intelligent robotics the face, as an important visual pattern, can convey a user's age, gender, identity, and most emotional information. Therefore, by locating and tracking the face during interaction, an intelligent robot can collect and analyse facial information more effectively, understand the user's intention more accurately, and improve the human-machine interaction experience.
Face detection and tracking in the prior art is mainly based on modelling and analysis with image-processing techniques, combined with a tracking algorithm. The modelling methods and tracking algorithms adopted generally require complex computation, occupy substantial system resources, and make real-time continuous face tracking difficult to achieve.
To solve the above problems, a new face tracking method that tracks the face in real time is urgently needed.
Summary of the invention
One of the technical problems to be solved is the need to provide a new face tracking method that tracks a face in real time.
To solve the above technical problem, an embodiment of the present application provides a method for an intelligent robot system to track a face in real time, comprising: receiving a multimodal input instruction, and calling a camera according to the multimodal input instruction to obtain a detection image containing a face; using a processor on an Android board to obtain position information of the face in the detection image, and making a judgment based on the position information and a preset face position; when the position of the face in the detection image is inconsistent with the preset face position, using a processor on a main control board to control the robot to move while simultaneously producing the multimodal output corresponding to the multimodal input instruction; and re-acquiring a detection image containing the face, obtaining the position information of the face in the detection image, and making the judgment based on the position information and the preset face position again, until the position of the face in the detection image is consistent with the preset face position.
Preferably, when the position of the face in the detection image is inconsistent with the preset face position, the method further comprises: the processor on the Android board sending a direction regulating command to the processor on the main control board based on a preset communication protocol; and the processor on the main control board processing the direction regulating command based on the preset communication protocol to obtain a control instruction for controlling the robot to move.
Preferably, when the position of the face in the detection image is consistent with the preset face position, the method further comprises: the processor on the Android board sending a stop regulating command to the processor on the main control board based on the preset communication protocol; and the processor on the main control board processing the stop regulating command based on the preset communication protocol, making the robot stop moving.
Preferably, the step of using the processor on the Android board to obtain the position information of the face in the detection image comprises: identifying the face in the detection image and determining the face to be tracked; and resolving the position of the face to be tracked to obtain its position information.
Preferably, the step of resolving the position to obtain the position information of the face to be tracked comprises: establishing a rectangular coordinate system with the preset face position as the origin; and determining the position of the face to be tracked in the rectangular coordinate system.
Preferably, in the step of making the judgment based on the position information and the preset face position, when the face to be tracked is at the origin of the rectangular coordinate system, the position of the face in the detection image is judged to be consistent with the preset face position; otherwise it is judged to be inconsistent with the preset face position.
Preferably, when the face in the detection image is identified for the first time and the face to be tracked is determined, the method further comprises storing the feature information of the face to be tracked.
Preferably, when the feature information of the face identified in the detection image differs from the stored feature information of the face to be tracked, the processor on the Android board sends a pause regulating command to the processor on the main control board based on the preset communication protocol, the processor on the main control board makes the robot pause its movement, and preset multimodal interactive information is output at the same time.
Compared with the prior art, one or more embodiments of the above scheme may have the following advantages or beneficial effects:
By using the position of the face in the detection image to form a feedback closed loop that tracks the face during interaction, the tracking algorithm can be simplified, system cost is significantly reduced, real-time continuous face tracking becomes achievable, and the human-machine interaction experience is improved.
Other advantages, objectives and features of the present invention will be set forth to some extent in the following description, will to some extent be apparent to those skilled in the art on studying it, or may be learned from practice of the invention. The objectives and other advantages of the invention can be realised and obtained through the following description, the claims, and the structures particularly pointed out in the accompanying drawings.
Accompanying drawing explanation
The accompanying drawings provide a further understanding of the technical solutions of the present application or of the prior art, and form part of the description. The drawings expressing the embodiments of the present application serve, together with the embodiments, to explain the technical solutions of the application, but do not constitute a limitation on them.
Fig. 1 is a flow diagram of a method for an intelligent robot system to track a face in real time according to an embodiment of the invention;
Fig. 2 is a structural diagram of an intelligent robot system capable of tracking a face in real time according to an embodiment of the invention;
Fig. 3a is a schematic diagram of obtaining the position information of a face according to an embodiment of the invention, and Fig. 3b is a schematic diagram of determining a direction regulating command according to an embodiment of the invention;
Fig. 4 is a flow diagram of a method for an intelligent robot system to track a face in real time according to another embodiment of the invention.
Detailed description of the invention
Embodiments of the present invention are described in detail below with reference to the drawings and examples, so that how the invention applies technical means to solve the technical problem, and the process by which the relevant technical effect is achieved, can be fully understood and implemented accordingly. The features of the embodiments of the present application can be combined with each other provided they do not conflict, and the resulting technical solutions all fall within the protection scope of the present invention.
For an intelligent robot system, expression information is input mainly through recognition of the images acquired by the camera. To determine an expression input instruction accurately, the face must therefore be captured completely and clearly, which requires the system to track the face in real time while interacting with the user. At the same time, the face-tracking requirement of a robot system differs from that of a precision servo system: the photographed subject only needs to be stably and completely positioned within the camera's coverage for facial information to be collected and analysed effectively. The present invention therefore proposes a face-tracking method based on closed-loop control formed from the face's position in the image, balancing the two-sided requirements of tracking precision and real-time performance. It is described in detail below with specific embodiments.
Embodiment one:
Fig. 1 is a flow diagram of the method for an intelligent robot system to track a face in real time according to an embodiment of the invention, and Fig. 2 is a structural diagram of an intelligent robot system capable of real-time face tracking according to an embodiment of the invention. As seen from Fig. 2, the intelligent robot system 20 mainly comprises a camera 21, an Android board 22, a main control board 23 and an execution device 24.
As further shown in Fig. 2, the Android board 22 is mainly provided with a first processor 221, together with a data transmitting circuit 222 and a storage unit 223 connected to the first processor; the main control board 23 is mainly provided with a second processor 231, together with a data receiving circuit 232, a motor drive control circuit 233 and a steering-gear output interface circuit 234 connected to the second processor 231. The camera 21 is connected to the Android board 22 through the Mobile Industry Processor Interface (MIPI) of the first processor 221; the Android board 22 and the main control board 23 communicate through the data transmitting circuit 222 and the data receiving circuit 232; and the main control board 23 is also connected to the execution device 24, sending drive control signals to the motor and the steering gear.
The camera 21 is the sensing element the robot system uses to collect multimodal input information such as video and images. A camera is generally built on CCD or CMOS technology; it uses silicon photodiodes to perform photoelectric conversion, so that the collected optical image information can be converted into electronic digital signals. The coverage of the camera 21 is usually limited: once the photographed target lies beyond that coverage, complete and clear image information can no longer be collected, which in turn affects the recognition and processing of the image information.
The real-time face-tracking method of this embodiment is triggered by a multimodal input instruction, as shown in step S110 of Fig. 1. That is, when the robot system receives a multimodal input instruction from the user, it calls the camera to obtain a detection image containing the face and starts tracking the face. This embodiment does not limit which multimodal input instructions may trigger face tracking: any multimodal input that the robot system can recognise as valid input information may trigger it, for instance one or more of voice input, action input, and expression input containing a specific emotion. In other words, the robot system starts tracking the face only when it judges that the multimodal input instruction belongs to a valid interaction, which matches the practical situation of interpersonal interaction and helps improve the interactive experience. After the camera 21 applies simple pre-processing to the collected image, the processed image data is stored in the storage unit 223 on the Android board 22.
The first processor 221 reads the detection-image data from the storage unit 223, processes it further, and obtains from it the position information of the face in the detection image, as shown in step S120 of Fig. 1. Specifically, the first processor 221 first identifies the face information contained in the detection image using a face recognition method and determines the face to be tracked. This embodiment does not limit the face recognition method adopted; mature general-purpose recognition methods from the prior art can be used. After the face to be tracked is determined, the first processor 221 further resolves its position in the detection image to obtain its position information. In this embodiment, the position information of the face to be tracked is obtained relative to the preset face position.
Specifically, a rectangular coordinate system is first established with the preset face position as the origin, and the position of the face to be tracked in that coordinate system is then determined. The preset face position can be chosen as an optimal position within the coverage of the camera 21 that makes it easy to obtain a complete and clear detection image, for instance the centre of the detection image.
As shown in Fig. 3a, when the face to be tracked lies at pixel block A, its position in the rectangular coordinate system is determined and recorded as the second quadrant. When the face to be tracked lies at pixel block B, its position is determined and recorded as on the vertical axis. When the face to be tracked lies at the origin of the coordinate system, the position of the face in the detection image can be judged consistent with the preset face position.
Further, when the face to be tracked lies anywhere in the coordinate system other than the origin, the position of the face in the detection image is judged inconsistent with the preset face position, and the face position must be made to move toward the origin. As shown in Fig. 3b, when the face to be tracked lies at pixel block A, it is moved downward and to the right; when it lies at pixel block B, it is moved upward along the vertical axis. It can be seen that the method of this embodiment only judges which region of the coordinate system the face to be tracked occupies: there is no need to compute the face's position coordinates in the detection image precisely, nor to compute the distance to move; the direction of movement follows from the region the face occupies. The face in the detection image can thus be made to move toward the preset face position, achieving face tracking with a simplified tracking algorithm.
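The region-based decision above can be sketched in a few lines. This is an illustrative sketch, not code from the patent; the function name, the coordinate convention (origin at the preset face position, y increasing upward) and the tolerance parameter are assumptions. It returns only direction words, mirroring the point that no distances or precise coordinates are computed.

```python
def direction_command(x: float, y: float, tol: float = 0.0) -> list[str]:
    """Coarse directions the face must move, in image terms, to reach
    the origin (the preset face position). Only the sign of each
    coordinate matters -- no distance is computed."""
    cmds = []
    if x < -tol:                 # face left of the preset position
        cmds.append("right")
    elif x > tol:                # face right of the preset position
        cmds.append("left")
    if y > tol:                  # face above the preset position
        cmds.append("down")
    elif y < -tol:               # face below the preset position
        cmds.append("up")
    return cmds                  # empty list: face already at the origin
```

Under this convention, a face at pixel block A in the second quadrant (x < 0, y > 0) yields the two directions "right" and "down", matching the adjustment of Fig. 3b.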
When the position of the face to be tracked in the detection image is inconsistent with the preset face position, the first processor 221 determines a direction regulating command according to the above adjustment process, encapsulates it in the prescribed format based on the preset communication protocol, and hands it to the data transmitting circuit 222. Because a direction regulating command only contains direction indications such as up, down, left and right, the communication protocol is simple and easy to transmit, which helps achieve real-time continuous tracking.
The data transmitting circuit 222 sends the formatted direction regulating command to the data receiving circuit 232 on the main control board 23; the two circuits communicate based on the preset communication protocol. The second processor 231 obtains the direction regulating command via the data receiving circuit 232 and processes it according to the preset communication protocol, obtaining the control instruction for controlling the robot's movement.
The control instruction comprises several instructions controlling the robot's movement, for instance a motor drive instruction controlling the robot's walking and a steering-gear drive instruction controlling the camera's rotation. The second processor 231 passes these to the motor drive control circuit 233 and the steering-gear output interface circuit 234 respectively for execution. Since a direction regulating command only contains direction information and no determined displacement, when driving the motor and the steering gear, the motor drive control circuit 233 moves the robot a preset step length in the direction specified by the command, and the steering-gear output interface circuit 234 rotates the camera a preset angle in that direction.
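Because a direction regulating command carries only direction flags and no displacement, a very small frame format suffices for the board-to-board link. The patent does not specify a wire format; the header byte, bit assignments and checksum below are invented purely for illustration.

```python
import struct

# Assumed bit assignments and sync byte -- not from the patent.
DIRS = {"up": 0x01, "down": 0x02, "left": 0x04, "right": 0x08,
        "stop": 0x10, "pause": 0x20}
HEADER = 0xA5

def pack_command(*names: str) -> bytes:
    """Encapsulate direction words into a 3-byte frame: header, flags, checksum."""
    flags = 0
    for name in names:
        flags |= DIRS[name]
    checksum = (HEADER + flags) & 0xFF
    return struct.pack("BBB", HEADER, flags, checksum)

def unpack_command(frame: bytes) -> list[str]:
    """Decode a frame back into direction words, verifying the checksum."""
    header, flags, checksum = struct.unpack("BBB", frame)
    if header != HEADER or checksum != (header + flags) & 0xFF:
        raise ValueError("malformed frame")
    return [name for name, bit in DIRS.items() if flags & bit]
```

A three-byte frame like this illustrates why such a protocol is cheap enough to keep the feedback loop running continuously.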
When adjusting with a preset step length and a preset angle, the preset position generally cannot be reached in a single adjustment. Therefore a detection image containing the face is re-acquired, the position information of the face in the new detection image is obtained, the judgment against the preset face position is made again, and a new direction regulating command and control instruction are produced to control the robot's movement. Repeating this process brings the face position gradually closer to the preset face position.
When the position of the face in the detection image is consistent with the preset face position, the first processor 221 sends a stop regulating command to the second processor 231 based on the preset communication protocol; the second processor 231 processes the stop regulating command based on the same protocol and makes the robot stop moving. This exchange still transmits the command through the communication between the data transmitting circuit 222 and the data receiving circuit 232; it proceeds as for the direction regulating command and is not repeated here.
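The overall loop — capture, locate, compare, coarse-adjust, repeat until the face reaches the preset position — can be sketched as below. This is a minimal sketch, not the patent's implementation: `capture_frame`, `locate_face` and `send` are stand-ins for the camera, the recogniser and the board-to-board link, and the fixed-step world model in the usage example is hypothetical.

```python
def track_until_centred(capture_frame, locate_face, send, max_iters=100):
    """Closed loop: each pass acquires a frame, finds the face position
    relative to the preset origin, and issues a coarse fixed-step
    adjustment until the face sits at the origin."""
    for _ in range(max_iters):
        x, y = locate_face(capture_frame())   # position relative to the preset origin
        if x == 0 and y == 0:
            send(["stop"])                    # consistent with the preset position
            return True
        cmds = []
        if x < 0: cmds.append("right")
        if x > 0: cmds.append("left")
        if y > 0: cmds.append("down")
        if y < 0: cmds.append("up")
        send(cmds)                            # fixed-step move toward the origin
    return False                              # gave up: face never centred
```

Driving this loop against a toy world in which each command shifts the face one step shows the position converging on the origin after a few iterations.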
The face tracking method of this embodiment needs no three-dimensional model to be rebuilt: the face's position in the two-dimensional detection image forms a closed-loop control scheme, and the derivation of direction regulating commands and the execution of control instructions are handled synchronously by the upper computer and the lower computer respectively. The computational load is greatly reduced, hardware cost is saved, real-time performance is good, and continuous face tracking is achievable.
It should be noted that the robot system in this embodiment also produces the multimodal output corresponding to the received multimodal input instruction while tracking the face; that is, the face tracking method of this embodiment does not interfere with the robot system's normal multimodal output. For example, if the user's multimodal input instruction asks the robot system to take a photograph, the new frame of image information collected by the camera can be used simultaneously for the face recognition of the tracking process and for display output to the user. As another example, if the user's multimodal input instruction asks the robot to move to a specified position, the motor drive control circuit 233 that controls the robot's walking combines the direction regulating command with the information from the multimodal input instruction to determine the motor's control quantity.
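One simple way to combine the two sources of motor control mentioned above — the user's commanded movement and the tracking adjustment — is to superimpose a small correction on the commanded step. The patent leaves the combination unspecified; the function, gain and units below are assumptions for illustration only.

```python
def motor_setpoint(user_step: float, track_dir: int, k: float = 0.2) -> float:
    """Combine a user-commanded step with the tracking adjustment.
    track_dir is -1, 0 or +1, derived from the direction regulating
    command; k is an assumed correction gain."""
    return user_step + k * track_dir
```

With `track_dir = 0` (face already at the preset position) the user's command passes through unchanged, so tracking never blocks the requested motion.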
Embodiment two:
Fig. 4 is a flow diagram of the method for an intelligent robot system to track a face in real time according to another embodiment of the invention. With reference to Figs. 2 and 4, the camera 21 is first called according to the received multimodal input instruction to obtain a detection image containing the face, and the pre-processed image data is stored in the storage unit 223 on the Android board 22; this step is identical to the corresponding step of embodiment one and is not repeated. The first processor 221 then reads the image data from the storage unit 223 and identifies the face information in the detection image.
In practice, the detection image collected by the camera 21 may contain more than one face, so this embodiment first judges the number of faces contained in the detection image. If the detection image contains only one face, that face is determined as the face to be tracked. If it contains multiple faces, the first processor 221 can determine the face to be tracked according to a preset selection principle: for example, by comparing the number of pixel blocks occupied by the different faces in the detection image, or by combining other multimodal input information, for instance determining the face to be tracked according to whether there is an action input instruction corresponding to a particular face. This embodiment does not limit the selection method.
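One of the selection principles mentioned — picking the face that occupies the most pixel blocks — reduces to choosing the largest bounding box. A minimal sketch follows; the `(x, y, w, h)` box format and the function name are assumptions, not taken from the patent.

```python
def choose_face(boxes):
    """Among detected face boxes (x, y, w, h), pick the one covering
    the largest area, i.e. the most pixel blocks; None if no face."""
    if not boxes:
        return None
    return max(boxes, key=lambda b: b[2] * b[3])
```

The same shape of rule works for other preset principles (e.g. the face nearest the image centre) by swapping the key function.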
After the face to be tracked is determined, the resolution of the face position and the judgment of the face position are carried out in turn. When the first processor 221 detects that the position of the face in the detection image is inconsistent with the preset face position, a direction regulating command is sent to the second processor 231 through the communication between the Android board 22 and the main control board 23, and the second processor 231 controls the robot's movement based on the direction regulating command, while the process returns to step S440 and enters a loop; the loop is exited when the position of the face in the detection image is consistent with the preset face position, ending the face tracking process. The specific manners of face recognition, face position resolution, command communication and robot motion control based on the direction regulating command are the same as in embodiment one and are not repeated.
Further, in this embodiment, after the face in the detection image is identified for the first time and the face to be tracked is determined, the feature information of the face to be tracked is also stored in the storage unit 223. In the repeated process of calling the camera to collect and identify new detection images after the first time, it is first judged according to the stored feature information whether the new detection image contains the face to be tracked, i.e. the judgment of step S460 in Fig. 4 is made first. If it does, tracking of that face continues; if it does not, i.e. when no face matching the stored feature information of the face to be tracked exists in the identified detection image, the first processor 221 sends a pause regulating command to the second processor 231 based on the preset communication protocol, and the second processor 231 makes the robot pause its movement. At the same time, the second processor 231 can output interactive information according to a preset multimodal output instruction, guiding the user to choose how to proceed.
For example, when the feature information of the face identified in the detection image differs from the stored feature information of the face to be tracked, the robot can play a voice prompt such as "beyond tracking range" or flash an indicator light. The user can then interact based on the information the robot sends and decide whether to continue face tracking: for instance, the user can end the face tracking process with the voice input instruction "no need to continue tracking", upon which the robot directly exits execution of the face tracking program after recognising the instruction; or the user can actively return to the camera's shooting range, upon which the robot resumes execution of the face tracking program.
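The pause decision above amounts to comparing the stored features of the tracked face with the features of each face in the new frame. The patent does not say which features or similarity measure are used; the cosine-similarity comparison and the 0.8 threshold below are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def still_tracked(stored, candidates, threshold=0.8):
    """True if any face in the new frame matches the stored tracked face;
    False triggers the pause regulating command and the user prompt."""
    return any(cosine_similarity(stored, c) >= threshold for c in candidates)
```

When `still_tracked` returns False, the first processor would send the pause regulating command and the prompt ("beyond tracking range") would be issued, as described above.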
With the face tracking method of this embodiment, after a detection image containing multiple faces is collected, the face to be tracked can be determined by combining the multimodal input instruction with preset rules, and tracking can be interrupted and then handled through interaction with the user, improving the interactive experience between the user and the robot.
The circuit structure of the intelligent robot system of this embodiment capable of real-time face tracking is simple: no high-performance DSP running a complex algorithm is needed, and the cost is low.
Although the embodiments disclosed herein are as above, the content described comprises only embodiments adopted to facilitate understanding of the present invention, and does not limit it. Any person skilled in the technical field of the invention may, without departing from the spirit and scope disclosed herein, make modifications and changes in the form and details of implementation; but the patent protection scope of the invention must still be determined by the scope defined in the appended claims.

Claims (8)

1. A method for an intelligent robot system to track a face in real time, comprising:
receiving a multimodal input instruction, and calling a camera according to the multimodal input instruction to obtain a detection image containing a face;
using a processor on an Android board to obtain position information of the face in the detection image, and making a judgment based on the position information and a preset face position:
when the position of the face in the detection image is inconsistent with the preset face position, using a processor on a main control board to control the robot to move while simultaneously producing the multimodal output corresponding to the multimodal input instruction; and re-acquiring a detection image containing the face, obtaining the position information of the face in the detection image, and making the judgment based on the position information and the preset face position again, until the position of the face in the detection image is consistent with the preset face position.
2. method according to claim 1, it is characterised in that when face is when the position detected in image is inconsistent with the position of the face preset, also include:
Processor on Android plate based on default communication protocol to the processor sending direction regulating command on master control borad;
The regulating command of described direction is processed by the processor on master control borad based on default communication protocol, obtains the control instruction for controlling robot motion.
3. The method according to claim 2, wherein when the position of the face in the detection image is consistent with the preset face position, the method further comprises:
the processor on the Android board sending a stop adjustment command to the processor on the master control board based on the preset communication protocol;
the processor on the master control board processing the stop adjustment command based on the preset communication protocol to make the robot stop moving.
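Claims 2 and 3 specify only that direction, stop (and, per claim 8, pause) commands are exchanged over a preset communication protocol between the two boards; the byte layout below (header, command code, pan/tilt payload, checksum) is purely an illustrative assumption, not the protocol actually used.

```python
# Hypothetical framing for the "preset communication protocol" of claims 2-3.
import struct

HEADER = 0xAA
CMD_DIRECTION = 0x01   # payload: signed pan/tilt adjustment
CMD_STOP = 0x02        # payload unused; robot stops moving
CMD_PAUSE = 0x03       # claim 8: pause when the tracked face changes

def encode(cmd, pan=0, tilt=0):
    """Pack a command as header | cmd | pan | tilt | checksum."""
    body = struct.pack('<BBhh', HEADER, cmd, pan, tilt)
    checksum = sum(body) & 0xFF
    return body + bytes([checksum])

def decode(packet):
    """Validate a packet and unpack it into (cmd, pan, tilt)."""
    header, cmd, pan, tilt = struct.unpack('<BBhh', packet[:-1])
    if header != HEADER or (sum(packet[:-1]) & 0xFF) != packet[-1]:
        raise ValueError('corrupt packet')
    return cmd, pan, tilt
```

A fixed header byte plus a one-byte checksum is a common choice for simple board-to-board serial links, which is why it is used for the sketch; the real protocol could equally be message-based or use a different integrity check.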
4. The method according to claim 2 or 3, wherein the step of using the processor on the Android board to obtain the position information of the face in the detection image comprises:
identifying the face in the detection image and determining a face to be tracked;
parsing to obtain the position information of the face to be tracked.
5. The method according to claim 4, wherein the step of parsing to obtain the position information of the face to be tracked comprises:
establishing a rectangular coordinate system with the preset face position as the origin;
determining the position of the face to be tracked in the rectangular coordinate system.
6. The method according to claim 5, wherein in the step of making the judgment based on the position information and the preset face position, when the face to be tracked is located at the origin of the rectangular coordinate system, the position of the face in the detection image is judged to be consistent with the preset face position; otherwise, the position of the face in the detection image is judged to be inconsistent with the preset face position.
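As a sketch of claims 5 and 6: the preset face position becomes the origin of a rectangular coordinate system, and consistency is judged by whether the tracked face sits at (0, 0). The frame size, preset position, and the small dead zone treated as "the origin" are all assumed values for illustration.

```python
# Sketch of the claims 5-6 coordinate judgment (assumed frame and thresholds).
FRAME_W, FRAME_H = 320, 240
PRESET_X, PRESET_Y = FRAME_W // 2, FRAME_H // 2   # preset face position
DEAD_ZONE = 12   # half-width of the region still counted as "the origin"

def to_preset_coords(face_box):
    """Map a face bounding box (x, y, w, h) into the preset-origin system."""
    x, y, w, h = face_box
    cx, cy = x + w // 2, y + h // 2         # face centre in pixel coords
    return cx - PRESET_X, cy - PRESET_Y     # translate so preset = (0, 0)

def at_origin(coords, dead_zone=DEAD_ZONE):
    """Claim-6 judgment: consistent iff the face lies at the origin."""
    return abs(coords[0]) <= dead_zone and abs(coords[1]) <= dead_zone
```

A dead zone around the origin is assumed because a literal single-pixel origin test would make the robot oscillate around the target; the patent itself does not state how exactly "located at the origin" is evaluated.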
7. The method according to claim 4, wherein when the face in the detection image is identified and the face to be tracked is determined for the first time, the method further comprises storing characteristic information of the face to be tracked.
8. The method according to claim 7, wherein when the characteristic information of the face identified in the detection image differs from the stored characteristic information of the face to be tracked, the processor on the Android board sends a pause adjustment command to the processor on the master control board based on the preset communication protocol, the processor on the master control board makes the robot pause its movement, and preset multi-modal interactive information is output at the same time.
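Claims 7 and 8 store the tracked face's characteristic information on first determination and pause tracking when the currently identified face differs. The patent does not specify the face descriptor or the comparison rule; the sketch below assumes a plain feature vector compared by Euclidean distance against an assumed threshold.

```python
# Sketch of the claims 7-8 identity check; the feature representation and
# SAME_FACE_THRESHOLD are illustrative assumptions, not from the patent.
import math

SAME_FACE_THRESHOLD = 0.6   # assumed distance threshold for "same face"

class FaceIdentityGuard:
    def __init__(self):
        self.stored = None           # features saved on first determination

    def check(self, features):
        """Return 'track' for the stored face, 'pause' for a different one."""
        if self.stored is None:
            self.stored = list(features)   # claim 7: store on first sight
            return 'track'
        dist = math.dist(self.stored, features)
        return 'track' if dist <= SAME_FACE_THRESHOLD else 'pause'
```

On a 'pause' result, the Android-board side would send the pause adjustment command of claim 8 (e.g. `CMD_PAUSE` in whatever preset protocol the boards share) and trigger the preset multi-modal interactive output.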
CN201610159030.6A 2016-03-18 2016-03-18 Method used for intelligent robot system to achieve real-time face tracking Pending CN105759650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610159030.6A CN105759650A (en) 2016-03-18 2016-03-18 Method used for intelligent robot system to achieve real-time face tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610159030.6A CN105759650A (en) 2016-03-18 2016-03-18 Method used for intelligent robot system to achieve real-time face tracking

Publications (1)

Publication Number Publication Date
CN105759650A true CN105759650A (en) 2016-07-13

Family

ID=56345326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610159030.6A Pending CN105759650A (en) 2016-03-18 2016-03-18 Method used for intelligent robot system to achieve real-time face tracking

Country Status (1)

Country Link
CN (1) CN105759650A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1375084A1 (en) * 2001-03-09 2004-01-02 Japan Science and Technology Corporation Robot audiovisual system
JP2008087140A (en) * 2006-10-05 2008-04-17 Toyota Motor Corp Speech recognition robot and control method of speech recognition robot
CN105093986A (en) * 2015-07-23 2015-11-25 百度在线网络技术(北京)有限公司 Humanoid robot control method based on artificial intelligence, system and the humanoid robot
CN105116920A (en) * 2015-07-07 2015-12-02 百度在线网络技术(北京)有限公司 Intelligent robot tracking method and apparatus based on artificial intelligence and intelligent robot
CN105116994A (en) * 2015-07-07 2015-12-02 百度在线网络技术(北京)有限公司 Intelligent robot tracking method and tracking device based on artificial intelligence
CN105182983A (en) * 2015-10-22 2015-12-23 深圳创想未来机器人有限公司 Face real-time tracking method and face real-time tracking system based on mobile robot


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107639624A (en) * 2016-07-20 2018-01-30 腾讯科技(深圳)有限公司 A kind of adjustable mechanism and intelligent robot
CN107273839A (en) * 2017-06-08 2017-10-20 浙江工贸职业技术学院 A kind of face tracking swinging mounting system
CN108304799A (en) * 2018-01-30 2018-07-20 广州市君望机器人自动化有限公司 A kind of face tracking methods
CN108724178A (en) * 2018-04-13 2018-11-02 顺丰科技有限公司 The autonomous follower method of particular person and device, robot, equipment and storage medium
CN108724178B (en) * 2018-04-13 2022-03-29 顺丰科技有限公司 Method and device for autonomous following of specific person, robot, device and storage medium
CN108399813A (en) * 2018-05-04 2018-08-14 广东小天才科技有限公司 Study coach method and system, robot and handwriting equipment based on robot
CN108647633A (en) * 2018-05-08 2018-10-12 腾讯科技(深圳)有限公司 Recognition and tracking method, recognition and tracking device and robot
CN108647633B (en) * 2018-05-08 2023-12-22 腾讯科技(深圳)有限公司 Identification tracking method, identification tracking device and robot
CN110321001A (en) * 2019-05-09 2019-10-11 江苏紫米软件技术有限公司 A kind of wireless charging bracket and face tracking methods
CN111265235A (en) * 2020-01-20 2020-06-12 东软医疗系统股份有限公司 Bed entering control method and system of medical equipment and medical equipment

Similar Documents

Publication Publication Date Title
CN105759650A (en) Method used for intelligent robot system to achieve real-time face tracking
CN205644294U (en) Intelligent robot system that can trail in real time people's face
CN107181818B (en) Robot remote control and management system and method based on cloud platform
CN104410883B (en) The mobile wearable contactless interactive system of one kind and method
CN102221887B (en) Interactive projection system and method
CN100487636C (en) Game control system and method based on stereo vision
CN108055501A (en) A kind of target detection and the video monitoring system and method for tracking
CN108983636B (en) Man-machine intelligent symbiotic platform system
CN102799191B (en) Cloud platform control method and system based on action recognition technology
CN106598226A (en) UAV (Unmanned Aerial Vehicle) man-machine interaction method based on binocular vision and deep learning
CN105867630A (en) Robot gesture recognition method and device and robot system
CN104965426A (en) Intelligent robot control system, method and device based on artificial intelligence
CN110362090A (en) A kind of crusing robot control system
US11850747B2 (en) Action imitation method and robot and computer readable medium using the same
CN110164060B (en) Gesture control method for doll machine, storage medium and doll machine
KR101850534B1 (en) System and method for picture taking using IR camera and maker and application therefor
CN110728739A (en) Virtual human control and interaction method based on video stream
KR20170102991A (en) Control systems and control methods
CN110223413A (en) Intelligent polling method, device, computer storage medium and electronic equipment
CN106175780A (en) Facial muscle motion-captured analysis system and the method for analysis thereof
CN109839827B (en) Gesture recognition intelligent household control system based on full-space position information
CN107813306A (en) Robot and its method of controlling operation and device
CN206331472U (en) A kind of interactive robot based on Face datection
CN115100563A (en) Production process interaction and monitoring intelligent scene based on video analysis
CN102984563A (en) Intelligent remote controlled television system and remote control method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160713