CN105578058A - Shooting control method and device for intelligent robot and robot


Info

Publication number
CN105578058A
CN105578058A CN201610077482.XA
Authority
CN
China
Prior art keywords
trigger condition
shooting
vision
detection
coverage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610077482.XA
Other languages
Chinese (zh)
Inventor
郭家
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201610077482.XA priority Critical patent/CN105578058A/en
Publication of CN105578058A publication Critical patent/CN105578058A/en
Pending legal-status Critical Current


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Abstract

The invention provides a shooting control method and device for an intelligent robot, and a robot. The shooting control method comprises the following steps: detecting and locating a target; tracking the target so that it stays within the shooting range; determining that the target meets a shooting trigger condition; and shooting the target. The step of determining that the target meets the shooting trigger condition comprises: determining that the state of the target satisfies a visual trigger condition; determining that the state of the target satisfies a sound trigger condition; and/or determining that the target meets the shooting trigger condition when a shooting control instruction is received. By implementing the shooting control method provided by the invention, the robot's shooting function can be activated automatically under preset conditions, so that the robot automatically and instantly captures the current state of the target.

Description

Shooting control method and device for an intelligent robot, and robot
Technical field
The present invention relates to the field of intelligent robotics, and in particular to a shooting control method and device for an intelligent robot, and to a robot.
Background art
In modern society, the daily care of the elderly and of children has become a prominent problem. Adult children and parents often need to travel for long periods and lack the time and energy to care for elderly relatives and children. An intelligent home companion robot can communicate with the elderly and children, chatting and playing with them and acting as a family caregiver, bringing great happiness to them.
A companion robot can meet the daily and emotional needs of family members. In the course of living together with a family, many memorable moments occur. To record these memorable scenes, a camera usually has to be set up quickly to snap the shot, but the moment is often missed while the camera is being set up and started.
Therefore, a control method and device for automatic instantaneous shooting by an intelligent robot are urgently needed.
Summary of the invention
The object of the present invention is to overcome the technical deficiency that prior-art companion robots cannot take snapshots automatically.
The present invention provides a shooting control method for an intelligent robot, the method comprising:
detecting and locating a target;
tracking the target so that it stays within the shooting range;
determining that the target meets a shooting trigger condition; and
shooting the target.
According to one embodiment of the present invention, the step of determining that the target meets the shooting trigger condition further comprises:
determining that the state of the target satisfies a visual trigger condition;
determining that the state of the target satisfies a sound trigger condition; and/or
determining that the target meets the shooting trigger condition when a shooting control instruction is received.
According to one embodiment of the present invention, the visual trigger condition comprises one or more of the following visual states:
facial emotion, facial features, facial attributes, a facial image, an action, a gesture, or a specific object;
and the sound trigger condition comprises one or more of the following sound states:
vocal emotion, volume, speech, or semantics.
According to one embodiment of the present invention, the step of detecting and locating the target comprises:
determining, by visual detection, that the target is within the shooting range;
if the target is determined to be outside the shooting range, detecting the sound emitted by the target and locating the target by sound detection; and
if the target is determined to be outside the shooting range and no sound from the target is detected, scanning for the target by visual detection.
According to one embodiment of the present invention, the step of tracking the target comprises:
detecting the target's coordinate data in real time when the target's position changes; and
adjusting the position and orientation of the image acquisition device according to the coordinate data so that the target stays within the shooting range.
According to another aspect of the present invention, a shooting control device for an intelligent robot is also provided, comprising:
a locating module, configured to detect and locate a target;
a tracking module, configured to track the target so that it stays within the shooting range;
a detection module, configured to determine that the target meets a shooting trigger condition; and
a shooting module, configured to shoot the target.
According to one embodiment of the present invention, the locating module comprises:
a visual detection unit, configured to determine by visual detection that the target is within the shooting range;
a sound detection unit, configured to, if the target is determined to be outside the shooting range, detect the sound emitted by the target and locate the target by sound detection; and
a scanning unit, configured to, if the target is determined to be outside the shooting range and no sound from the target is detected, scan for the target by visual detection.
According to one embodiment of the present invention, the tracking module comprises:
a coordinate detection unit, configured to detect the target's coordinate data in real time when the target's position changes; and
an adjustment unit, configured to adjust the position and orientation of the image acquisition device according to the coordinate data so that the target stays within the shooting range.
According to one embodiment of the present invention, the detection module comprises:
a visual trigger detection unit, configured to determine that the state of the target satisfies a visual trigger condition;
a sound trigger detection unit, configured to determine that the state of the target satisfies a sound trigger condition; and/or
an instruction detection unit, configured to determine that the target meets the shooting trigger condition when a shooting control instruction is received.
According to another aspect of the present invention, a robot is also provided, comprising:
an image acquisition device for acquiring image information within the shooting range;
a sound acquisition device for acquiring sound information;
a drive device for driving the image acquisition device to move or change its orientation; and
the shooting control device described above, which, when the target meets a shooting trigger condition, controls the drive device and the image acquisition device to shoot the target.
By implementing the present invention, the robot's shooting function can be activated automatically under preset conditions, so that the robot automatically and instantaneously captures the current state of the target.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the present invention. The objects and other advantages of the present invention can be realized and obtained by the structures particularly pointed out in the description, the claims, and the accompanying drawings.
Brief description of the drawings
The accompanying drawings provide a further understanding of the present invention and form a part of the description; together with the embodiments of the present invention they serve to explain the invention, and they do not limit it. In the drawings:
Fig. 1 is an overall flowchart of a method by which a robot automatically activates its shooting function to take a snapshot according to an embodiment of the present invention;
Fig. 2 is a flowchart of a method by which a robot detects and locates a target according to an embodiment of the present invention; and
Fig. 3 is a flowchart of a method by which a robot tracks a moving target according to an embodiment of the present invention.
Detailed description
To make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
An embodiment of the present invention provides an automatic shooting control method implemented by an intelligent robot. Fig. 1 shows the flow of this automatic shooting control method.
The overall method starts at step S101. In step S101, the robot's visual detection function is first enabled, and the target is detected and located. In the detection process, visual detection is preferably performed first. Typically, visual detection by an intelligent robot means that, for any given target image, a certain strategy is adopted to search within the robot's visual range and determine whether a target object corresponding to that target image is present. If a matching target object exists, its position and size are returned. The most typical example of visual detection is face detection.
After the robot's visual detection function is enabled, taking facial image detection as an example, the robot judges whether a pre-specified target facial image is present in the shooting picture. If a matching face exists, the position and size of that face are returned. For the robot to capture each wonderful moment of the target, such as a face, the robot's camera must be aimed at the face at all times.
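The position-and-size output described above can drive the aiming logic directly. The sketch below (an illustration, not the patent's implementation) assumes a face detector that returns a bounding box, and shows how to check whether the face lies inside the shooting picture and how far the camera would need to turn to center it:

```python
# Minimal sketch: given the position and size a face detector returns,
# decide whether the face is inside the shooting picture and compute the
# pixel offset the camera controller should drive toward zero.
# The detector itself (cascade, CNN, etc.) is assumed, not shown.

from dataclasses import dataclass

@dataclass
class Box:
    x: int   # top-left corner, pixels
    y: int
    w: int   # width, pixels
    h: int   # height, pixels

def face_in_frame(face: Box, frame_w: int, frame_h: int) -> bool:
    """True if the whole face box lies inside the shooting picture."""
    return (face.x >= 0 and face.y >= 0 and
            face.x + face.w <= frame_w and face.y + face.h <= frame_h)

def centering_offset(face: Box, frame_w: int, frame_h: int):
    """Offset (dx, dy) from the face center to the frame center.
    A pan/tilt controller would turn the camera to drive this toward (0, 0)."""
    face_cx = face.x + face.w // 2
    face_cy = face.y + face.h // 2
    return frame_w // 2 - face_cx, frame_h // 2 - face_cy

face = Box(x=500, y=200, w=100, h=100)
print(face_in_frame(face, 640, 480))        # box fits inside a 640x480 frame
print(centering_offset(face, 640, 480))     # how far off-center the face is
```

The `Box` fields and the frame size are example assumptions; a real detector would supply these values per frame.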
However, a person is usually not motionless. At that point the robot's visual tracking function must be enabled, as shown in step S102. For brevity, the detailed implementation of this step is omitted from the overview of the method flow; how the robot tracks the target in real time is described in detail later.
In step S102, after the visual tracking function is enabled, the robot tracks the target continuously to ensure that it always stays within the shooting range. To achieve this, the camera's shooting direction must always be aligned with the moving or stationary target. This generally involves adjusting the robot's actions in real time according to the returned position and size information, rotating the parts to which the camera is attached so that the camera is aimed at the target and the target remains within the shooting range.
If, while performing the visual detection and localization of step S101, the intelligent robot does not detect the target within the picture, it can enable its auditory detection function to perform sound-source localization and voiceprint recognition. Through sound localization, the parts to which the camera is attached are readjusted so that the target appears in the shooting picture. The robot then automatically switches back to visual detection and tracking.
When the visual detection and tracking functions ensure that the target is within the shooting range, the robot captures the target's state in real time or periodically and judges whether that state meets a preset shooting trigger condition, as shown in step S103.
According to the present invention, the preset shooting trigger conditions include visual trigger conditions, sound trigger conditions, and instruction trigger conditions. Determining whether the target meets a shooting trigger condition generally includes judging whether the target's state satisfies a visual trigger condition, whether it satisfies a sound trigger condition, or whether a shooting control instruction has been received.
For example, when the target's face shows the particular expression of sticking out the tongue, the robot can shoot automatically; that facial expression can be regarded as a visual trigger condition. Alternatively, when a sound exceeding a certain decibel level is detected and semantic analysis of the utterance indicates that the target is, for example, in an "excited" state, the robot's shooting function can also be triggered so that the target is shot; see step S104.
Typically, the visual trigger conditions can include recognizing one or more of the following visual states:
(1) Facial emotion: a certain strategy is adopted to judge the emotion shown in a facial image, the result being which emotion the target is in, for example a special emotion such as hearty laughter that differs from the usual state.
(2) Facial features: a certain strategy is adopted to extract facial features from a facial image, including but not limited to the position, angle, and size of each facial organ and whether glasses, a hat, a mask, etc. are worn, for example sticking out the tongue or raising the eyebrows.
(3) Facial attributes, including but not limited to recognition of age, gender, attractiveness, etc.
(4) Facial image: a specific person appears in the shooting picture.
(5) Action: a certain strategy is adopted to return the angle and direction of each part of the target's body, for example the target person jumping for joy; the information describing the action consists of three-dimensional spatial coordinates.
(6) Gesture.
(7) A specific object.
The sound trigger conditions can include recognizing one or more of the following sound states:
(1) Volume: for example, when a sound exceeding a certain decibel level is detected, the robot's shooting function is triggered.
(2) Vocal emotion: if a special emotion is recognized in a person's voice, shooting is triggered.
(3) Speech: a special intonation can also serve as a condition that triggers the shooting function.
(4) Semantics: for example, shooting is triggered when it is detected that a person has issued a certain instruction or said a special sentence.
Preferably, when a sound trigger condition is used to activate the robot's shooting function, it can be combined with visual trigger conditions or other sound trigger conditions so that the desired moment is captured accurately.
Of course, there is also a third class of trigger condition: the sensor trigger condition. For example, while the intelligent robot is taking snapshots automatically, it can also detect a shooting instruction from another sensor, such as a remote control, as a trigger, enabling remotely controlled shooting. The priority of the instruction trigger condition can be set highest; in that case, regardless of whether other trigger conditions hold, the intelligent robot responds to the sensor's trigger instruction first and takes the picture.
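The three classes of trigger condition described above, with the instruction trigger taking precedence, can be sketched as follows. The state fields, expression labels, and decibel threshold are assumptions chosen for the example, not values from the patent:

```python
# Illustrative sketch: evaluate the shooting trigger conditions.
# Priority order follows the text: a received shooting instruction wins
# unconditionally; otherwise visual or sound conditions can trigger.

def should_shoot(state: dict) -> bool:
    # 1. Instruction trigger: highest priority.
    if state.get("command_received"):
        return True
    # 2. Visual trigger: e.g. a special expression or gesture was recognized.
    visual = (state.get("expression") in {"tongue_out", "hearty_laugh"}
              or state.get("gesture") == "victory")
    # 3. Sound trigger: loud volume combined with an "excited" semantic cue,
    #    mirroring the combined-condition example in the text.
    sound = state.get("volume_db", 0) > 70 and state.get("semantic") == "excited"
    return visual or sound

print(should_shoot({"command_received": True}))                # instruction
print(should_shoot({"expression": "tongue_out"}))              # visual
print(should_shoot({"volume_db": 80, "semantic": "excited"}))  # combined sound
print(should_shoot({"volume_db": 80}))                         # volume alone: no
```

In a real system each field would be filled by the corresponding recognizer (expression, gesture, volume, semantics) on every state capture.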
Under any trigger condition, after the robot receives a shooting instruction, it automatically adjusts camera parameters such as focal length, aperture, shutter speed, and sensitivity so that the shooting conditions are optimal and a good result is obtained.
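As a purely illustrative sketch of this automatic parameter adjustment, one might map a measured scene brightness to shutter speed and sensitivity, keeping the shutter fast enough to freeze a moving subject. The brightness scale and the table values below are assumptions for the example, not patent content:

```python
# Toy exposure chooser: pick shutter speed and ISO from scene brightness.
# Real auto-exposure also weighs aperture, metering mode, and motion.

def choose_exposure(brightness: float) -> dict:
    """brightness in [0, 1]: 0 = dark room, 1 = bright daylight."""
    if brightness > 0.7:          # plenty of light: fast shutter, low ISO
        return {"shutter_s": 1 / 1000, "iso": 100}
    if brightness > 0.3:          # indoor light: a compromise
        return {"shutter_s": 1 / 250, "iso": 400}
    # dim scene: keep the shutter fast enough for a snapshot, raise ISO
    return {"shutter_s": 1 / 60, "iso": 1600}

print(choose_exposure(0.9))   # bright scene
print(choose_exposure(0.1))   # dim scene
```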
After the picture is taken, the robot can store the photo automatically. According to actual needs, the photos can be stored in the robot's local memory, stored in the cloud via the robot's Bluetooth, wireless network, or wired network, or synchronized to another terminal device such as a mobile phone.
If the intelligent robot, or the terminal device to which the photos are synchronized, is equipped with a display screen, the automatically captured photos can also be displayed on the screen so that the user can modify, delete, forward, or otherwise operate on them.
Fig. 2 shows the flow of visual detection and localization according to an embodiment of the present invention. This flow details the automatic switching between the intelligent robot's auditory detection function and its visual detection function.
The step of detecting and locating the target roughly comprises: determining, by visual detection, that the target is within the shooting range; if the target is determined to be outside the shooting range, detecting the sound emitted by the target and locating the target by sound detection; and if the target is determined to be outside the shooting range and no sound from the target is detected, scanning for the target by visual detection.
Specifically, according to the present invention, visual detection is started first in step S1011. Then, in step S1012, the robot judges whether a target is present in the shooting picture. If a target is present, the flow automatically turns to the tracking step shown in Fig. 1.
If no target is present, the robot starts its auditory detection function, as in step S1013. If auditory detection (step S1014) picks up the target's sound, the target's position is located from the source of the sound (step S1016); the robot then automatically rotates its body and aims the camera at the target's position (step S1017). After alignment, the robot re-enables the visual detection function and performs the subsequent visual tracking (S102).
If the robot does not detect a sound source in step S1013, it re-enables the visual detection function and scans for the target, step S1015. In this step, the robot rotates the necessary body parts so that it can scan for the target over 360 degrees in the horizontal plane and 360 degrees in the vertical plane. If the target is found during scanning, the visual tracking function is started and visual tracking S102 is performed. If the target cannot be found during scanning, step S1013 is repeated: the robot returns to the auditory detection flow, enabling auditory detection for sound localization or voiceprint recognition.
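The vision-then-sound-then-scan fallback of Fig. 2 can be sketched as a small loop. The three detector callables below are stand-ins for the robot's real sensing functions (an assumed interface, not the patent's API):

```python
# Compact sketch of the Fig. 2 flow: try visual detection first; on failure
# fall back to sound localization; if there is no sound either, scan by
# rotating the body; then loop back and try again.

def locate_target(visual_detect, hear_sound, scan, max_rounds=3):
    """Return how the target was found ('vision', 'sound', 'scan') or None.
    visual_detect() -> bool   target seen in the current picture   (S1011/S1012)
    hear_sound()    -> bool   target's sound heard and localized   (S1013-S1017)
    scan()          -> bool   target found while rotating/scanning (S1015)
    """
    for _ in range(max_rounds):
        if visual_detect():
            return "vision"    # proceed to visual tracking (S102)
        if hear_sound():
            return "sound"     # camera turned toward the source; vision resumes
        if scan():
            return "scan"
        # nothing found this round: loop back to auditory detection (S1013)
    return None

# Stub sensors: nothing visible, no sound, but scanning finds the target.
print(locate_target(lambda: False, lambda: False, lambda: True))   # scan
```

On real hardware each callable would also have the side effect of turning the body or camera, which this sketch omits.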
For the robot to capture each meaningful moment of the target, the camera must be aimed at the target in real time. Fig. 3 shows how the intelligent robot tracks the target according to the present invention.
The step of tracking the target roughly comprises: detecting the target's coordinate data in real time when the target's position changes; and adjusting the position and orientation of the image acquisition device according to the coordinate data so that the target stays within the shooting range.
Specifically, as shown in Fig. 3, while the camera keeps tracking the target (S1021), if the robot detects that the target is moving, it adjusts its body parts in real time according to the target coordinates continuously returned during visual tracking, thereby adjusting the camera's position so that the target always remains in the shooting picture at a suitable shooting angle (S1022).
If the target moves too fast and leaves the shooting picture, the robot can predict, from the target's motion trajectory, the position where the target is likely to reappear, as in step S1023. The robot then rotates the necessary parts and simultaneously enables the visual detection function so that it can detect the target again (S1024). If the target still cannot be detected, auditory detection is used to assist.
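The reappearance prediction of step S1023 can be illustrated with a constant-velocity extrapolation. The patent does not specify a motion model, so this linear model is an assumption made for the sketch:

```python
# Sketch of step S1023: predict where a fast-moving target that left the
# picture will reappear, by extrapolating its last observed velocity.

def predict_position(track, steps_ahead=1):
    """track: list of (x, y) target coordinates from recent frames.
    Returns the extrapolated position, the last position if only one
    observation exists, or None for an empty track."""
    if len(track) < 2:
        return track[-1] if track else None
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0            # displacement per frame
    return (x1 + vx * steps_ahead, y1 + vy * steps_ahead)

# A target moving right by 20 px per frame leaves the picture; aim the
# camera where it should be two frames later.
print(predict_position([(600, 240), (620, 240)], steps_ahead=2))   # (660, 240)
```

A more robust tracker would smooth over several frames (e.g. with a Kalman filter), but the idea of aiming ahead of the last observation is the same.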
In the present invention, the detection function and the tracking function are performed alternately: as soon as tracking is lost, scanning detection is performed immediately. During scanning detection, visual detection is performed preferentially, with auditory detection assisting when visual detection fails. Of course, the present invention is not limited to this: depending on the state to be captured, the user can make auditory detection primary and visual detection auxiliary, or give the two equal status.
According to another aspect of the present invention, a shooting control device for an intelligent robot is also provided, comprising:
a locating module, configured to detect and locate a target;
a tracking module, configured to track the target so that it stays within the shooting range;
a detection module, configured to determine that the target meets a shooting trigger condition; and
a shooting module, configured to shoot the target.
According to one embodiment of the present invention, the locating module comprises:
a visual detection unit, configured to determine by visual detection that the target is within the shooting range;
a sound detection unit, configured to, if the target is determined to be outside the shooting range, detect the sound emitted by the target and locate the target by sound detection; and
a scanning unit, configured to, if the target is determined to be outside the shooting range and no sound from the target is detected, scan for the target by visual detection.
According to one embodiment of the present invention, the tracking module comprises:
a coordinate detection unit, configured to detect the target's coordinate data in real time when the target's position changes; and
an adjustment unit, configured to adjust the position and orientation of the image acquisition device according to the coordinate data so that the target stays within the shooting range.
According to one embodiment of the present invention, the detection module comprises:
a visual trigger detection unit, configured to determine that the state of the target satisfies a visual trigger condition;
a sound trigger detection unit, configured to determine that the state of the target satisfies a sound trigger condition; and/or
an instruction detection unit, configured to determine that the target meets the shooting trigger condition when a shooting control instruction is received.
According to another aspect of the present invention, a robot is also provided, comprising:
an image acquisition device for acquiring image information within the shooting range;
a sound acquisition device for acquiring sound information;
a drive device for driving the image acquisition device to move or change its orientation; and
the shooting control device described above, which, when the target meets a shooting trigger condition, controls the drive device and the image acquisition device to shoot the target.
Although embodiments of the present invention are disclosed above, the described content is merely an embodiment adopted to facilitate understanding of the present invention and is not intended to limit it. Any person skilled in the technical field of the present invention may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the present invention; however, the patent protection scope of the present invention shall still be subject to the scope defined by the appended claims.

Claims (10)

1. A shooting control method for an intelligent robot, characterized in that the method comprises:
detecting and locating a target;
tracking the target so that it stays within the shooting range;
determining that the target meets a shooting trigger condition; and
shooting the target.
2. The shooting control method according to claim 1, characterized in that the step of determining that the target meets the shooting trigger condition comprises:
determining that the state of the target satisfies a visual trigger condition;
determining that the state of the target satisfies a sound trigger condition; and/or
determining that the target meets the shooting trigger condition when a shooting control instruction is received.
3. The shooting control method according to claim 2, characterized in that
the visual trigger condition comprises one or more of the following visual states:
facial emotion, facial features, facial attributes, a facial image, an action, a gesture, or a specific object;
and the sound trigger condition comprises one or more of the following sound states:
vocal emotion, volume, speech, or semantics.
4. The shooting control method according to claim 3, characterized in that the step of detecting and locating the target comprises:
determining, by visual detection, that the target is within the shooting range;
if the target is determined to be outside the shooting range, detecting the sound emitted by the target and locating the target by sound detection; and
if the target is determined to be outside the shooting range and no sound from the target is detected, scanning for the target by visual detection.
5. The shooting control method according to any one of claims 1-4, characterized in that the step of tracking the target comprises:
detecting the target's coordinate data in real time when the target's position changes; and
adjusting the position and orientation of the image acquisition device according to the coordinate data so that the target stays within the shooting range.
6. A shooting control device for an intelligent robot, characterized by comprising:
a locating module, configured to detect and locate a target;
a tracking module, configured to track the target so that it stays within the shooting range;
a detection module, configured to determine that the target meets a shooting trigger condition; and
a shooting module, configured to shoot the target.
7. The shooting control device according to claim 6, characterized in that the locating module comprises:
a visual detection unit, configured to determine by visual detection that the target is within the shooting range;
a sound detection unit, configured to, if the target is determined to be outside the shooting range, detect the sound emitted by the target and locate the target by sound detection; and
a scanning unit, configured to, if the target is determined to be outside the shooting range and no sound from the target is detected, scan for the target by visual detection.
8. The shooting control device according to claim 7, characterized in that the tracking module comprises:
a coordinate detection unit, configured to detect the target's coordinate data in real time when the target's position changes; and
an adjustment unit, configured to adjust the position and orientation of the image acquisition device according to the coordinate data so that the target stays within the shooting range.
9. The shooting control device according to any one of claims 6-8, characterized in that the detection module comprises:
a visual trigger detection unit, configured to determine that the state of the target satisfies a visual trigger condition;
a sound trigger detection unit, configured to determine that the state of the target satisfies a sound trigger condition; and/or
an instruction detection unit, configured to determine that the target meets the shooting trigger condition when a shooting control instruction is received.
10. A robot, characterized by comprising:
an image acquisition device for acquiring image information within the shooting range;
a sound acquisition device for acquiring sound information;
a drive device for driving the image acquisition device to move or change its orientation; and
the shooting control device according to any one of claims 6-9, which, when the target meets a shooting trigger condition, controls the drive device and the image acquisition device to shoot the target.
CN201610077482.XA 2016-02-03 2016-02-03 Shooting control method and device for intelligent robot and robot Pending CN105578058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610077482.XA CN105578058A (en) 2016-02-03 2016-02-03 Shooting control method and device for intelligent robot and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610077482.XA CN105578058A (en) 2016-02-03 2016-02-03 Shooting control method and device for intelligent robot and robot

Publications (1)

Publication Number Publication Date
CN105578058A (en) 2016-05-11

Family

ID=55887666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610077482.XA Pending CN105578058A (en) 2016-02-03 2016-02-03 Shooting control method and device for intelligent robot and robot

Country Status (1)

Country Link
CN (1) CN105578058A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202145279U (en) * 2011-07-08 2012-02-15 上海合时智能科技有限公司 Household mobile safety protection robot based on object identification technology
US20120307091A1 (en) * 2011-06-03 2012-12-06 Panasonic Corporation Imaging apparatus and imaging system
CN202753155U (en) * 2012-07-18 2013-02-27 深圳市中科睿成智能科技有限公司 Robot device used for Internet
CN103624789A (en) * 2013-12-03 2014-03-12 深圳如果技术有限公司 Security robot
CN103984315A (en) * 2014-05-15 2014-08-13 成都百威讯科技有限责任公司 Domestic multifunctional intelligent robot
CN105116920A (en) * 2015-07-07 2015-12-02 百度在线网络技术(北京)有限公司 Intelligent robot tracking method and apparatus based on artificial intelligence and intelligent robot
CN105126355A (en) * 2015-08-06 2015-12-09 上海元趣信息技术有限公司 Child companion robot and child companionship system

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105945995B (en) * 2016-05-26 2018-03-09 三个爸爸家庭智能环境科技(北京)有限公司 Voice-controlled photographing system for robot
CN105945995A (en) * 2016-05-26 2016-09-21 邵鑫 Voice control photographing system for robot
CN105979140A (en) * 2016-06-03 2016-09-28 北京奇虎科技有限公司 Image generation device and image generation method
CN105915801A (en) * 2016-06-12 2016-08-31 北京光年无限科技有限公司 Self-learning method and device capable of improving snap shot effect
CN105915805A (en) * 2016-06-15 2016-08-31 北京光年无限科技有限公司 Photographing method for intelligent robot
CN106203332A (en) * 2016-07-08 2016-12-07 北京光年无限科技有限公司 Method and system for recognizing facial expression changes based on intelligent robot vision
WO2018058264A1 (en) * 2016-09-27 2018-04-05 深圳市大疆创新科技有限公司 Video-based control method, device, and flying apparatus
CN106791606A (en) * 2016-11-25 2017-05-31 深圳哈乐派科技有限公司 Remote monitoring method and device
CN106716282A (en) * 2016-12-17 2017-05-24 深圳前海达闼云端智能科技有限公司 A method of controlling a target, a control apparatus and a control device
CN107127758A (en) * 2017-06-01 2017-09-05 深圳市悠响声学科技有限公司 Automatic identification photographing method and system based on intelligent robot
CN107127758B (en) * 2017-06-01 2020-04-14 深圳市物朗智能科技有限公司 Automatic identification photographing method and system based on intelligent robot
CN108833766A (en) * 2018-03-21 2018-11-16 北京猎户星空科技有限公司 Control method and device for smart device, smart device and storage medium
CN108683840A (en) * 2018-03-28 2018-10-19 深圳臻迪信息技术有限公司 Shooting control method, image capture method and unmanned aerial vehicle terminal
CN108718386A (en) * 2018-08-06 2018-10-30 光锐恒宇(北京)科技有限公司 Method and device for implementing automatic shooting
CN110830705A (en) * 2018-08-08 2020-02-21 深圳市优必选科技有限公司 Robot photographing method, robot, terminal device and storage medium
CN109060826A (en) * 2018-08-16 2018-12-21 大连维德集成电路有限公司 Wind turbine blade detection device operating without shutdown
CN109060826B (en) * 2018-08-16 2021-07-09 大连维德集成电路有限公司 Wind turbine blade detection device operating without shutdown
WO2020063614A1 (en) * 2018-09-26 2020-04-02 上海肇观电子科技有限公司 Smart glasses tracking method and apparatus, and smart glasses and storage medium
US10860165B2 (en) 2018-09-26 2020-12-08 NextVPU (Shanghai) Co., Ltd. Tracking method and apparatus for smart glasses, smart glasses and storage medium

Similar Documents

Publication Publication Date Title
CN105578058A (en) Shooting control method and device for intelligent robot and robot
CN104410883B (en) Mobile wearable contactless interactive system and method
WO2017149868A1 (en) Information processing device, information processing method, and program
CN107660039B (en) Lamp control system for recognizing dynamic gestures
US20230305530A1 (en) Information processing apparatus, information processing method and program
CN104772748B (en) Social robot
EP1375084A1 (en) Robot audiovisual system
WO2018031758A1 (en) Control system and control processing method and apparatus
CN106791420A (en) Shooting control method and device
CN105058389A (en) Robot system, robot control method, and robot
CN110083202A (en) With the multi-module interactive of near-eye display
WO2019104681A1 (en) Image capture method and device
KR101850534B1 (en) System and method for picture taking using IR camera and marker, and application therefor
WO2018108176A1 (en) Robot video call control method, device and terminal
CN109151393A (en) Sound source localization and recognition detection method
CN111726921B (en) Somatosensory interactive light control system
WO2019216016A1 (en) Information processing device, information processing method, and program
US11144751B2 (en) Information processing apparatus and non-transitory computer readable medium to allow operation without contact
US20200329202A1 (en) Image capturing apparatus, control method, and recording medium
US20210233529A1 (en) Imaging control method and apparatus, control device, and imaging device
JP7279646B2 (en) Information processing device, information processing method and program
US11875571B2 (en) Smart hearing assistance in monitored property
EP3115926A1 (en) Method for control using recognition of two-hand gestures
EP3037916A1 (en) Monitoring
CN111105039A (en) Information processing apparatus, control method thereof, and memory

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2016-05-11