WO2017000773A1 - Camera assembly device for a robot and photographing and tracking method thereof - Google Patents

Camera assembly device for a robot and photographing and tracking method thereof Download PDF

Info

Publication number
WO2017000773A1
WO2017000773A1 (PCT/CN2016/085758)
Authority
WO
WIPO (PCT)
Prior art keywords
camera assembly
robot body
assembly device
robot
camera
Prior art date
Application number
PCT/CN2016/085758
Other languages
English (en)
French (fr)
Inventor
陈晴
蔡明峻
刘生华
Original Assignee
芋头科技(杭州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 芋头科技(杭州)有限公司 filed Critical 芋头科技(杭州)有限公司
Publication of WO2017000773A1 publication Critical patent/WO2017000773A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B30/00Camera modules comprising integrated lens units and imaging units, specially adapted for being embedded in other devices, e.g. mobile phones or vehicles
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D3/00Control of position or direction
    • G05D3/12Control of position or direction using feedback

Definitions

  • the present invention relates to the field of electronic device technologies, and in particular, to a camera assembly device for a robot and a method for photographing and tracking the same.
  • the camera is the eye of the robot: it can track and photograph the person or object that the user asks the robot to track, providing convenience for tracking investigations in many fields.
  • the existing anthropomorphic robot has an eye (camera) integrated with the body, without considering the case where the eye (camera) is separated from the robot body. This makes searching for items in gaps or small spaces inconvenient, prevents autonomous automatic tracking once a suspicious person appears, and if the camera falls off under severe vibration or in other unexpected circumstances, the robot cannot complete the user-specified operation normally.
  • the present invention provides a camera assembly device for a robot and a shooting and tracking method thereof. A camera assembly device is constructed that comprises a device body, a plurality of driving mechanisms, a laser ranging scanning sensor, a signal receiving/transmitting device, and a power supply device. Mainly through voice control, the device body carries the camera away from the robot body and moves to a designated position; or, relying on the artificial intelligence of the robot, the device automatically recognizes an abnormal situation in the outside world and automatically detaches from the robot body.
  • the detached camera assembly device automatically moves and tracks the suspicious person and/or object, captures enough information, saves or transmits the video information to the owner, and then automatically returns to the robot body.
  • the technical solution is specifically:
  • a camera assembly device wherein the device is detachably disposed on a robot body, and the device includes:
  • a device body having a top surface and a bottom surface opposite the top surface
  • a camera embedded in a sidewall of the device body to obtain an image of an object around the device body
  • a laser ranging scanning sensor embedded in the device body and protruding from the top surface to obtain a distance between the device body and an adjacent object thereof;
  • a signal receiving/transmitting device disposed on a top surface of the device body and the robot body to perform data interaction between the device body and the robot body;
  • a driving structure partially embedded in the device body to drive the device body to move
  • a control module is disposed in the device body, and is respectively connected to the camera, the laser ranging scanning sensor, the signal receiving/transmitting device located on the device body, and the driving device;
  • the control module receives a control command issued by the robot body through the signal receiving/transmitting device, and controls the imaging action of the camera and the driving action of the driving device according to the control command; after the camera completes the control command, the control module controls the signal receiving/transmitting device to transmit the captured information to the user.
  • the device further comprises a power supply device for supplying electrical energy for short-time exploration, positioning and photographing after the camera assembly device leaves the robot body.
  • the driving structure is composed of a plurality of universal wheels and a plurality of driving wheels, wherein the universal wheels can roll in any direction, and the driving wheels precisely change the position and direction of the device through the rotational speeds of the two wheels.
  • the laser ranging scanning sensor is a two-dimensional laser ranging scanning sensor.
  • a shooting and tracking method wherein the method is based on the camera assembly device described above, the method comprising:
  • the camera assembly device is separated from the robot body
  • the control module controls the camera and the signal receiving/transmitting device to complete the operation
  • the camera assembly device transmits the operation result to the owner by video and/or voice after the operation is completed.
  • the camera assembly device is disengaged from the robot body by a voice command and/or automatically disengages from the robot body when the camera assembly device senses an abnormality.
  • the robot body transmits the received instruction including the target position and the photographing orientation to the camera assembly device, and the camera assembly device performs a moving operation according to the target position and the photographing orientation;
  • the photographing task is executed, and the execution result is transmitted to the robot body or the user;
  • after the camera executes the instruction, it waits for a new shooting task at the position where it last performed shooting.
  • condition that the camera assembly device automatically returns to the robot body comprises:
  • the remaining power of the power supply device falls below twice the power required by the operation task contained in the operation command received by the signal receiving/transmitting device mounted on the camera assembly device;
  • the camera assembly device receives a regression command
  • the camera assembly device loses communication with the robot body.
  • the camera assembly device moves to a position where the error near the robot body does not exceed 100 mm;
  • the signal receiving/transmitting device mounted on the device body receives the signal emitted by the robot body, and the laser ranging scanning sensor calculates the orientation of the robot body relative to the device body;
  • the driving mechanism turns the camera assembly device to the main body direction and moves forward;
  • the laser ranging scanning sensor repeatedly calculates the orientation of the camera assembly device, turns the camera assembly device toward the robot body, and moves it until the camera assembly device docks with the robot body.
  • by adopting this technical solution, the device body can carry the camera away from the robot main body, better completing the tracking and shooting of narrow areas;
  • after completing the operation task, the device automatically finds the robot main body and automatically installs itself in the original position, which is convenient to implement and effectively improves the efficiency of the robot's tracking and shooting;
  • in addition, when the camera assembly device detects an unexpected situation, it automatically detaches from the robot body, completes the operation, and returns to the robot body, effectively protecting the camera device and improving the tracking efficiency.
  • FIG. 1(a) is a schematic structural view of a front surface of a camera assembly device according to an embodiment of the present invention
  • FIG. 1(b) is a schematic structural view showing a bottom surface of a camera assembly device according to an embodiment of the present invention
  • FIG. 2 is a flow chart of a method for tracking and photographing a camera assembly device according to an embodiment of the present invention
  • FIG. 3 is a flow chart of a specific method for completing operation of a camera assembly device according to an embodiment of the present invention
  • the present invention introduces a camera assembly device 6, which can automatically track an object and return information about the tracked object to the robot body or user.
  • the camera assembly device 6 includes a device body 7; a camera 4; a laser ranging scanning sensor 1; a signal receiving/transmitting device; a driving structure; and a control module (since the control module is placed inside the camera assembly device and mainly performs control functions, it is not shown in the figure), wherein:
  • the device body 7 has a top surface and a bottom surface opposite to the top surface;
  • the camera 4 is embedded on the side wall of the device body to acquire an image of an object around the device body;
  • the laser ranging scanning sensor 1 is embedded on the device body and protrudes from the top surface to obtain a distance between the device body and its adjacent object;
  • the signal receiving/transmitting device is disposed on the top surface of the device body and on the robot body to complete data interaction between the device body and the robot body;
  • the driving structure is partially embedded in the device body to drive the device body to move.
  • the control module is disposed in the device body and is respectively connected to the camera, the laser ranging scanning sensor, the signal receiving/transmitting device and the driving device located on the device body;
  • the control module receives the control command issued by the robot body through the signal receiving/transmitting device, and controls the imaging action of the camera and the driving action of the driving device according to the control command; after the camera completes the control command, the control module controls the signal receiving/transmitting device to transmit the captured information to the user.
  • the camera assembly device 6 further includes a power supply unit that provides electrical energy for short-term exploration, positioning, and photographing of the camera assembly device after it leaves the robot body.
  • the driving structure is composed of a plurality of universal wheels 3 and a plurality of driving wheels 5, wherein the universal wheels can roll in any direction, and the driving wheels precisely change the position and direction of the device through the rotational speeds of the two wheels.
  • the laser ranging scanning sensor 1 is a two-dimensional laser ranging scanning sensor.
  • FIG. 1(a) is a schematic structural view of a front surface of a camera assembly device according to an embodiment of the present invention
  • FIG. 1(b) is a schematic structural view of a bottom surface of a camera assembly device according to an embodiment of the present invention
  • FIG. 1(a) and FIG. 1(b) are exploded views of the camera assembly device
  • FIG. 1(a) shows the component structure provided in the upper region (i.e., the front surface) of the camera assembly device, while FIG. 1(b) shows the component structure provided in the lower region (i.e., the back surface) of the camera assembly device
  • flipping the structure in FIG. 1(a) by 180° yields the structure shown in FIG. 1(b).
  • in a preferred embodiment, the front surface of the device body 7 may be provided with a laser ranging scanning sensor 1, a signal receiving/transmitting device, and a camera 4, wherein the signal receiving/transmitting device may include a plurality of infrared sensors 2, and the bottom surface of the device body 7 may be provided with at least two universal wheels 3 and two driving wheels 5. It is noted that the power supply system and the signal receiving/transmitting device are not illustrated in the figure.
  • the present invention discloses a photographing and tracking method, which is based on the camera assembly device disclosed by the present invention; the shooting and tracking method includes:
  • Step S1 The camera assembly device 6 is separated from the robot body, wherein the conditions for the camera assembly device 6 to be separated from the robot body include:
  • the camera assembly device 6 receives the voice control command, and the robot body parses the user's shooting request into an instruction with a target position and a shooting direction, and transmits the command to the camera assembly device;
  • the camera assembly device 6 senses an abnormality and automatically disengages from the robot body.
  • the signal receiving/transmitting device sends the voice control command containing the target position and the shooting task to the control module, and the control module controls the camera assembly device 6 to perform the tasks contained in the voice control command, including searching for the objects contained in the voice command and/or shooting abnormal actions and clear images.
  • the operation result is sent to the owner through the video and/or voice reply.
  • the shooting task is executed after the camera assembly device moves to the target position and rotates to the designated shooting orientation. The shooting task has two forms of data return: the first is to compress the captured video data and return it in real time, which leaves the choice of when to take a photo to the user; the second is to return a single captured photo, which allows high-definition photos to be transferred, with the timing of the shot decided by the camera assembly device.
  • if in step S1 the camera assembly device automatically perceives an abnormality, the camera assembly device automatically detects whether there is an abnormality around the robot body; if so, it automatically photographs the abnormality around the robot body, and the control module controls the signal receiving/transmitting device to send the discovered abnormality information to the robot body or the user.
  • Step S2: the control module controls the camera and the signal receiving/transmitting device to complete the operation;
  • the control module controls the camera assembly device 6 to perform the movement and shooting tasks according to the received voice control command including the target position and the shooting direction.
  • the control module controls the camera assembly device to move and track the abnormality.
  • the shooting task is automatically executed, and the captured information is sent to the robot body or the user.
  • step S3 the camera assembly device 6 automatically returns to the robot body.
  • the remaining power of the camera assembly device 6 falls below twice the power required by the operation task contained in the voice control command issued by the user;
  • the user sends a return command
  • the signal receiving/transmitting device of the camera assembly device 6 receives the return command
  • the control module controls the camera assembly device 6 to return and access the robot body
  • the communication between the camera assembly device 6 and the robot body is lost.
  • this includes the case where the signal receiving/transmitting device cannot detect, or cannot communicate with, the signal receiving/transmitting device mounted on the robot body.
  • before the camera assembly device 6 is separated from the robot body, the robot body, the camera assembly device 6, and the user must first all be connected to an environment with wireless network coverage.
  • the robot body opens the TCP/IP listening service and receives an access request from the camera assembly device.
  • the specific method for the camera assembly device 6 to complete the operation includes:
  • the robot body transmits the received instruction including the target position and the photographing orientation to the camera assembly device 6, and the control module controls the camera assembly device 6 to perform the moving operation according to the target position and the photographing orientation included in the command;
  • the shooting task is executed, and after the shooting task is executed, the execution result is transmitted to the robot body through the signal receiving/transmitting device;
  • after the camera assembly device performs the operations included in the command, it waits in place for a new shooting task.
  • the method for automatically returning the camera assembly device 6 to the robot body is as follows:
  • the camera assembly device is moved by SLAM to a position where the error near the robot body does not exceed 100 mm, where SLAM (Simultaneous Localization and Mapping, also called CML, Concurrent Mapping and Localization) refers to real-time localization and map building; the SLAM problem can be described as follows: a robot starts moving from an unknown position in an unknown environment, localizes itself during movement based on position estimates and the map, and simultaneously builds an incremental map, thereby achieving autonomous localization and navigation.
  • the signal receiving/transmitting device includes a calculating function for calculating an orientation of the robot body relative to the device body;
  • the orientation of the camera assembly device is repeatedly calculated, and the camera assembly device is turned toward the robot body and moved until the camera assembly device docks with the robot body.
  • the present invention constructs a camera assembly device comprising a body, a plurality of driving mechanisms, a laser ranging scanning sensor, a signal receiving/transmitting device, and a power supply device; mainly through voice control, the camera assembly device carries the camera away from the robot body and moves to a specified position, or relies on the robot's artificial intelligence to automatically detect an abnormal situation in the outside world and automatically detach from the robot body; the detached camera assembly device automatically moves and tracks the suspicious person and/or object, capturing sufficient information to save or send the video information to the owner.
  • the camera assembly device can carry the camera out of the robot body to better complete the tracking shooting of narrow areas, and after completing the operation task it automatically finds the robot body and automatically reinstalls itself in the original position, which is convenient to implement and effectively improves the efficiency of the robot's tracking shooting.
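The detach, operate, and return cycle summarized in the bullets above can be sketched as a minimal state machine. This is an illustrative Python sketch, not code from the patent; the `Phase` names and trigger flags are assumptions chosen to mirror steps S1 to S3.

```python
from enum import Enum, auto

class Phase(Enum):
    DOCKED = auto()      # installed on the robot body
    OPERATING = auto()   # detached, moving/tracking/shooting
    RETURNING = auto()   # heading back to the robot body

def next_phase(phase, voice_command=False, abnormality=False,
               task_done=False, docked=False):
    """Advance the detach -> operate -> return cycle by one event."""
    if phase is Phase.DOCKED and (voice_command or abnormality):
        return Phase.OPERATING   # S1: detach on voice command or abnormality
    if phase is Phase.OPERATING and task_done:
        return Phase.RETURNING   # S3: head back once the task is complete
    if phase is Phase.RETURNING and docked:
        return Phase.DOCKED      # reinstalled in the original position
    return phase                 # otherwise remain in the current phase
```

In this sketch, `task_done` would also be set by any of the return conditions described above (low battery, a return command, or lost communication).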

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Studio Devices (AREA)
  • Accessories Of Cameras (AREA)

Abstract

A camera assembly device for a robot and a photographing and tracking method thereof. A camera assembly device (6) is constructed comprising a device body (7), a driving mechanism, a laser ranging scanning sensor (1), a signal receiving/transmitting device, and a power supply device. Through voice control, or when the robot's artificial intelligence automatically recognizes and senses an abnormality in the outside world, the device body (7) carries the camera (4) away from the robot body, moves to a designated position, automatically moves and tracks suspicious persons and/or objects, captures sufficient information, and saves or sends the captured video information to the owner. With this camera assembly device and its photographing and tracking method, the device body (7) can carry the camera (4) away from the robot body to better complete tracking and photographing in narrow areas; after completing the operation task it automatically finds the robot body and automatically reinstalls itself in the original position, which is convenient to implement and effectively improves the efficiency of the robot's tracking and photographing.

Description

Camera assembly device for a robot and photographing and tracking method thereof
Technical Field
The present invention relates to the field of electronic device technologies, and in particular to a camera assembly device for a robot and a photographing and tracking method thereof.
Background Art
With the development of science and technology, robots are applied in many fields, especially in places that are dangerous or difficult for humans to operate in. For a robot, the camera is its eyes: it can track and photograph the person or object that the user asks the robot to track, providing convenience for tracking investigations in many fields.
However, existing anthropomorphic robots have eyes (cameras) integrated with the body, without considering the case where the eyes (cameras) are separated from the robot body. This makes it very inconvenient to search for items in gaps or narrow spaces, prevents autonomous automatic tracking once a suspicious person appears, and in severe vibration or other unexpected situations the camera may even fall off, so that the robot cannot complete the user-specified operation normally.
Therefore, how to enable the robot's eyes (camera) to track and photograph targets more conveniently has become a major challenge for those skilled in the art.
Summary of the Invention
In view of the above problems, the present invention proposes a camera assembly device for a robot and a photographing and tracking method thereof. A camera assembly device is constructed comprising a device body, several driving mechanisms, a laser ranging scanning sensor, a signal receiving/transmitting device, and a power supply device. Mainly through voice control, the device body carries the camera away from the robot body and moves to a designated position; or, relying on the robot's artificial intelligence, it automatically senses and recognizes an abnormal situation in the outside world and automatically detaches from the robot body. The detached camera assembly device automatically moves to track suspicious persons and/or objects, captures sufficient information, saves or sends the video information to the owner, and then the camera assembly device automatically returns to the robot body. The technical solution is specifically:
A camera assembly device, wherein the device is detachably disposed on a robot body, the device comprising:
a device body having a top surface and a bottom surface opposite the top surface;
a camera embedded in a side wall of the device body to acquire images of objects around the device body;
a laser ranging scanning sensor embedded in the device body and protruding from the top surface to acquire the distance between the device body and objects adjacent to it;
a signal receiving/transmitting device disposed on the top surface of the device body and on the robot body to carry out data interaction between the device body and the robot body;
a driving structure partially embedded in the device body to drive the device body to move;
a control module disposed in the device body and connected respectively to the camera, the laser ranging scanning sensor, the signal receiving/transmitting device on the device body, and the driving device;
wherein the control module receives, through the signal receiving/transmitting device, a control command issued by the robot body, and controls the photographing action of the camera and the driving action of the driving device according to the control command; after the camera completes the control command, the control module controls the signal receiving/transmitting device to transmit the captured information to the user.
In the above device, the device further comprises a power supply device that provides electrical energy for short-term exploration, positioning, and photographing after the camera assembly device leaves the robot body.
In the above device, the driving structure consists of several universal wheels and several driving wheels, wherein the universal wheels can roll in any direction, and the driving wheels precisely change the position and direction of the device through the rotational speeds of the two wheels.
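The patent only states that the two driving wheels change the device's position and direction through their rotational speeds; the standard differential-drive kinematics behind that claim can be sketched as follows (an illustrative Python sketch; the wheel base and time step values are assumptions, not from the patent):

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base=0.2, dt=0.1):
    """Advance a differential-drive pose (x, y, heading theta) one time step.

    v_left / v_right are the two driving-wheel speeds (m/s): their mean
    moves the device forward, their difference turns it.
    """
    v = (v_left + v_right) / 2.0             # forward speed
    omega = (v_right - v_left) / wheel_base  # turn rate (rad/s)
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```

Equal wheel speeds translate the device without turning; opposite speeds spin it in place, which is how the drive can "precisely change the position and direction" as described.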
In the above device, the laser ranging scanning sensor is a two-dimensional laser ranging scanning sensor.
A photographing and tracking method, wherein the method is based on the above camera assembly device, the method comprising:
the camera assembly device detaches from the robot body;
the control module controls the camera and the signal receiving/transmitting device to complete the operation;
the device automatically returns to the robot body and automatically installs itself in its original position on the robot body.
In the above method, after completing the operation the camera assembly device sends the operation result to the owner by video and/or voice.
In the above method, the camera assembly device is detached from the robot body by a voice command, and/or the camera assembly device automatically detaches from the robot body when it senses an abnormality.
In the above method, the camera assembly device communicates with the robot body wirelessly.
In the above method, the camera assembly device communicates with the user wirelessly.
In the above method, the specific method of completing the operation comprises:
the robot body passes the received command containing the target position and photographing orientation to the camera assembly device, and the camera assembly device performs a moving operation according to the target position and photographing orientation;
after the camera assembly device moves to the target position and rotates to the photographing orientation, it executes the photographing task and passes the execution result to the robot body or the user;
after the camera executes the command, it waits for a new photographing task at the position where it last performed photographing.
In the above method, the conditions for the camera assembly device to automatically return to the robot body include:
the remaining power of the power supply device falls below twice the power required by the operation task contained in the operation command received by the signal receiving/transmitting device mounted on the camera assembly device;
and/or the camera assembly device receives a return command;
and/or communication between the camera assembly device and the robot body is lost.
In the above method, the method by which the camera assembly device returns to the robot body is specifically:
the camera assembly device moves to a position near the robot body with an error of no more than 100 mm;
the signal receiving/transmitting device mounted on the device body receives a signal emitted by the robot body, and the laser ranging scanning sensor calculates the orientation of the robot body relative to the device body;
the driving mechanism turns the camera assembly device toward the robot body and moves it forward;
the laser ranging scanning sensor repeatedly calculates the orientation of the camera assembly device, turns the camera assembly device toward the robot body, and moves it until the camera assembly device docks with the robot body.
Advantages of the present invention and the beneficial effects that can be achieved:
By adopting this technical solution, the device body can carry the camera away from the robot body, better completing tracking and photographing in narrow areas; after completing the operation task, it automatically finds the robot body and automatically installs itself in the original position, which is convenient to implement and effectively improves the efficiency of the robot's tracking and photographing. In addition, when the camera assembly device perceives an unexpected situation, it automatically detaches from the robot body, completes the operation, and returns to the robot body, effectively protecting the camera device and improving tracking efficiency.
Brief Description of the Drawings
The present invention and its features, shapes, and advantages will become more apparent by reading the detailed description of non-limiting embodiments made with reference to the following drawings. The same reference numerals indicate the same parts throughout the drawings. The drawings are not necessarily drawn to scale; the emphasis is on illustrating the gist of the present invention.
FIG. 1(a) is a schematic structural view of the front of the camera assembly device in an embodiment of the present invention;
FIG. 1(b) is a schematic structural view of the bottom of the camera assembly device in an embodiment of the present invention;
FIG. 2 is a flow chart of the tracking and photographing method of the camera assembly device in an embodiment of the present invention;
FIG. 3 is a flow chart of the specific method by which the camera assembly device completes an operation in an embodiment of the present invention;
FIG. 4 is a flow chart of the specific method by which the camera assembly device returns to the robot body in an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is further described below with reference to the drawings and specific embodiments, which are not intended to limit the present invention.
The present invention introduces a camera assembly device 6 that can automatically track an object and return information about the tracked object to the robot body or the user. The camera assembly device 6 includes a device body 7; a camera 4; a laser ranging scanning sensor 1; a signal receiving/transmitting device; a driving structure; and a control module (since the control module is placed inside the camera assembly device and mainly performs control functions, it is not shown in the figures), wherein:
the device body 7 has a top surface and a bottom surface opposite the top surface;
the camera 4 is embedded in a side wall of the device body to acquire images of objects around the device body;
the laser ranging scanning sensor 1 is embedded in the device body and protrudes from the top surface to acquire the distance between the device body and objects adjacent to it;
the signal receiving/transmitting device is disposed on the top surface of the device body and on the robot body to carry out data interaction between the device body and the robot body;
the driving structure is partially embedded in the device body to drive the device body to move;
the control module is disposed in the device body and connected respectively to the camera, the laser ranging scanning sensor, the signal receiving/transmitting device on the device body, and the driving device;
wherein the control module receives, through the signal receiving/transmitting device, a control command issued by the robot body, and controls the photographing action of the camera and the driving action of the driving device according to the control command; after the camera completes the control command, the control module controls the signal receiving/transmitting device to transmit the captured information to the user.
As a preferred embodiment of the present invention, the camera assembly device 6 further includes a power supply device that provides electrical energy for short-term exploration, positioning, and photographing after the camera assembly device leaves the robot body.
As a preferred embodiment of the present invention, the driving structure consists of several universal wheels 3 and several driving wheels 5, wherein the universal wheels can roll in any direction, and the driving wheels precisely change the position and direction of the device through the rotational speeds of the two wheels.
As a preferred embodiment of the present invention, the laser ranging scanning sensor 1 is a two-dimensional laser ranging scanning sensor.
FIG. 1(a) is a schematic structural view of the front of the camera assembly device in an embodiment of the present invention, and FIG. 1(b) is a schematic structural view of the bottom of the camera assembly device in an embodiment of the present invention; that is, FIG. 1(a) and FIG. 1(b) are exploded views of the camera assembly device, where FIG. 1(a) shows the component structure provided in the upper region of the camera assembly device (i.e., its front) and FIG. 1(b) shows the component structure provided in the lower region of the camera assembly device (i.e., its back); flipping FIG. 1(a) by 180° yields the structure shown in FIG. 1(b).
Referring to FIG. 1(a) and FIG. 1(b), in a preferred embodiment of the present invention, the front of the device body 7 may be provided with the laser ranging scanning sensor 1, the signal receiving/transmitting device, and the camera 4, where the signal receiving/transmitting device may include several infrared sensors 2, and the bottom of the device body 7 may be provided with at least two universal wheels 3 and two driving wheels 5. It is worth noting that the power supply system and the signal receiving/transmitting device are not illustrated in the figure.
Referring to FIG. 2, the present invention discloses a photographing and tracking method based on the camera assembly device disclosed by the present invention. The photographing and tracking method includes:
Step S1: the camera assembly device 6 detaches from the robot body, where the conditions for the camera assembly device 6 to detach from the robot body include:
the camera assembly device 6 receives a voice control command: the robot body parses the user's photographing request into a command containing a target position and photographing orientation and passes it to the camera assembly device;
and/or the camera assembly device 6 senses an abnormality and automatically detaches from the robot body.
If in step S1 a voice control command causes the camera assembly device 6 to detach from the robot body to complete the tracking and photographing task, the signal receiving/transmitting device sends the received voice control containing the target position and photographing task to the control module, and the control module controls the camera assembly device 6 to complete the tasks contained in the voice control command, including searching for the object contained in the voice command and/or photographing abnormal actions and clear images. After completing the operation, the camera assembly device 6 sends the operation result to the owner by video and/or voice reply. After the camera assembly device 6 moves to the target position and rotates to the designated photographing orientation, it executes the photographing task. There are two forms of data return for the photographing task: the first is to compress the captured video data and return it in real time, which leaves the choice of when to take a photo to the user; the second is to return a single captured photo, which allows high-definition photos to be transferred, with the timing of the shot decided by the camera assembly device.
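The two data-return forms described above can be illustrated with a small sketch (hypothetical Python, not from the patent; `compress` stands in for a real video codec and simply thins the frame data):

```python
def compress(frame, quality=0.3):
    """Stand-in for a real codec: keep every int(1/quality)-th sample."""
    return frame[::int(1 / quality)]

def package_result(frames, mode):
    """Package captured frames per the two return forms.

    "stream": compress every captured frame for real-time return,
              leaving the choice of photo timing to the user.
    "photo":  return only the single latest full-quality frame,
              timing chosen by the camera assembly device itself.
    """
    if mode == "stream":
        return [compress(f) for f in frames]
    if mode == "photo":
        return [frames[-1]]
    raise ValueError(f"unknown mode: {mode}")
```

The trade-off mirrors the text: "stream" sacrifices quality for real-time user control, while "photo" transfers one high-definition image.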
If in step S1 the camera assembly device automatically perceives an abnormality, the camera assembly device automatically detects whether there is an abnormality around the robot body; if so, the camera assembly device automatically photographs the abnormality around the robot body, and the control module controls the signal receiving/transmitting device to send the discovered abnormality information to the robot body or the user.
Step S2: the control module controls the camera and the signal receiving/transmitting device to complete the operation;
The control module controls the camera assembly device 6 to perform the moving and photographing tasks according to the target position and photographing orientation contained in the received voice control command. When the camera has detached because it intelligently sensed an abnormality, the control module controls the camera assembly device to move and track the occurrence of the abnormality, automatically executes the photographing task, and sends the captured information to the robot body or the user.
Step S3: the camera assembly device 6 automatically returns to the robot body.
The camera assembly device 6 automatically returns to the robot body when any one of the following conditions is met:
the remaining power of the camera assembly device 6 falls below twice the power required by the operation task contained in the voice control command issued by the user;
the user issues a return command, the signal receiving/transmitting device of the camera assembly device 6 receives the return command, and the control module controls the camera assembly device 6 to return to and dock with the robot body;
communication between the camera assembly device 6 and the robot body is lost, including the case where the signal receiving/transmitting device cannot detect, or cannot communicate with, the signal receiving/transmitting device mounted on the robot body.
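The three return conditions can be combined in a single check, sketched below (an illustrative Python sketch; the function and parameter names are assumptions, not from the patent):

```python
def should_return(remaining_power, task_power,
                  return_command_received=False, robot_body_reachable=True):
    """True when any of the three return conditions is met:
    remaining power below twice the task's required power,
    a return command received, or communication with the robot body lost."""
    low_battery = remaining_power < 2 * task_power
    comms_lost = not robot_body_reachable
    return low_battery or return_command_received or comms_lost
```

Keeping a 2x power margin is what leaves enough energy for the device to travel back and dock rather than stranding itself mid-task.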
Before the camera assembly device 6 detaches from the robot body, the robot body, the camera assembly device 6, and the user must first all be connected to an environment with wireless network coverage.
The robot body starts a TCP/IP listening service and receives access requests from the camera assembly device.
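The listening handshake can be sketched with Python's standard `socket` module (a minimal illustration; the `b"ACCESS"`/`b"OK"` message format is an assumption, since the patent does not specify a wire protocol):

```python
import socket
import threading

def start_listener(host="127.0.0.1", port=0):
    """Robot-body side: open a TCP/IP listening service and accept one
    access request from the camera assembly device."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))   # port 0: let the OS pick a free port
    server.listen(1)

    def serve():
        conn, _addr = server.accept()
        with conn:
            if conn.recv(1024) == b"ACCESS":   # the device's access request
                conn.sendall(b"OK")            # acknowledge the connection
        server.close()

    threading.Thread(target=serve, daemon=True).start()
    return server.getsockname()[1]             # actual port chosen
```

The camera assembly device would then open a client connection (e.g. `socket.create_connection`) to this port and send its access request over the shared wireless network.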
As a preferred embodiment of the present invention, referring to the structure shown in FIG. 3, the specific method by which the camera assembly device 6 completes an operation includes:
the robot body passes the received command containing the target position and photographing orientation to the camera assembly device 6, and the control module controls the camera assembly device 6 to perform the moving operation according to the target position and photographing orientation contained in the command;
under the control of the control module, the camera assembly device 6 moves to the target position, rotates to the photographing orientation contained in the command, and executes the photographing task; after the photographing task is executed, the execution result is passed to the robot body through the signal receiving/transmitting device;
after executing the operations contained in the command, the camera assembly device stands by in place and waits for a new photographing task.
As a preferred embodiment of the present invention, referring to the structure shown in FIG. 4, the method by which the camera assembly device 6 automatically returns to the robot body is specifically:
SLAM is used to move the camera assembly device to a position near the robot body with an error of no more than 100 mm, where SLAM (Simultaneous Localization and Mapping, also called CML, Concurrent Mapping and Localization) means real-time localization and map building; the SLAM problem can be described as follows: a robot starts moving from an unknown position in an unknown environment, localizes itself during movement based on position estimates and the map, and simultaneously builds an incremental map on the basis of its own localization, thereby achieving autonomous localization and navigation.
As a preferred embodiment of the present invention, the signal receiving/transmitting device includes a calculation function that calculates the orientation of the robot body relative to the device body;
the driving mechanism turns the device body toward the robot body and moves it forward;
the orientation of the camera assembly device is repeatedly calculated, and the camera assembly device is turned toward the robot body and moved until the camera assembly device docks with the robot body.
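The repeated "compute bearing, turn, advance" docking loop can be sketched geometrically (an illustrative Python sketch; in the patent the bearing comes from the laser ranging scanning sensor, whereas here it is computed from assumed coordinates, and the step size is arbitrary):

```python
import math

def approach(x, y, robot_xy=(0.0, 0.0), step=0.05, tolerance=0.1):
    """Repeatedly compute the bearing to the robot body, turn toward it,
    and advance one step, until within the docking tolerance (100 mm)."""
    rx, ry = robot_xy
    path = [(x, y)]
    while math.hypot(rx - x, ry - y) > tolerance:
        bearing = math.atan2(ry - y, rx - x)   # recomputed every iteration
        x += step * math.cos(bearing)
        y += step * math.sin(bearing)
        path.append((x, y))
    return path
```

Recomputing the bearing on every iteration, as the text describes, keeps the device converging on the robot body even if individual moves are imprecise.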
In summary, the present invention constructs a camera assembly device comprising a body, several driving mechanisms, a laser ranging scanning sensor, a signal receiving/transmitting device, and a power supply device. Mainly through voice control, the camera assembly device carries the camera away from the robot body and moves to a designated position; or, relying on the robot's artificial intelligence, it automatically senses and recognizes abnormal situations in the outside world and automatically detaches from the robot body. The detached camera assembly device automatically moves and tracks suspicious persons and/or objects, captures sufficient information, and saves or sends the video information to the owner. By adopting this technical solution, the camera assembly device can carry the camera away from the robot body to better complete tracking and photographing in narrow areas; after completing the operation task, it automatically finds the robot body and automatically installs itself in the original position, which is convenient to implement and effectively improves the efficiency of the robot's tracking and photographing. Those skilled in the art will understand that variations can be implemented by combining the prior art with the above embodiments; such variations do not affect the substance of the present invention and are not described in detail here.
Preferred embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the specific embodiments above; devices and structures not described in detail should be understood as being implemented in a manner common in the art. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the methods and technical content disclosed above to make many possible changes and modifications to the technical solution of the present invention, or modify it into equivalent embodiments of equivalent changes, without affecting the substance of the present invention. Therefore, any simple modification, equivalent change, or modification made to the above embodiments in accordance with the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the scope of protection of the technical solution of the present invention.

Claims (12)

  1. A camera assembly device, characterized in that it is detachably disposed on a robot body, the device comprising:
    a device body having a top surface and a bottom surface opposite the top surface;
    a camera embedded in a side wall of the device body to acquire images of objects around the device body;
    a laser ranging scanning sensor embedded in the device body and protruding from the top surface to acquire the distance between the device body and objects adjacent to it;
    a signal receiving/transmitting device disposed on the top surface of the device body and on the robot body to carry out data interaction between the device body and the robot body;
    a driving structure partially embedded in the device body to drive the device body to move;
    a control module disposed in the device body and connected respectively to the camera, the laser ranging scanning sensor, the signal receiving/transmitting device on the device body, and the driving device;
    wherein the control module receives, through the signal receiving/transmitting device, a control command issued by the robot body, and controls the photographing action of the camera and the driving action of the driving device according to the control command; after the camera completes the control command, the control module controls the signal receiving/transmitting device to transmit the captured information to the user.
  2. The device according to claim 1, characterized in that the device further comprises a power supply device that provides electrical energy for short-term exploration, positioning, and photographing after the device body leaves the robot body.
  3. The device according to claim 1, characterized in that the driving structure consists of several universal wheels and several driving wheels, wherein the universal wheels can roll in any direction, and the driving wheels precisely change the position and direction of the device through the rotational speeds of the two wheels.
  4. The device according to claim 1, characterized in that the laser ranging scanning sensor is a two-dimensional laser ranging scanning sensor.
  5. A photographing and tracking method, characterized in that the method is based on the device according to any one of claims 1 to 4, the method comprising:
    the camera assembly device detaches from the robot body;
    the control module controls the camera and the signal receiving/transmitting device to complete the operation;
    the device automatically returns to the robot body and automatically installs itself in its original position on the robot body.
  6. The method according to claim 5, characterized in that after the operation is completed the camera assembly device sends the operation result to the owner by video and/or voice.
  7. The method according to claim 5, characterized in that the camera assembly device is detached from the robot body by a voice command and/or the camera assembly device automatically detaches from the robot body when it senses an abnormality.
  8. The method according to claim 5, characterized in that the camera assembly device communicates with the robot body wirelessly.
  9. The method according to claim 5, characterized in that the camera assembly device communicates with the user wirelessly.
  10. The method according to claim 5, characterized in that the specific method of completing the operation comprises:
    the robot body passes the received command containing the target position and photographing orientation to the camera assembly device, and the camera assembly device performs a moving operation according to the target position and photographing orientation;
    after the camera assembly device moves to the target position and rotates to the photographing orientation, it executes the photographing task and passes the execution result to the robot body or the user;
    after the camera executes the command, it waits for a new photographing task at the position where it last performed photographing.
  11. The method according to claim 5, characterized in that the conditions for the camera assembly device to automatically return to the robot body include:
    the remaining power of the power supply device falls below twice the power required by the operation task contained in the operation command received by the signal receiving/transmitting device mounted on the camera assembly device;
    and/or the camera assembly device receives a return command;
    and/or communication between the camera assembly device and the robot body is lost.
  12. The method according to claim 5, characterized in that the method by which the camera assembly device returns to the robot body is specifically:
    the camera assembly device moves to a position near the robot body with an error of no more than 100 mm;
    the signal receiving/transmitting device mounted on the device body receives a signal emitted by the robot body, and the laser ranging scanning sensor calculates the orientation of the robot body relative to the device body;
    the driving mechanism turns the device body toward the robot body and moves it forward;
    the laser ranging scanning sensor repeatedly calculates the orientation of the device body, turns the device body toward the robot body, and moves it until the device body docks with the robot body.
PCT/CN2016/085758 2015-06-30 2016-06-14 Camera assembly device for a robot and photographing and tracking method thereof WO2017000773A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510387047.2 2015-06-30
CN201510387047.2A CN106325306B (zh) 2015-06-30 2015-06-30 Camera assembly device for a robot and photographing and tracking method thereof

Publications (1)

Publication Number Publication Date
WO2017000773A1 true WO2017000773A1 (zh) 2017-01-05

Family

ID=57607844

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/085758 WO2017000773A1 (zh) 2015-06-30 2016-06-14 Camera assembly device for a robot and photographing and tracking method thereof

Country Status (4)

Country Link
CN (1) CN106325306B (zh)
HK (1) HK1231577A1 (zh)
TW (1) TWI625970B (zh)
WO (1) WO2017000773A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110296656A (zh) * 2019-07-15 2019-10-01 三峡大学 Machine-vision-based automatic switching-operation device and method for switchgear
CN111970436A (zh) * 2020-08-03 2020-11-20 南京智能高端装备产业研究院有限公司 Lidar-based teaching recording and broadcasting method and system
CN117739994A (zh) * 2024-02-20 2024-03-22 广东电网有限责任公司阳江供电局 Underwater target recognition and tracking method and system for a vision robot

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6988704B2 (ja) * 2018-06-06 2022-01-05 トヨタ自動車株式会社 Sensor control device, object search system, object search method, and program
CN113114942B (zh) * 2021-04-14 2023-01-13 维沃移动通信有限公司 Photographing method and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992015838A1 (en) * 1991-03-07 1992-09-17 Fanuc Ltd Detection position correction system
CN1593859A (zh) * 2004-07-14 2005-03-16 华南理工大学 Security patrol robot
CN101084817A (zh) * 2007-04-26 2007-12-12 复旦大学 Home multifunctional small service robot with an open intelligent computing architecture
CN104618634A (zh) * 2015-02-12 2015-05-13 深圳市欧珀通信软件有限公司 Mobile terminal and method for calibrating camera rotation angle
CN104679001A (zh) * 2015-01-25 2015-06-03 无锡桑尼安科技有限公司 Rectangular target detection method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU767561B2 (en) * 2001-04-18 2003-11-13 Samsung Kwangju Electronics Co., Ltd. Robot cleaner, system employing the same and method for reconnecting to external recharging device
KR100850462B1 (ko) * 2006-03-03 2008-08-07 삼성테크윈 주식회사 Guard robot
CN101896957B (zh) * 2008-10-15 2013-06-19 松下电器产业株式会社 Light projection device
TW201040886A (en) * 2009-05-06 2010-11-16 Taiwan Shin Kong Security Co Ltd Security robot system
TW201245931A (en) * 2011-05-09 2012-11-16 Asustek Comp Inc Robotic device
US9025856B2 (en) * 2012-09-05 2015-05-05 Qualcomm Incorporated Robot control information
CN203169878U (zh) * 2013-03-27 2013-09-04 上海理工大学 Remote-controlled firefighting robot
CN204131634U (zh) * 2014-07-15 2015-01-28 深圳奇沃智联科技有限公司 Robot monitoring system with image recognition and automatic patrol route setting
CN204260673U (zh) * 2014-11-05 2015-04-15 东莞市万锦电子科技有限公司 Floor cleaning robot and floor cleaning robot system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992015838A1 (en) * 1991-03-07 1992-09-17 Fanuc Ltd Detection position correction system
CN1593859A (zh) * 2004-07-14 2005-03-16 华南理工大学 Security patrol robot
CN101084817A (zh) * 2007-04-26 2007-12-12 复旦大学 Home multifunctional small service robot with an open intelligent computing architecture
CN104679001A (zh) * 2015-01-25 2015-06-03 无锡桑尼安科技有限公司 Rectangular target detection method
CN104618634A (zh) * 2015-02-12 2015-05-13 深圳市欧珀通信软件有限公司 Mobile terminal and method for calibrating camera rotation angle

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110296656A (zh) * 2019-07-15 2019-10-01 三峡大学 Machine-vision-based automatic switching-operation device and method for switchgear
CN110296656B (zh) * 2019-07-15 2021-10-19 三峡大学 Machine-vision-based automatic switching-operation device and method for switchgear
CN111970436A (zh) * 2020-08-03 2020-11-20 南京智能高端装备产业研究院有限公司 Lidar-based teaching recording and broadcasting method and system
CN117739994A (zh) * 2024-02-20 2024-03-22 广东电网有限责任公司阳江供电局 Underwater target recognition and tracking method and system for a vision robot
CN117739994B (zh) * 2024-02-20 2024-04-30 广东电网有限责任公司阳江供电局 Underwater target recognition and tracking method and system for a vision robot

Also Published As

Publication number Publication date
CN106325306A (zh) 2017-01-11
TWI625970B (zh) 2018-06-01
CN106325306B (zh) 2019-07-16
TW201715877A (zh) 2017-05-01
HK1231577A1 (zh) 2017-12-22

Similar Documents

Publication Publication Date Title
WO2017000773A1 (zh) Camera assembly device for a robot and photographing and tracking method thereof
JP3667281B2 (ja) Robot cleaning system using a mobile communication network
JP2002172584A (ja) Mobile robot system using an RF module
US8214082B2 (en) Nursing system
JP5122770B2 (ja) Moving object guidance system capable of video recognition
WO2019128070A1 (zh) Target tracking method and apparatus, mobile device, and storage medium
KR20180055571A (ko) Mobile robot system, mobile robot, and method of controlling a mobile robot system
CN102902271A (zh) Binocular-vision-based robot target recognition and grasping system and method
US20070265799A1 (en) System for determining three dimensional position of radio transmitter
JP2018085630A5 (zh)
CN101653662A (zh) Robot
WO2018228256A1 (zh) System and method for determining an indoor task target position through image recognition
JP2017114270A (ja) Unmanned aerial vehicle with a specific-beacon tracking function and tracking-beacon transmission unit
WO2019001237A1 (zh) Mobile electronic device and method in the mobile electronic device
WO2018228254A1 (zh) Mobile electronic device and method in the mobile electronic device
CN105773619A (zh) Electrical control system for implementing grasping behavior of a humanoid robot, and humanoid robot
KR20150097049A (ko) Autonomous serving robot system using a natural UI
CN110744544A (zh) Vision-based grasping method for a service robot, and service robot
WO2018228258A1 (zh) Mobile electronic device and method in the mobile electronic device
CN113084776B (zh) Vision-based multi-sensor-fusion intelligent epidemic-prevention robot and system
Dunbabin et al. Experiments with cooperative control of underwater robots
JP2007303913A (ja) Foreign object detection device, robot device using the same, foreign object detection method, and foreign object detection program
US10901412B2 (en) Moving body, control method, and recording medium
JP2021047724A (ja) Work system, autonomous work machine, method of controlling an autonomous work machine, and program
KR101891312B1 (ko) Remote-driven robot and method of controlling the remote-driven robot using a user terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16817137

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16817137

Country of ref document: EP

Kind code of ref document: A1