CN108994832B - Robot eye system based on RGB-D camera and self-calibration method thereof - Google Patents

Robot eye system based on RGB-D camera and self-calibration method thereof

Info

Publication number
CN108994832B
Authority
CN
China
Prior art keywords
robot
camera
rgb
pose
calibration method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810804650.XA
Other languages
Chinese (zh)
Other versions
CN108994832A (en)
Inventor
李明洋
王家鹏
任明俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jieka Robot Co ltd
Original Assignee
Shanghai Jaka Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jaka Robot Technology Co ltd filed Critical Shanghai Jaka Robot Technology Co ltd
Priority to CN201810804650.XA priority Critical patent/CN108994832B/en
Publication of CN108994832A publication Critical patent/CN108994832A/en
Application granted granted Critical
Publication of CN108994832B publication Critical patent/CN108994832B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/1653Programme controls characterised by the control loop parameters identification, estimation, stiffness, accuracy, error analysis
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/0009Constructional details, e.g. manipulator supports, bases
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot eye system based on an RGB-D camera and a self-calibration method thereof, relating to the technical field of robots and comprising the following steps: (1) the robot is fixed on the table surface through its base and drives the RGB-D camera to move through the connecting piece, acquiring a series of depth maps of the environment while the robot pose is recorded at the same time; (2) three-dimensional reconstruction of the captured scene is performed using the continuous depth maps to obtain the camera pose at which each depth map was captured; (3) the camera pose and the robot pose at the same moment are combined to form a constraint equation on the relative pose between the robot end and the camera; (4) the poses at all moments are combined to construct a system of equations on the relative pose between the robot end and the camera; (5) the system of equations is solved by the Tsai-Lenz method to obtain the relative pose between the robot end and the camera. The invention requires no special markers, uses a simple algorithm, can be used for online calibration, and greatly improves the calibration efficiency of the robot hand-eye system.

Description

Robot eye system based on RGB-D camera and self-calibration method thereof
Technical Field
The invention relates to the technical field of robots, and in particular to a robot eye system based on an RGB-D camera and a self-calibration method thereof.
Background
Against the background of the "China Manufacturing 2025" initiative, Chinese industry increasingly demands flexible production lines and intelligent robots. An important means of making a robot intelligent is to give it the ability to see using machine vision. Machine vision refers to acquiring image information of the environment through a vision sensor, giving the machine a capability of visual perception. Through machine vision, the robot can identify an object and determine its position.
The motion coordinate system of the robot and the coordinate system of the camera are related through "hand-eye calibration", where the end effector of the robot can be regarded as a person's hand and the vision sensor as a person's eye. Robot hand-eye systems are generally divided into two types, eye-in-hand and eye-to-hand: an eye-in-hand system fixes the vision sensor on the robot end effector, so that it moves with the robot end, whereas an eye-to-hand system fixes the vision sensor in the environment, where it does not move with the robot end. The former tends to offer higher flexibility and also allows the robot to act with higher precision.
Traditional robot hand-eye calibration uses high-precision markers such as cubes or checkerboards: corner points are extracted from the image, the relative pose from the camera to the marker is computed by projective geometry, the robot end pose is recorded at the same time, and the relative pose between the robot end and the camera is computed from the multiple groups of data obtained by repeated shooting. However, this method requires a high-precision marker and usually offline calibration, which greatly limits its range of application.
Therefore, those skilled in the art are dedicated to developing a robot hand-eye system based on an RGB-D camera and a self-calibration method thereof that require no special markers, only a certain spatial complexity of the environment around the robot, so that offline or online calibration of the hand-eye system can be performed through algorithmic computation.
Disclosure of Invention
In view of the above defects in the prior art, the technical problem to be solved by the present invention is to overcome the prior art's need for a high-precision marker, which limits the application range of conventional calibration methods, and to remove the restriction to offline calibration that this need imposes.
In order to achieve the above purpose, the invention provides a robot eye system self-calibration method based on an RGB-D camera, comprising the following steps:
Step S1, driving the RGB-D camera mounted at the robot end to move by means of the robot, acquiring a series of depth maps of the surrounding environment, and recording the robot pose information at the same moments;
Step S2, performing three-dimensional reconstruction of the captured scene from the series of continuous depth maps, thereby obtaining the camera pose at which each depth map was captured;
Step S3, combining the camera pose and the robot pose at the same moment into a data pair, and combining any two data pairs from different moments to form a constraint equation on the relative pose between the robot end and the camera;
Step S4, combining the poses at all moments in pairs to construct a large system of equations on the relative pose between the robot end and the camera;
Step S5, solving the system of equations by the Tsai-Lenz algorithm to obtain the relative pose between the robot end and the camera.
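For orientation only, the five steps can be sketched as a short Python pipeline. Everything here is illustrative: the robot/camera interface objects and the helpers reconstruct_and_track, build_motion_pairs and tsai_lenz are hypothetical names for the operations described above (sketches of the latter two are given further below in the detailed description), not part of the patent.

```python
import numpy as np

def hand_eye_self_calibration(robot, camera, trajectory):
    """Sketch of steps S1-S5; robot and camera are assumed interface objects."""
    depth_maps, robot_poses = [], []
    for waypoint in trajectory:                       # S1: move and record
        robot.move_to(waypoint)
        depth_maps.append(camera.capture_depth_frame())
        robot_poses.append(robot.read_pose())         # 4x4 end pose T_bgk
    camera_poses = reconstruct_and_track(depth_maps)  # S2: reconstruction -> T_ck
    As, Bs = build_motion_pairs(robot_poses, camera_poses)  # S3/S4: AX = XB pairs
    return tsai_lenz(As, Bs)                          # S5: relative pose T_gc
```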
The invention also provides a robot hand-eye system based on the RGB-D camera, which comprises a robot, a workbench, a robot base, a rigid connecting piece, the RGB-D camera and a robot end effector; the robot base is arranged on the workbench; the robot is arranged on the robot base; the rigid connecting piece is arranged at the end of the robot; the RGB-D camera is arranged on the rigid connecting piece; the robot end effector is arranged on the rigid connecting piece.
Further, to acquire the robot pose and the RGB-D depth map at the same moment, the RGB-D camera is triggered by hardware or by software to capture a frame of image, while the robot pose is read through a socket connection or the robot API.
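As an illustration of the socket route only, the snippet below assumes a controller that answers a TCP connection with a JSON-encoded state packet; the address, port, and message format are invented for the example and would have to be replaced by the actual robot's protocol or vendor API.

```python
import json
import socket

def read_robot_pose(host="192.168.1.10", port=30003):
    """Hypothetical sketch: read one state packet and return the end pose."""
    with socket.create_connection((host, port), timeout=1.0) as s:
        msg = s.recv(4096)              # assumes one packet per connection
    state = json.loads(msg)             # assumed JSON-encoded state format
    return state["tcp_pose"]            # e.g. [x, y, z, rx, ry, rz]
```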
Furthermore, the three-dimensional reconstruction of the captured scene from a series of continuous depth maps fuses the multi-frame depth maps through a TSDF Volume model, reconstructs the scene in three dimensions, and estimates the RGB-D camera pose corresponding to each depth map.
Further, combining two data pairs from different moments to form a constraint equation on the relative pose of the robot end and the camera means that the data of any two moments are selected from the continuous multi-moment data and combined; the resulting constraint equation is the most basic pose matrix transformation, with no other prior assumptions.
Further, solving the system of equations by the Tsai-Lenz method to obtain the relative pose of the robot end and the camera treats the rotation and translation parts of the relative transformation matrix between the robot end and the camera separately: the change of pose is represented by Rodrigues parameters, the rotation vector is solved first, and the rotation matrix is then recovered from it.
Further, in step S1 the RGB-D camera position is set as the origin of the reference coordinate system, and its orientation matrix is the identity matrix.
Further, in the moving process of the robot, the following formula is satisfied between any two poses:
H_{g_j g_i} H_{gc} = H_{gc} H_{c_j c_i}

where the subscript g denotes the robot end coordinate system, c denotes the camera coordinate system, and i, j are the indices of the recorded poses; for example, c_i denotes the camera coordinate system of the i-th recorded pose, and H_{c_j c_i} transforms a point in space from its coordinates in the camera coordinate system at pose i to its coordinates in the camera coordinate system at pose j.
Further, the robot base is rigidly connected with the workbench, the RGB-D camera is rigidly connected with the robot end effector, and the robot can carry the RGB-D camera and the end effector to move together.
Further, the worktable environment refers to all the environmental information around the vision sensor, including the worktable surface.
Compared with the prior art, the calibration method of the invention requires no special markers, uses a simple algorithm, can be used for online calibration, and greatly improves the efficiency of robot hand-eye calibration.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a schematic diagram of a hand-eye calibration basic model according to a preferred embodiment of the present invention;
fig. 2 is a schematic structural diagram of a hand-eye calibration system according to a preferred embodiment of the invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
In the drawings, structurally identical elements are represented by like reference numerals, and structurally or functionally similar elements are represented by like reference numerals throughout the several views. The size and thickness of each component shown in the drawings are arbitrarily illustrated, and the present invention is not limited to the size and thickness of each component. The thickness of the components may be exaggerated where appropriate in the figures to improve clarity.
The invention adopts the following technical scheme:
the RGB-D camera is fixed to the robot end effector so that the RGB-D camera can move with the robot end and keep the connection between the two rigid and free of relative movement. The robot is operated to move and depth map data of the environment surrounding the robot is acquired by a depth sensor. And performing real-time three-dimensional reconstruction on the environment around the robot by taking the posture of the camera when the first depth map is acquired as a reference coordinate system, and simultaneously recording the posture of the robot when the depth map data is acquired each time.
The pose of the camera relative to the reference coordinate system when the depth map is acquired each time can be estimated while the environment around the robot is reconstructed.
For this form of robotic eye system, the basic model is shown in fig. 1.
In the moving process of the robot, the following formula is satisfied between any two poses:
H_{g_j g_i} H_{gc} = H_{gc} H_{c_j c_i}    (1)

where the subscript g denotes the robot end coordinate system, c denotes the camera coordinate system, and i, j are the indices of the recorded poses; for example, c_i denotes the camera coordinate system of the i-th recorded pose, and H_{c_j c_i} transforms a point in space from its coordinates in the camera coordinate system at pose i to its coordinates in the camera coordinate system at pose j. Since the relative pose between the robot end and the camera is fixed, the subscripts i, j of H_{gc} are omitted. Let A = H_{g_j g_i}, B = H_{c_j c_i}, X = H_{gc}; then equation (1) can be simplified as:
AX=XB (2)
Splitting the above formula and writing it in block-matrix form:

\begin{bmatrix} R_A & t_A \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_X & t_X \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_X & t_X \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_B & t_B \\ 0 & 1 \end{bmatrix}    (3)

\begin{bmatrix} R_A R_X & R_A t_X + t_A \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_X R_B & R_X t_B + t_X \\ 0 & 1 \end{bmatrix}    (4)
where R and t are the rotation and translation components of the respective transformation matrices; comparing the two sides of the equation yields the equation set:

\begin{cases} R_A R_X = R_X R_B \\ R_A t_X + t_A = R_X t_B + t_X \end{cases}    (5)
for the solution of equation set (5), the present invention employs a Tsai two-step solution, commonly referred to as the Tsai-Lenz method, which is one of the most widely used robotic eye calibration methods.
As shown in fig. 2, the robot hand-eye system of the present invention comprises: a robot base 1, a robot 2, a rigid connection 3 between the robot end and the vision sensor, a robot end effector 4, which is the actuator with which the robot performs specific tasks, an RGB-D vision sensor 5, and a worktable environment 6; the worktable environment referred to in this patent is all the environmental information around the vision sensor, including the worktable surface.
(1) The robot 2 is operated to return to its initial position so that the main part of the worktable environment 6 is within the field of view of the RGB-D camera 5. The camera position at this moment is set as the origin of the reference coordinate system, and its orientation matrix is the identity matrix. The working space of the visual reconstruction is initialized; the reconstruction model adopted by the invention is the TSDF (Truncated Signed Distance Function) model.
(2) The robot 2 is operated to drive the RGB-D camera 5 along a set trajectory (for example a Z-shaped route); the scene 6 is shot continuously during the motion, yielding depth maps I_k, while the pose T_{bgk} of the robot end 4 is recorded at the same moments.
(3) For each obtained depth map I_k, the data in the TSDF Volume is projected by the ray-casting method from the pose at time k-1 to obtain I'_{k-1}, and the relative pose T_{ck,ck-1} between I'_{k-1} and I_k is computed by the ICP algorithm. The pose of the camera at time k is therefore
T_{ck} = T_{c,k-1} T_{ck,ck-1}    (6)
(4) According to the camera pose T_{ck} at time k, the depth map I_k is projected into three-dimensional space and then fused into the TSDF Volume according to the definition of the TSDF model.
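As one possible realization of steps (1)-(4) — an assumption, since the patent does not name a library — the sketch below uses the open-source Open3D package. For simplicity it replaces the patent's frame-to-model ray-casting ICP with Open3D's frame-to-frame RGB-D odometry, which yields the same incremental pose T_{ck,ck-1} of equation (6) for small motions; frame_files and the camera intrinsics are assumptions of the example.

```python
import numpy as np
import open3d as o3d

intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.005, sdf_trunc=0.02,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.Gray32)

T_ck = np.eye(4)              # camera pose in the reference frame (identity at k = 0)
camera_poses, prev_rgbd = [], None
for color_file, depth_file in frame_files:    # assumed list of image-path pairs
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.io.read_image(color_file), o3d.io.read_image(depth_file))
    if prev_rgbd is not None:
        ok, T_rel, _ = o3d.pipelines.odometry.compute_rgbd_odometry(
            rgbd, prev_rgbd, intrinsic, np.eye(4),
            o3d.pipelines.odometry.RGBDOdometryJacobianFromHybridTerm(),
            o3d.pipelines.odometry.OdometryOption())
        if ok:
            T_ck = T_ck @ T_rel               # chain incremental motion, cf. eq. (6)
    camera_poses.append(T_ck.copy())
    volume.integrate(rgbd, intrinsic, np.linalg.inv(T_ck))  # extrinsic = world-to-camera
    prev_rgbd = rgbd
```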
(5) By cycling through steps (2)-(4), a series of camera poses T_{ck} and corresponding robot end poses T_{bgk} are obtained. Grouping them in pairs, e.g. a group consisting of the data at time i and time j, each group must satisfy
T_{bgj}^{-1} T_{bgi} T_{gc} = T_{gc} T_{cj}^{-1} T_{ci}    (7)
(6) Assuming a total of N poses are recorded, N(N-1)/2 groups of data can be formed, each of which yields an equation of the form (7). These N(N-1)/2 equations constitute a large system of equations.
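The pairing in step (6) is mechanical; a minimal sketch (variable names assumed) that forms the N(N-1)/2 relative-motion pairs of equation (7):

```python
from itertools import combinations
import numpy as np

def build_motion_pairs(T_bg, T_c):
    """T_bg: 4x4 robot end poses T_bgk; T_c: 4x4 camera poses T_ck."""
    As, Bs = [], []
    for i, j in combinations(range(len(T_bg)), 2):
        As.append(np.linalg.inv(T_bg[j]) @ T_bg[i])  # H_gjgi
        Bs.append(np.linalg.inv(T_c[j]) @ T_c[i])    # H_cjci
    return As, Bs
```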
(7) After multiple groups of data have been obtained, the system of equations is solved by the Tsai-Lenz method, finally yielding the relative pose T_{gc} between the camera and the robot end.
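In practice the final solve is also available off the shelf: OpenCV's calibrateHandEye implements the Tsai method among others. A hedged usage sketch, assuming pose lists T_bg and T_c as in the pairing sketch above:

```python
import cv2
import numpy as np

R_g2b = [T[:3, :3] for T in T_bg]
t_g2b = [T[:3, 3] for T in T_bg]
# calibrateHandEye expects the reference/target pose expressed in the camera
# frame, i.e. the inverse of each camera pose T_ck
R_t2c = [np.linalg.inv(T)[:3, :3] for T in T_c]
t_t2c = [np.linalg.inv(T)[:3, 3] for T in T_c]
R_gc, t_gc = cv2.calibrateHandEye(
    R_g2b, t_g2b, R_t2c, t_t2c, method=cv2.CALIB_HAND_EYE_TSAI)
# (R_gc, t_gc) form the camera-to-end transform T_gc
```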
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A robot eye system self-calibration method based on an RGB-D camera is characterized by comprising the following steps:
Step S1, driving the RGB-D camera mounted at the robot end to move by means of the robot, acquiring a series of depth maps of the surrounding environment, and recording the robot pose information at the same moments;
Step S2, performing three-dimensional reconstruction of the captured scene from the series of continuous depth maps, thereby obtaining the camera pose at which each depth map was captured;
Step S3, combining the camera pose and the robot pose at the same moment into a data pair, and combining any two data pairs from different moments to form a constraint equation on the relative pose between the robot end and the camera;
Step S4, combining the poses at all moments in pairs to construct a large system of equations on the relative pose between the robot end and the camera;
Step S5, solving the system of equations by the Tsai-Lenz algorithm to obtain the relative pose between the robot end and the camera.
2. A robot hand-eye system based on an RGB-D camera is characterized by comprising a robot, a workbench, a robot base, a rigid connecting piece, the RGB-D camera and a robot end effector; the robot base is arranged on the workbench; the robot is arranged on the robot base; the rigid connecting piece is arranged at the tail end of the robot; the RGB-D camera is arranged on the rigid connecting piece; the robot end effector is disposed on the rigid link.
3. The self-calibration method of the robot eye system based on the RGB-D camera as claimed in claim 1, wherein the robot pose and the depth map of the RGB-D camera at the same moment are acquired by triggering the RGB-D camera, by hardware or by software, to capture a frame of image, and the robot pose is obtained through a socket or a robot API.
4. The self-calibration method for the robot eye system based on the RGB-D camera as claimed in claim 1, wherein the captured scene is three-dimensionally reconstructed from a series of continuous depth maps by fusing the multi-frame depth maps through a TSDF Volume model, reconstructing the scene in three dimensions, and estimating the RGB-D camera pose corresponding to each depth map.
5. The RGB-D camera based robot eye system self-calibration method of claim 1, wherein, to combine two data pairs from different moments into a constraint equation on the relative pose of the robot end and the camera, the data of any two moments are selected from the continuous multi-moment data and combined, and the resulting constraint equation is the most basic pose matrix transformation, with no other prior assumptions.
6. The RGB-D camera based robot eye system self-calibration method of claim 1, wherein solving the system of equations by the Tsai-Lenz method to obtain the relative pose of the robot end and the camera solves the rotation and translation parts of the relative transformation matrix between the robot end and the camera separately: the change of pose is represented by Rodrigues parameters, the rotation vector is solved first, and the rotation matrix is then recovered.
7. The RGB-D camera based robot eye system self-calibration method of claim 1, wherein in step S1 the RGB-D camera position is set as the origin of the reference coordinate system, and its orientation matrix is the identity matrix.
8. The self-calibration method for the robot eye system based on the RGB-D camera as claimed in claim 1, wherein the following formula is satisfied between any two poses of the robot during the moving process:
H_{g_j g_i} H_{gc} = H_{gc} H_{c_j c_i}

where the subscript g denotes the robot end coordinate system, c denotes the camera coordinate system, and i, j are the indices of the recorded poses; accordingly, c_i denotes the camera coordinate system of the i-th recorded pose, and H_{c_j c_i} transforms a point in space from its coordinates in the camera coordinate system at pose i to its coordinates in the camera coordinate system at pose j.
9. The RGB-D camera based robotic hand-eye system of claim 2, wherein the robotic base is rigidly connected to the stage, the RGB-D camera being rigidly connected to the robotic end effector, the robot being capable of moving with the RGB-D camera and the end effector.
10. The RGB-D camera based robotic hand-eye system of claim 2, wherein the worktable environment is all the environmental information around the vision sensor, including the worktable surface.
CN201810804650.XA 2018-07-20 2018-07-20 Robot eye system based on RGB-D camera and self-calibration method thereof Active CN108994832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810804650.XA CN108994832B (en) 2018-07-20 2018-07-20 Robot eye system based on RGB-D camera and self-calibration method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810804650.XA CN108994832B (en) 2018-07-20 2018-07-20 Robot eye system based on RGB-D camera and self-calibration method thereof

Publications (2)

Publication Number Publication Date
CN108994832A CN108994832A (en) 2018-12-14
CN108994832B true CN108994832B (en) 2021-03-02

Family

ID=64596742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810804650.XA Active CN108994832B (en) 2018-07-20 2018-07-20 Robot eye system based on RGB-D camera and self-calibration method thereof

Country Status (1)

Country Link
CN (1) CN108994832B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109531577B (en) * 2018-12-30 2022-04-19 北京猎户星空科技有限公司 Mechanical arm calibration method, device, system, medium, controller and mechanical arm
CN110281231B (en) * 2019-03-01 2020-09-29 浙江大学 Three-dimensional vision grabbing method for mobile robot for unmanned FDM additive manufacturing
CN110276803B (en) * 2019-06-28 2021-07-20 首都师范大学 Formalization method and device for camera pose estimation, electronic equipment and storage medium
CN110238831B (en) * 2019-07-23 2020-09-18 青岛理工大学 Robot teaching system and method based on RGB-D image and teaching device
CN110480658B (en) * 2019-08-15 2022-10-25 同济大学 Six-axis robot control system integrating vision self-calibration
CN110580725A (en) * 2019-09-12 2019-12-17 浙江大学滨海产业技术研究院 Box sorting method and system based on RGB-D camera
CN111452048B (en) * 2020-04-09 2023-06-02 亚新科国际铸造(山西)有限公司 Calibration method and device for relative spatial position relation of multiple robots
CN111474932B (en) * 2020-04-23 2021-05-11 大连理工大学 Mobile robot mapping and navigation method integrating scene experience
CN111890355B (en) * 2020-06-29 2022-01-11 北京大学 Robot calibration method, device and system
CN113479442B (en) * 2021-07-16 2022-10-21 上海交通大学烟台信息技术研究院 Device and method for realizing intelligent labeling of unstructured objects on assembly line

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102566577A (en) * 2010-12-29 2012-07-11 沈阳新松机器人自动化股份有限公司 Method for simply and easily calibrating industrial robot
CN106384353A (en) * 2016-09-12 2017-02-08 佛山市南海区广工大数控装备协同创新研究院 Target positioning method based on RGBD
CN107253190A (en) * 2017-01-23 2017-10-17 梅卡曼德(北京)机器人科技有限公司 The device and its application method of a kind of high precision machines people trick automatic camera calibration
CN107818554A (en) * 2016-09-12 2018-03-20 索尼公司 Message processing device and information processing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100468857B1 (en) * 2002-11-21 2005-01-29 삼성전자주식회사 Method for calibrating hand/eye using projective invariant shape descriptor for 2-dimensional shape


Also Published As

Publication number Publication date
CN108994832A (en) 2018-12-14

Similar Documents

Publication Publication Date Title
CN108994832B (en) Robot eye system based on RGB-D camera and self-calibration method thereof
CN112132894B (en) Mechanical arm real-time tracking method based on binocular vision guidance
CN107160364B (en) Industrial robot teaching system and method based on machine vision
JP2019508273A (en) Deep-layer machine learning method and apparatus for grasping a robot
Ren et al. Domain randomization for active pose estimation
Schröder et al. Real-time hand tracking using synergistic inverse kinematics
Hebert et al. Combined shape, appearance and silhouette for simultaneous manipulator and object tracking
JP2022542239A (en) Autonomous Task Execution Based on Visual Angle Embedding
JP2021167060A (en) Robot teaching by human demonstration
CN113327281A (en) Motion capture method and device, electronic equipment and flower drawing system
Gratal et al. Visual servoing on unknown objects
JP2015071206A (en) Control device, robot, teaching data generation method, and program
CN111300384B (en) Registration system and method for robot augmented reality teaching based on identification card movement
CN113751981B (en) Space high-precision assembling method and system based on binocular vision servo
CN110909644A (en) Method and system for adjusting grabbing posture of mechanical arm end effector based on reinforcement learning
CN114851201B (en) Mechanical arm six-degree-of-freedom visual closed-loop grabbing method based on TSDF three-dimensional reconstruction
Schröder et al. Real-time hand tracking with a color glove for the actuation of anthropomorphic robot hands
CN113379849A (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
Kazakidi et al. Vision-based 3D motion reconstruction of octopus arm swimming and comparison with an 8-arm underwater robot
Gratal et al. Virtual visual servoing for real-time robot pose estimation
CN110722547B (en) Vision stabilization of mobile robot under model unknown dynamic scene
Cai et al. 6D image-based visual servoing for robot manipulators with uncalibrated stereo cameras
CN115194774A (en) Binocular vision-based control method for double-mechanical-arm gripping system
Walck et al. Automatic observation for 3d reconstruction of unknown objects using visual servoing
Wen et al. Data-driven 6d pose tracking by calibrating image residuals in synthetic domains

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200240 building 6, 646 Jianchuan Road, Minhang District, Shanghai

Applicant after: SHANGHAI JAKA ROBOTICS Ltd.

Address before: 200120 floor 1, building 1, No. 251, Yaohua Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant before: SHANGHAI JAKA ROBOTICS Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: Building 6, 646 Jianchuan Road, Minhang District, Shanghai 201100

Patentee after: Jieka Robot Co.,Ltd.

Address before: 200240 building 6, 646 Jianchuan Road, Minhang District, Shanghai

Patentee before: SHANGHAI JAKA ROBOTICS Ltd.

CP03 Change of name, title or address