CN113450414A - Camera calibration method, device, system and storage medium - Google Patents

Camera calibration method, device, system and storage medium

Info

Publication number
CN113450414A
CN113450414A (application number CN202010214083.XA)
Authority
CN
China
Prior art keywords
robot
positioning
image
coordinates
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010214083.XA
Other languages
Chinese (zh)
Inventor
朱凯
张友群
彭忠东
冯雪涛
井连杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lianhe Technology Co ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010214083.XA
Publication of CN113450414A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The embodiments of the application provide a camera calibration method, device, system and storage medium. The system comprises a camera to be calibrated, a robot and a server, the server being in communication connection with the camera and the robot respectively. The robot performs autonomous positioning according to a robot coordinate system during movement and provides the server with its positioning coordinates at a target position. The camera photographs the robot and provides the server with a first image captured when the robot is located at the target position. The server converts the positioning coordinates into scene coordinates based on the mapping relation between the robot coordinate system and the scene coordinate system; determines the image coordinates of the target position in the first image according to an image coordinate system; and determines calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position so as to calibrate the camera. With the embodiments of the application, automatic calibration of the camera can be realized, and the efficiency and precision of camera calibration are improved.

Description

Camera calibration method, device, system and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a camera calibration method, device, system, and storage medium.
Background
Camera calibration is a key basis for implementing machine vision applications. The accuracy of the camera calibration affects the accuracy of the machine vision application.
At present, cameras are usually calibrated manually: calibration personnel must perform a large amount of manual measurement, manual debugging and other work, so the camera calibration efficiency is very low and the calibration precision is not ideal.
Disclosure of Invention
Aspects of the present application provide a camera calibration method, device, system and storage medium, so as to achieve automatic calibration of a camera and improve calibration efficiency and precision.
The embodiment of the application provides a camera calibration method, which comprises the following steps:
acquiring a positioning coordinate when the robot moves to a target position, wherein the positioning coordinate is generated by the robot performing autonomous positioning according to a robot coordinate system;
converting the positioning coordinates into scene coordinates based on a mapping relation between a robot coordinate system and a scene coordinate system;
according to an image coordinate system, determining image coordinates of the target position in a first image of the robot shot by a camera when the robot is positioned at the target position;
and determining calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position so as to calibrate the camera.
The embodiment of the present application further provides a camera calibration method, which is applicable to a robot, and includes:
receiving a navigation instruction sent by a control terminal;
moving in the scene where the camera is located according to the navigation instruction;
in the moving process, performing autonomous positioning according to a self coordinate system to generate positioning data;
and providing the positioning data to a server, so that the server can determine calibration parameters of the camera according to the positioning data and calibrate the camera.
The embodiment of the present application further provides a camera calibration system, including: a camera to be calibrated, a robot and a server, wherein the server is in communication connection with the camera and the robot respectively;
the robot is used for carrying out autonomous positioning according to a robot coordinate system in the moving process and providing positioning coordinates of the robot at the target position to the server;
the camera is used for shooting the robot and providing a first image obtained when the robot is located at the target position to the server;
the server is used for converting the positioning coordinates into scene coordinates based on the mapping relation between the robot coordinate system and the scene coordinate system; determining image coordinates of the target position in the first image according to an image coordinate system; and determining calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position so as to calibrate the camera.
The embodiment of the application also provides a computing device, which comprises a memory, a processor and a communication component;
the memory is to store one or more computer instructions;
the processor, coupled with the memory and the communication component, to execute the one or more computer instructions to:
acquiring a positioning coordinate when the robot moves to a target position through the communication assembly, wherein the positioning coordinate is generated by the robot performing autonomous positioning according to a robot coordinate system;
converting the positioning coordinates into scene coordinates based on a mapping relation between a robot coordinate system and a scene coordinate system;
according to an image coordinate system, determining image coordinates of the target position in a first image of the robot shot by a camera when the robot is positioned at the target position;
and determining calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position so as to calibrate the camera.
An embodiment of the present application further provides a robot, including: a memory, a processor, and a communications component;
the memory is to store one or more computer instructions;
the processor, coupled with the memory and the communication component, to execute the one or more computer instructions to:
receiving a navigation instruction sent by a control terminal through the communication assembly;
moving in the scene where the camera is located according to the navigation instruction;
in the moving process, performing autonomous positioning according to a self coordinate system to generate positioning data;
and providing the positioning data to a server through the communication assembly so that the server can calculate calibration parameters of the camera according to the positioning data and calibrate the camera.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the aforementioned camera calibration method.
In the embodiment of the application, an innovative camera calibration scheme is provided, a robot is used as a calibration object in a field, and a target position for calibrating a camera can be quickly and conveniently determined by controlling the robot to move in the field; the robot can be used for autonomous positioning, so that the scene coordinates of the target position in the field can be accurately determined; and then, the calibration parameters of the camera can be determined according to the image coordinates and the scene coordinates of the robot in the image shot by the camera, so that the calibration of the camera is realized. Therefore, in the embodiment of the application, the automatic calibration of the camera can be realized, and the efficiency and the precision of the camera calibration are improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic structural diagram of a camera calibration system according to an exemplary embodiment of the present application;
FIG. 1b is a schematic logic diagram of a solution of a camera calibration system according to an exemplary embodiment of the present application;
fig. 2 is a schematic structural diagram of a robot according to an exemplary embodiment of the present application;
FIG. 3 is a logic diagram of a method for establishing an image coordinate mapping relationship between a top surface and a bottom surface of a robot at a target position according to an exemplary embodiment of the present application;
fig. 4 is a schematic flow chart of a camera calibration method according to another exemplary embodiment of the present application;
FIG. 5 is a schematic flow chart diagram illustrating another camera calibration method according to another exemplary embodiment of the present application;
FIG. 6 is a schematic block diagram of a computing device according to yet another embodiment of the present application;
fig. 7 is a schematic structural diagram of a robot according to yet another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, a camera is usually calibrated manually, which not only makes the calibration efficiency very low but also leaves the calibration precision unsatisfactory. To address these technical problems, some embodiments of the present application provide an innovative camera calibration scheme: a robot is used as the calibration object in the field, and the target position for camera calibration can be determined quickly and conveniently by controlling the robot to move in the field; the robot can perform autonomous positioning, so the scene coordinates of the target position in the field can be accurately determined; the calibration parameters of the camera can then be determined from the image coordinates and scene coordinates of the robot in the image captured by the camera, thereby realizing calibration of the camera. Therefore, in the embodiments of the application, automatic calibration of the camera can be realized, and the efficiency and precision of camera calibration are improved.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1a is a schematic structural diagram of a camera calibration system according to an exemplary embodiment of the present application. Fig. 1b is a schematic logic diagram of a solution of a camera calibration system according to an exemplary embodiment of the present application. Referring to fig. 1a and 1b, the camera calibration system includes a camera 10 to be calibrated, a robot 20, and a server 30, wherein the server 30 is in communication connection with the camera 10 and the robot 20, respectively.
The camera calibration system provided by this embodiment can be applied to various application scenarios in which camera calibration is required; for example, camera calibration can be performed in various indoor places such as shopping malls and supermarkets, homes or enterprises. The present embodiment does not limit the application scenario.
As shown in fig. 1a, a scene may include one or more cameras 10, and this embodiment may implement calibration of any camera 10 in the scene. In practice, the calibration operations for different cameras 10 may be independent of each other. For convenience of description, hereinafter, the technical solution will be described with one camera 10 in the scene as the calibration object, but it should be understood that the camera calibration solution provided by the present embodiment can be applied to any one camera 10 in the scene.
In the present embodiment, the robot 20 is located in the scene where the camera 10 is located, and can move in the scene. In this embodiment, the robot is a device capable of automatically executing work, and can accept human commands and run a preprogrammed program.
In this embodiment, the robot 20 may be a ground robot, such as a device that moves on the ground by means of wheels or mechanical legs. The robot 20 may also be a non-ground based robot such as a drone, a device with a floating mobile unit, or other device capable of moving off the ground.
During the movement, the robot 20 can perform autonomous positioning, thereby generating positioning coordinates of any track point in the movement track thereof. Robot 20 may provide positioning data generated during the movement to server 30.
The robot 20 may establish its own coordinate system, referred to herein as the robot coordinate system, i.e., the coordinate system of the robot 20 itself. The robot coordinate system is a coordinate system dedicated to the autonomous positioning of the robot 20, and is independent of the scene coordinate system and the image coordinate system mentioned later. In practical applications, the robot coordinate system may use the movement starting point of the robot 20 as the origin, which is not limited in this embodiment.
Based on this, the robot 20 can perform autonomous positioning according to the robot coordinate system during the movement. Since the robot 20 performs autonomous positioning based on actual moving processes, the generated positioning data can accurately reflect the real position of the robot 20.
In this embodiment, the robot 20 may perform autonomous positioning by using SLAM (Simultaneous Localization and Mapping) technology. Of course, the present embodiment is not limited thereto, and the robot 20 may also perform autonomous positioning using other positioning technologies.
During movement of the robot 20 within the field, the camera 10 may photograph the robot 20. In general, the imaging field of view of the camera 10 is limited, and in the present embodiment, it is possible to detect whether the robot 20 is located within the imaging field of view of the camera 10 by a technique such as object detection, and the camera 10 may be controlled to perform imaging only when it is detected that the robot 20 is located within the imaging field of view. Of course, this is not essential, and the camera 10 may also continuously shoot, which is not limited in this embodiment.
The camera 10 may provide the captured image to the server 30.
In this embodiment, part or all of the track points in the moving track of the robot 20 may be used as target positions. A target position refers to a track point involved in the calculation of the calibration parameters; the number of target positions may be one or more, and is not limited in this embodiment.
The robot 20 may provide at least its positioning coordinates at the target position to the server 30.
For the camera 10, at least a first image taken when the robot 20 is located at the target position may be provided to the server 30.
Of course, the robot 20 may also provide the positioning coordinates of other track points to the server 30, and the camera 10 may also provide other images to the server 30, which is not limited in this embodiment.
Accordingly, the server 30 can obtain at least two aspects of data for the target location: the positioning coordinates provided by the robot 20 and the image provided by the camera 10.
For the server 30, the positioning coordinates may be converted into scene coordinates based on a mapping relationship between the robot coordinate system and the scene coordinate system; determining image coordinates of the target position in the first image according to an image coordinate system; and determining calibration parameters of the camera 10 according to the scene coordinates and the image coordinates of the target position so as to calibrate the camera 10.
In terms of physical implementation, the server 30 may be a conventional server, a cloud host, a virtual center, or other server devices, and the server devices mainly include a processor, a hard disk, a memory, a system bus, and the like, and are similar to a general computer architecture.
In this embodiment, the calibration parameters of the camera 10 may include a mapping relationship between a scene coordinate system and an image coordinate system.
The image coordinate system is the basis for positioning in the image captured by the camera 10, and the image coordinates are used to represent the position in the image.
The scene coordinate system is a coordinate system corresponding to a scene in which the camera 10 is located. The scene coordinate system is a basis for positioning in the scene map, and the scene coordinates are used for representing positions in the scene map. The scene map may be a plan view or a three-dimensional view of the scene, or the like. In practice, the scene map is usually known, and accordingly, the scene coordinate system is also known.
Based on this, server 30 may determine the image coordinates of robot 20 in the first image as the image coordinates of the target position according to the image coordinate system.
The server 30 may further convert the positioning coordinates of the target position into scene coordinates according to a preset association relationship between the scene coordinate system and the robot coordinate system, thereby obtaining the scene coordinates of the target position.
In this way, the server 30 can determine the mapping relationship between the image coordinate system and the scene coordinate system according to the image coordinate and the scene coordinate of the target position.
In this process, the server 30 may determine a mapping relationship between the image coordinate system and the robot coordinate system based on the image coordinates and the positioning coordinates of the target position. On the basis, the robot coordinate system can be used as an intermediate coordinate system, and the mapping relation between the image coordinate system and the scene coordinate system can be determined according to the mapping relation between the scene coordinate system and the robot coordinate system and the mapping relation between the image coordinate system and the robot coordinate system.
Determining the mapping relationship between two coordinate systems amounts to calculating that mapping when the coordinates of at least one position are known in both coordinate systems; several calculation schemes exist for this problem and are not described in detail herein.
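As an illustrative sketch only (not the only calculation scheme contemplated here), when the scene coordinates and the image coordinates of several target positions lie in a common ground plane, the mapping relationship can be modeled as a planar homography and estimated with a general-purpose library such as OpenCV; the point values below are assumptions made purely for the example.

```python
import numpy as np
import cv2

# Scene coordinates (e.g. metres in the scene map) and image coordinates (pixels)
# of the same target positions. At least 4 non-collinear correspondences are
# needed for a planar homography; the values here are illustrative only.
scene_pts = np.array([[1.0, 2.0], [4.0, 2.0], [4.0, 6.0], [1.0, 6.0]], dtype=np.float32)
image_pts = np.array([[210.0, 480.0], [930.0, 470.0], [900.0, 110.0], [240.0, 120.0]],
                     dtype=np.float32)

# Mapping relationship from the scene coordinate system to the image coordinate system.
H_scene_to_image, _ = cv2.findHomography(scene_pts, image_pts)

# Project a further scene point into the image as a quick sanity check.
probe = np.array([[[2.5, 4.0]]], dtype=np.float32)
print(cv2.perspectiveTransform(probe, H_scene_to_image).reshape(-1))

# The inverse mapping (image to scene) is simply the inverse matrix.
H_image_to_scene = np.linalg.inv(H_scene_to_image)
```

The same routine can equally be used with the robot coordinate system as the intermediate coordinate system, composing the image-to-robot and robot-to-scene mappings by matrix multiplication.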
It should be noted that in the present embodiment, the calibration parameters may include other parameters such as the focal length of the camera 10 besides the mapping relationship between the image coordinate system and the scene coordinate system, and the calibration manner of the other parameters is not described in detail herein.
In the embodiment, an innovative camera calibration scheme is provided, in which the robot 20 is used as a calibration object in a field, and the target position for camera calibration can be quickly and conveniently determined by controlling the robot 20 to move in the field; the robot 20 can be used for autonomous positioning, so that scene coordinates of the target position in the field can be accurately determined; furthermore, the calibration parameters of the camera 10 can be determined according to the image coordinates and scene coordinates of the robot 20 in the image captured by the camera 10, so as to realize the calibration of the camera 10. Accordingly, in the embodiment of the present application, the automatic calibration of the camera 10 can be realized, and the efficiency and the accuracy of the camera calibration are improved.
In the above or below embodiments, before performing camera calibration, the robot 20 may perform environmental scanning on a scene where the camera 10 is located to obtain environmental data; and generating positioning data according to a robot coordinate system in a scanning process, and generating a first map according to the environment data and the positioning data.
Wherein the environment scanning process may be implemented based on radar or other components mounted on the robot 20. Through the environmental scan, the locations of various environmental elements within the venue can be determined in the first map, including, but not limited to, walls, columns, facilities, etc. within the venue.
Accordingly, the first map may accurately describe the actual environment in the scene in which the camera 10 is located.
In practical applications, the layout of facilities and the like in a scene may change frequently, and a scene map may not be updated synchronously, which results in that the scene map may not conform to the actual environment in the scene. While inaccurate scene maps may cause machine vision applications to produce erroneous results.
In this embodiment, the scene map may be corrected by using the first map.
The server 30 may determine scene coordinates of the at least one environmental element based on a mapping relationship between a scene coordinate system and a robot coordinate system according to the positioning coordinates of the at least one environmental element in the first map; and respectively rendering the corresponding environment elements at the scene coordinates of at least one environment element to modify the scene map.
For example, a few shelves are added to the scene, but the scene map is not labeled, so that the scene map is no longer accurate. In this embodiment, the first map provided by the robot 20 includes the positioning coordinates of the shelves, and the server 30 may determine the scene coordinates of the shelves according to the mapping relationship between the scene coordinate system and the robot coordinate system, and render the shelves in the scene map according to the determined scene coordinates, so as to correct the scene map.
Accordingly, the corrected scene map can more accurately reflect the actual environment of the scene where the camera 10 is located, thereby improving the accuracy of the machine vision application.
In addition, in the present embodiment, the server 30 may determine the mapping relationship between the scene coordinate system and the robot coordinate system by at least the following implementation manners:
determining a positioning coordinate of at least one environmental element in the scene in the robot coordinate system according to the positioning data and the environmental data generated by the robot 20;
acquiring scene coordinates of at least one environment element in a scene coordinate system;
and determining the mapping relation between the robot coordinate system and the scene coordinate system according to the positioning coordinates and the scene coordinates of the at least one environment element.
The server 30 may select at least one environment element with a fixed position in the scene from the environment elements in the scene. Such as walls, columns, etc. in the scene. And based on the positioning coordinates and scene coordinates of the environment elements, a mapping relation between a robot coordinate system and a scene coordinate system is established.
Of course, in this embodiment, the implementation manner of establishing the mapping relationship between the robot coordinate system and the scene coordinate system is not limited to this, and is not exhaustive here.
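As a sketch of one such implementation, under the simplifying assumption that the two coordinate systems differ only by a planar rotation, translation and uniform scale, and with illustrative coordinates, the mapping between the robot coordinate system and the scene coordinate system could be estimated from matched environment elements as follows.

```python
import numpy as np
import cv2

# Positioning coordinates of fixed environment elements (walls, columns, ...) in
# the robot coordinate system, and their scene coordinates in the scene map.
robot_pts = np.array([[0.0, 0.0], [5.2, 0.1], [5.1, 3.9], [-0.1, 4.0]], dtype=np.float32)
scene_pts = np.array([[10.0, 20.0], [15.2, 20.2], [15.0, 24.0], [9.9, 24.1]], dtype=np.float32)

# 2x3 similarity transform (rotation, translation, uniform scale) taking
# robot-coordinate-system points to scene-coordinate-system points.
M_robot_to_scene, _ = cv2.estimateAffinePartial2D(robot_pts, scene_pts)

def robot_to_scene(xy):
    """Convert a positioning coordinate into a scene coordinate."""
    x, y = xy
    return M_robot_to_scene @ np.array([x, y, 1.0])

# E.g. convert the positioning coordinate of a newly added shelf before rendering
# it into the scene map, or of a target position during calibration.
print(robot_to_scene((2.5, 1.0)))
```

The same helper also serves the map-correction step described above, since rendering an environment element only requires its converted scene coordinate.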
In the above or following embodiments, the camera calibration system may further include a control terminal 40; the control terminal 40 is in communication connection with the robot 20 and may provide a navigation service for the movement of the robot 20 in the field.
Robot 20 may provide the first map to control terminal 40, and control terminal 40 may issue robot 20 navigation instructions based on the first map to control robot 20 to move through the scene.
In this embodiment, various implementations may be used to determine the target position during the movement of the robot 20.
In one implementation, one or more target locations in the field of view of the camera 10 may be specified by a technician.
In this implementation, based on the first map, the technician may select one or more target locations in the first map. Based on the experience of the technician, the selected target position should be a position capable of comprehensively reflecting the mapping relationship between the image coordinate system and the scene coordinate system.
Control terminal 40 may generate navigation instructions to control movement of robot 20 to one or more target positions in response to the position selection operation.
In the case where there are a plurality of target positions, the control terminal 40 may control the robot 20 to sequentially move to the plurality of target positions.
In the case of the robot 20, autonomous positioning is performed while moving to the target position, and the positioning coordinates of the target position are provided to the server 30.
Accordingly, in this implementation, the navigation command sent by the control terminal 40 includes the target positions, the robot 20 can move to one or more manually selected target positions under the control of the control terminal 40, and the positioning coordinates of the selected target position or positions can be used as the data basis for calibrating the camera.
In another implementation, the control terminal 40 may issue the cruise instruction based on the first map, or the technician may select several route points distributed in the shooting view of the camera 10 in the first map, and the control terminal 40 may issue the navigation instruction based on the several route points. Under both schemes, the control terminal 40 will control the robot 20 to move densely in the field, thereby generating a large number of movement tracks.
In this implementation, the target position is no longer specified in the control terminal 40, and the robot 20 can perform intensive movements within the shooting field of view of the camera 10 and provide the positioning coordinates of at least one track point in the movement trajectory to the server 30.
And for the server 30, at least one track point meeting the marking requirement can be selected from the moving track of the robot 20 as the target position.
In practical applications, in order to ensure that the selected target positions are reasonable, the server 30 may determine whether the coverage of the moving track of the robot 20 over the shooting field of view of the camera 10 reaches a preset standard, and select at least one track point meeting the marking requirement from the moving track as a target position only when that standard is reached.
The preset standard for the coverage degree may be, for example, greater than 80%; this is not limited in this embodiment and may be adjusted according to actual requirements.
The marking requirement may be that the distance from the boundary of the field of view of the camera 10 satisfies a preset requirement, for example that the distance is less than 5 (the unit is not limited herein). This embodiment is not limited thereto, and the preset requirement may be adjusted according to actual requirements.
Accordingly, in this implementation, the navigation command issued by the control terminal 40 will no longer include the target positions, the robot 20 will perform intensive movement, but the server 30 automatically selects one or more target positions from the movement track of the robot 20, and the positioning coordinates of the selected one or more target positions can be used as the data basis for calibrating the camera.
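A minimal sketch of the coverage check and target-point selection described above is given below; the grid-based coverage measure, the thresholds and the track-point representation are all assumptions introduced for illustration, since the embodiment leaves the preset standard and the marking requirement open.

```python
def boundary_distance(u, v, width, height):
    """Pixel distance from image point (u, v) to the nearest boundary of the
    camera's shooting field of view (assumed here to be the full image)."""
    return min(u, v, width - u, height - v)

def select_target_points(track_points_px, width, height,
                         min_coverage=0.8, cell=50,
                         meets_marking_requirement=lambda d: d < 5):
    """Select track points as target positions once the robot's moving track
    covers enough of the shooting field of view.

    track_points_px: (u, v) image positions of the robot along its track.
    The default marking predicate mirrors the example in the text (distance to
    the view boundary below a small threshold) and can be replaced as needed.
    """
    # Coverage degree: fraction of grid cells of the field of view visited.
    visited = {(int(u // cell), int(v // cell)) for u, v in track_points_px}
    total_cells = max(1, (width // cell) * (height // cell))
    if len(visited) / total_cells < min_coverage:
        return []  # coverage does not yet reach the preset standard

    return [(u, v) for u, v in track_points_px
            if meets_marking_requirement(boundary_distance(u, v, width, height))]
```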
Of course, in addition to determining the target position by using the above two implementation manners, the target position may also be determined by using other implementation manners in the embodiment, and the embodiment is not limited thereto.
In addition, in the embodiment, the control terminal 40 may be further connected to the camera 10 in a communication manner, and the control terminal 40 may display an image captured by the camera 10. In practice, the camera 10 may provide a video stream to the control terminal 40 to present the movement process of the robot 20 within the shooting view of the camera 10.
Accordingly, the technician can observe the movement of the robot 20 in the shooting view of the camera 10 through the control terminal 40, and can adjust the movement scheme of the robot 20 in the control terminal 40 when finding that the movement track of the robot 20 does not conform to the expected effect.
For example, if the robot 20 moves to a target position selected by the technician but its actual position within the field of view of the camera 10 does not match the expected effect, the technician may reselect the target position.
For another example, if the robot 20 moves according to the cruise instruction or according to several passing points designated by the technician, and the technician finds that the coverage of the moving track over the shooting view of the camera 10 does not reach the preset standard, or that the robot 20 has moved out of the shooting view, the technician may add passing points in the control terminal 40, adjust the moving direction of the robot 20 in cruise mode, and so on.
The control terminal 40 may generate a navigation command of the robot 20 in response to the adjustment operation to adjust the movement of the robot 20.
Accordingly, in the embodiment, the control of the moving process of the robot 20 can be realized based on the control terminal 40, which makes the calibration process of the camera 10 more intelligent, and on this basis, the target position participating in the calibration and calculation can be selected more reasonably and more conveniently.
In the above or below described embodiments, the image coordinates of the target location may be determined in a number of ways.
In one implementation, the robot 20 may locate its bottom center point while moving to the target position to obtain location coordinates of its bottom center point as location coordinates of the target position.
Based on this, it is possible for the server 30 to determine, in the first image, image coordinates corresponding to the top surface center point of the robot 20 according to the image coordinate system; the image coordinates corresponding to the center point of the top surface are converted into image coordinates corresponding to the center point of the bottom surface as image coordinates of the target position based on the image coordinate mapping relationship between the top surface and the bottom surface when the robot 20 is at the target position.
The camera 10 captures the robot 20 to obtain an image in which the center point of the bottom surface of the robot 20 is blocked, and the server 30 may convert the image coordinates of the center point of the top surface in the image into the image coordinates of the center point of the bottom surface, and may use the image coordinates of the center point of the bottom surface as the image coordinates of the target position.
In this implementation, robot 20 may be configured to assist server 30 in determining the image coordinates of the center point of the floor.
Fig. 2 is a schematic structural diagram of a robot 20 according to an exemplary embodiment of the present disclosure.
Referring to fig. 2, a plurality of pairs of symmetrical marking points may be disposed on the top and bottom surfaces of the robot 20. For each pair, the symmetry may be such that the marking point on the top surface and the marking point on the bottom surface lie on the same straight line, and that straight line is perpendicular to the ground plane in the scene.
The marker may be an LED lamp, an icon, or the like. Further, different attributes of color, shape, etc. may be configured for different pairs of marked points to distinguish the different pairs of marked points. For example, different LED colors may be used for different mark points, different patterns may be used for different mark points, and the like, which is not limited in this embodiment.
In addition, in practical applications, the layout of the mark points may be determined so that at least two pairs of mark points can be captured by the camera 10 at the same time. For example, in fig. 2, 4 LEDs are provided on the top surface and on the bottom surface respectively, evenly distributed around the periphery of the top or bottom surface.
Based on this, the image including the robot 20 captured by the camera 10 will include at least two pairs of mark points.
For the server 30, a second image of the robot 20 when moving to a reference position before the target position may be taken by using the camera 10; identifying at least two pairs of marker points from the top and bottom surfaces of the robot 20 in the second image, and determining first image coordinates of the at least two pairs of marker points; determining second image coordinates of at least two pairs of mark points in the first image; and establishing an image coordinate mapping relation between the top surface and the bottom surface of the robot 20 at the target position according to the first image coordinates and the second image coordinates of the at least two pairs of marking points.
Based on this, server 30 may determine an image coordinate mapping relationship between any two symmetric points on the top and bottom surfaces of robot 20 based on the image coordinate mapping relationship between the top and bottom surfaces of robot 20 at the target position. Accordingly, server 30 may determine the image coordinates of the center point of the bottom surface of robot 20 from the image coordinates of the center point of the top surface of robot 20.
Alternatively, the server 30 may construct the first polygon from at least two of the at least two pairs of marked points located on the top surface; constructing a second polygon according to at least two marking points positioned on the bottom surface in the at least two pairs of marking points; and establishing an image coordinate mapping relation between the first polygon and the second polygon according to the image coordinates of the mark points, wherein the image coordinate mapping relation is used as the image coordinate mapping relation between the top surface and the bottom surface of the robot 20 at the target position.
The process may also be regarded as a process of calculating the mapping relationship between the two coordinate systems when the coordinates of the at least one point in the two coordinate systems are known, and the specific calculation process is not described in detail.
Fig. 3 is a logic diagram of a method for establishing an image coordinate mapping relationship between a top surface and a bottom surface of the robot 20 at a target position according to an exemplary embodiment of the present application.
As shown in fig. 3, the two pairs of marked points [B, B′] and [C, C′] appear as [B1, B1′] and [C1, C1′] in the first image, and as [B2, B2′] and [C2, C2′] in the second image.
The server 30 may construct a first quadrilateral from the marked points B1, C1, B2 and C2, and a second quadrilateral from the marked points B1′, C1′, B2′ and C2′.
The image coordinate mapping relationship between the two quadrilaterals can then be calculated from the image coordinates of the marked points, and used as the image coordinate mapping relationship between the top surface and the bottom surface of the robot 20 at the target position.
Further, the image coordinates of the bottom surface center point P′ of the robot 20 may be calculated from the image coordinates of the top surface center point P of the robot 20 in the first image, based on the image coordinate mapping relationship between the two quadrilaterals.
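The conversion sketched in fig. 3 can be written down compactly once the image coordinates of the marked points have been extracted; the pixel values below are illustrative, and OpenCV's homography routines are used as one possible realization.

```python
import numpy as np
import cv2

# Image coordinates of the top-surface marked points: B1 and C1 from the first
# image, B2 and C2 from the second image (illustrative pixel values).
top_quad = np.array([[320.0, 200.0], [410.0, 205.0],    # B1, C1
                     [300.0, 190.0], [430.0, 215.0]],   # B2, C2
                    dtype=np.float32)
# Corresponding bottom-surface marked points B1', C1', B2', C2'.
bottom_quad = np.array([[318.0, 260.0], [408.0, 264.0],
                        [298.0, 252.0], [428.0, 275.0]], dtype=np.float32)

# Image coordinate mapping relationship between the top and bottom surfaces of
# the robot at the target position (homography between the two quadrilaterals).
H_top_to_bottom, _ = cv2.findHomography(top_quad, bottom_quad)

# Image coordinates of the top-surface center point P in the first image.
P_top = np.array([[[367.0, 250.0]]], dtype=np.float32)

# Estimated image coordinates of the occluded bottom-surface center point P',
# used as the image coordinates of the target position.
P_bottom = cv2.perspectiveTransform(P_top, H_top_to_bottom)
print(P_bottom.reshape(-1))
```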
Of course, in this implementation, the robot 20 may be configured in other ways to assist the server 30 in determining the image coordinate mapping relationship between the top surface and the bottom surface of the robot 20 at the target position, which is not limited herein.
In another implementation, the robot 20 may connect a marker, and the robot 20 may locate the marker while moving to the target position, and use the obtained location coordinates of the marker as the location coordinates of the target position.
In this implementation, the connection manner between the robot 20 and the marker is not limited, and any hard connection manner may be adopted between the robot 20 and the marker, wherein the hard connection means that the relative position between the two is kept unchanged.
In practical applications, the marker may be shot by the camera 10 as a target during the movement of the robot 20, and parameters such as the distance between the marker and the robot 20 may be determined. In addition, the height difference between the marker and the bottom surface of the robot 20 is less than a preset threshold, i.e., the marker may be located in or near the plane in which the bottom surface of the robot 20 is located. Preferably, the marker may be located in the plane of the bottom surface of the robot 20.
Based on this, the server 30 may determine image coordinates corresponding to the marker as image coordinates of the target position in the first image according to the image coordinate system.
Of course, other implementations may also be adopted in the present embodiment to determine the image coordinates of the target position, and the present embodiment is not limited to the above two implementations.
Accordingly, in this embodiment, the image coordinates of the target position can be accurately determined without being affected by the three-dimensional structure of the robot 20, which can effectively improve the precision of camera calibration.
In the above or below described embodiments, an information carrier for presenting time information may be provided on the robot 20.
On this basis, the robot 20 can display, through the information carrier, the time information at the moment it moves to the target position, and may associate that time information with the positioning coordinates of the target position and provide both to the server 30.
That is, when robot 20 moves to the target position, the positioning coordinates of the target position may be provided to server 30, and time information when robot 20 moves to the target position may be synchronously provided to server 30.
The time information displayed by the information carrier changes dynamically, following natural time. Furthermore, for the same track point, the time information provided by the robot 20 to the server 30 is kept identical to the time information presented by the information carrier.
It is possible for the server 30 to acquire time information associated with the positioning coordinates of the target position from the robot 20. Based on this, the server 30 may search, from the image including the robot 20 captured by the camera 10, for a target image in which the time information shown by the information carrier matches the time information associated with the positioning coordinates, as the first image.
The server 30 may recognize time information included in an image including the robot 20 captured by the camera 10 by using an OCR (Optical Character Recognition) technique or the like.
In practical applications, the information carrier may be a flexible display screen, and the number of the flexible display screens may be multiple, and the multiple flexible display screens surround the side wall of the robot 20.
Referring to fig. 2, a plurality of flexible display screens surround the side wall of the robot 20, which ensures that the time information displayed in at least one of the flexible display screens can be completely captured by the camera 10, thereby ensuring that the first image contains complete time information.
Accordingly, in the present embodiment, alignment of data provided by the camera 10 and the robot 20 can be achieved from a time dimension. This can effectively avoid a time difference between the positioning coordinates provided by the robot 20 and the image coordinates extracted from the image captured by the camera 10 due to network instability, thereby improving the accuracy of camera calibration.
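A sketch of this time-based alignment is shown below; the `read_clock` callable stands in for whatever OCR routine recognizes the time shown on the robot's display, and both it and the numeric timestamp format are assumptions for the example.

```python
def find_first_image(frames, target_time, read_clock, tolerance_s=0.5):
    """Search the images captured by the camera for the frame whose displayed
    time matches the time information associated with the positioning
    coordinates of the target position.

    frames:      iterable of frames containing the robot.
    target_time: numeric timestamp the robot associated with the target position.
    read_clock:  callable returning the timestamp OCR'd from a frame, or None
                 if no readable time is visible in that frame.
    """
    for frame in frames:
        shown = read_clock(frame)
        if shown is not None and abs(shown - target_time) <= tolerance_s:
            return frame  # this frame is used as the first image
    return None
```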
Fig. 4 is a schematic flowchart of a camera calibration method according to another exemplary embodiment of the present application. The camera calibration method provided by the embodiment may be executed by a camera calibration apparatus, which may be implemented as software or implemented as a combination of software and hardware, and may be integrally disposed in a computing device. As shown in fig. 4, the camera calibration method includes:
step 400, acquiring a positioning coordinate when the robot moves to a target position, wherein the positioning coordinate is generated by autonomous positioning of the robot according to a robot coordinate system;
step 401, converting the positioning coordinates into scene coordinates based on the mapping relation between the robot coordinate system and the scene coordinate system;
step 402, according to an image coordinate system, determining an image coordinate of a target position in a first image of the robot, which is shot by a camera and is located at the target position;
and 403, determining calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position so as to calibrate the camera.
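Steps 400 to 403 can be combined into a single server-side routine; the sketch below assumes the per-target-position data have already been collected, uses a scene-to-image homography as the calibration parameter, and takes `robot_to_scene` as a stand-in for the robot-to-scene mapping described earlier.

```python
import numpy as np
import cv2

def calibrate_camera(samples, robot_to_scene):
    """Sketch of steps 400 to 403.

    samples: list of (positioning_xy, image_xy) pairs, one per target position,
             pairing the robot's autonomously generated positioning coordinate
             with the image coordinate of that position in the first image.
    robot_to_scene: callable applying the mapping relation between the robot
             coordinate system and the scene coordinate system.
    """
    # Steps 400-401: convert positioning coordinates into scene coordinates.
    scene_pts = np.array([robot_to_scene(p) for p, _ in samples], dtype=np.float32)
    # Step 402: image coordinates of the target positions in the first images.
    image_pts = np.array([xy for _, xy in samples], dtype=np.float32)
    # Step 403: calibration parameter, here the scene-to-image mapping relation.
    H_scene_to_image, _ = cv2.findHomography(scene_pts, image_pts)
    return H_scene_to_image
```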
In an alternative embodiment, if the positioning coordinates are coordinates corresponding to a center point of the bottom surface of the robot, the step of determining the image coordinates of the target position includes:
determining image coordinates corresponding to the top surface center point of the robot in the first image according to the image coordinate system;
and determining the image coordinates corresponding to the center point of the bottom surface as the image coordinates of the target position according to the image coordinates corresponding to the center point of the top surface based on the image coordinate mapping relationship between the top surface and the bottom surface when the robot is at the target position.
In an alternative embodiment, the top surface and the bottom surface of the robot are provided with a plurality of pairs of symmetrical marking points, and the method further comprises:
shooting a second image of the robot when the robot moves to a reference position before the target position by using the camera;
identifying at least two pairs of marking points from the top surface and the bottom surface of the robot in the second image, and determining first image coordinates of the at least two pairs of marking points;
determining second image coordinates of at least two pairs of mark points in the first image;
and establishing an image coordinate mapping relation between the top surface and the bottom surface of the robot at the target position according to the first image coordinates and the second image coordinates of the at least two pairs of marking points.
In an optional embodiment, the robot is provided with an information carrier for displaying time information, and the positioning coordinates are associated with the time information, and the method further comprises:
and searching a target image, which is displayed by the information carrier and matched with the time information associated with the positioning coordinates, from the image containing the robot shot by the camera to be used as a first image.
In an optional embodiment, the target position is multiple, and the step of determining the calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position includes:
selecting a target position with a distance from a visual field boundary of the camera meeting a preset requirement from the plurality of target positions as a marking position;
and determining the calibration parameters of the camera according to the scene coordinates and the image coordinates corresponding to the mark positions respectively.
In an alternative embodiment, the calibration parameters include a mapping relationship between an image coordinate system and a scene coordinate system.
In an optional embodiment, the method further comprises:
the method comprises the steps that a robot is used for carrying out environment scanning on a scene where a camera is located to obtain environment data;
determining a positioning coordinate of at least one environmental element in a scene in a robot coordinate system according to positioning data and environmental data generated by the robot based on the robot coordinate system in a scanning process;
acquiring scene coordinates of at least one environment element in a scene coordinate system;
and determining the mapping relation between the robot coordinate system and the scene coordinate system according to the positioning coordinates and the scene coordinates of the at least one environment element.
In an optional embodiment, the robot performs autonomous positioning using SLAM instant positioning and mapping techniques.
It should be noted that, for the technical details in the embodiments of the camera calibration method, reference may be made to the description of the server in the related embodiments of the camera calibration system, which is not repeated herein for brevity, but this should not cause a loss of the protection scope of the present application.
Fig. 5 is a schematic flow chart of another camera calibration method according to another exemplary embodiment of the present application. The camera calibration method provided by the embodiment may be executed by a camera calibration device, which may be implemented as software or as a combination of software and hardware, and may be integrated in a robot. As shown in fig. 5, the camera calibration method includes:
500, receiving a navigation instruction sent by a control terminal;
step 501, moving in a scene where the camera is located according to a navigation instruction;
step 502, in the moving process, performing autonomous positioning according to a self coordinate system to generate positioning data;
step 503, providing the positioning data to the server, so that the server determines calibration parameters of the camera according to the positioning data and calibrates the camera.
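On the robot side, steps 500 to 503 could be sketched as below; the navigation, motion and SLAM interfaces, as well as the server URL, are placeholders standing in for the robot's own software stack rather than parts of the embodiment.

```python
import json
import time
import urllib.request

def run_calibration_drive(receive_navigation, follow, slam_pose, server_url):
    """Sketch of steps 500 to 503 executed on the robot."""
    instruction = receive_navigation()              # step 500: navigation instruction
    for _ in follow(instruction):                   # step 501: move within the scene
        x, y = slam_pose()                          # step 502: autonomous positioning
        payload = json.dumps({"x": x, "y": y, "time": time.time()}).encode("utf-8")
        request = urllib.request.Request(server_url, data=payload,
                                         headers={"Content-Type": "application/json"})
        urllib.request.urlopen(request)             # step 503: provide positioning data
```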
In an optional embodiment, the step of performing autonomous positioning according to the own coordinate system to generate the positioning data includes:
positioning the bottom center point of the self-body according to a self-body coordinate system to obtain a positioning coordinate of the bottom center point;
and taking the positioning coordinate of the central point of the bottom surface as positioning data.
In an alternative embodiment, the top surface and the bottom surface of the robot are provided with a plurality of pairs of symmetrical marking points.
In an optional embodiment, the robot is connected with a marker, the height difference between the marker and the bottom surface of the robot is smaller than a preset threshold, and the step of performing autonomous positioning according to a self coordinate system to generate positioning data includes:
positioning the marker according to a self coordinate system to obtain a positioning coordinate of the marker;
the positioning coordinates of the marker are used as positioning data.
In an alternative embodiment, the robot is provided with an information carrier for displaying time information, and the method further comprises:
in the moving process, displaying the time information through the information carrier;
and associating the time information with the positioning data and providing the positioning data to the server.
In an optional embodiment, before receiving the navigation instruction, the method further comprises:
performing environment scanning on a scene to obtain environment data;
generating positioning data according to a coordinate system of the scanner in the scanning process;
generating a first map according to the environment data and the positioning data;
and providing the first map to the control terminal so that the control terminal sends out a navigation instruction based on the first map.
In an alternative embodiment, the navigation instruction includes a target location; moving in the scene where the camera is located according to the navigation instruction, and the method comprises the following steps:
moving to a target position according to the navigation instruction;
in the moving process, the autonomous positioning is carried out according to the coordinate system of the autonomous positioning device to generate positioning data, and the method comprises the following steps:
and carrying out autonomous positioning according to the coordinate system of the target so as to generate positioning coordinates of the target position as positioning data.
In an optional embodiment, the navigation instruction is a cruise instruction or the navigation instruction includes a plurality of approach points distributed in a shooting field of the camera, and the step of performing autonomous positioning according to a coordinate system of the navigation instruction to generate positioning data includes:
and carrying out autonomous positioning according to the coordinate system of the mobile terminal to generate a positioning coordinate of at least one track point in the mobile track as positioning data.
In an optional embodiment, the step of performing autonomous positioning according to the own coordinate system includes:
and performing autonomous positioning by adopting an SLAM instant positioning and map construction technology.
It should be noted that, for the technical details in the embodiments of the camera calibration method, reference may be made to the description of the robot in the related embodiments of the camera calibration system, which is not repeated herein for brevity, but this should not cause a loss of the protection scope of the present application.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 400, 401, etc., are merely used to distinguish various operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 6 is a schematic structural diagram of a computing device according to another exemplary embodiment of the present application. As shown in fig. 6, the computing device includes a memory 60, a processor 61, and a communication component 62;
memory 60 is used to store one or more computer instructions;
the processor 61 is coupled to the memory 61 and the communication component 62 for executing one or more computer instructions for:
acquiring a positioning coordinate when the robot moves to a target position through the communication component 62, wherein the positioning coordinate is generated by the robot performing autonomous positioning according to a robot coordinate system;
converting the positioning coordinates into scene coordinates based on the mapping relation between the robot coordinate system and the scene coordinate system;
according to an image coordinate system, determining image coordinates of a target position in a first image of the robot shot by a camera when the robot is positioned at the target position;
and determining calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position so as to calibrate the camera.
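The embodiments leave the concrete form of the calibration parameters open; since they include a mapping between the image coordinate system and the scene coordinate system, one plausible realization is a ground-plane homography fitted from the scene coordinates and image coordinates of several target positions. The Python sketch below does this with a direct linear transform; the function names and the homography model are assumptions of the sketch, not requirements of the embodiments.

```python
import numpy as np

def estimate_homography(scene_pts, image_pts):
    """Direct linear transform: fit H so that image ~ H * scene for points
    on the ground plane (at least 4 non-collinear correspondences needed)."""
    A = []
    for (X, Y), (u, v) in zip(scene_pts, image_pts):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def scene_to_image(H, X, Y):
    """Project a scene-coordinate point onto the image with the fitted H."""
    p = H @ np.array([X, Y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

At least four non-collinear target positions are needed for such a fit; additional positions can be included to average out detection noise.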
In an alternative embodiment, if the positioning coordinates are coordinates corresponding to a center point of the bottom surface of the robot, the processor 61 is configured to, when determining the image coordinates of the target position:
determining image coordinates corresponding to the top surface center point of the robot in the first image according to the image coordinate system;
and determining, based on the image coordinate mapping relationship between the top surface and the bottom surface when the robot is at the target position, the image coordinates corresponding to the bottom surface center point from the image coordinates corresponding to the top surface center point, as the image coordinates of the target position.
In an alternative embodiment, the top and bottom surfaces of the robot are provided with a plurality of pairs of symmetrical marking points, and the processor 61 is further configured to:
using the camera to capture a second image of the robot when the robot moves to a reference position before reaching the target position;
identifying at least two pairs of marking points from the top surface and the bottom surface of the robot in the second image, and determining first image coordinates of the at least two pairs of marking points;
determining second image coordinates of at least two pairs of mark points in the first image;
and establishing an image coordinate mapping relation between the top surface and the bottom surface of the robot at the target position according to the first image coordinates and the second image coordinates of the at least two pairs of marking points.
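One plausible way to realize the image coordinate mapping between the top surface and the bottom surface is to fit a small affine transform from the top-surface marker points to the paired bottom-surface marker points and then apply it to the detected top-surface center point. The Python sketch below illustrates this; it assumes at least three non-collinear marker pairs (the embodiment itself only requires two, in which case a simpler similarity model could be fitted), and marker detection and pairing are outside the sketch. All names are hypothetical.

```python
import numpy as np

def fit_affine(top_pts, bottom_pts):
    """Least-squares 2x3 affine A with [u_b, v_b]^T = A @ [u_t, v_t, 1]^T,
    fitted from paired marker points on the top and bottom surfaces."""
    top = np.asarray(top_pts, dtype=float)
    bottom = np.asarray(bottom_pts, dtype=float)
    M = np.hstack([top, np.ones((len(top), 1))])    # N x 3
    A, *_ = np.linalg.lstsq(M, bottom, rcond=None)  # 3 x 2
    return A.T                                      # 2 x 3

def map_top_to_bottom(A, top_uv):
    """Apply the fitted mapping to one image point."""
    u, v = top_uv
    return tuple(A @ np.array([u, v, 1.0]))

# Example: derive the bottom-surface center point (image coordinates of the
# target position) from the detected top-surface center point.
A = fit_affine([(100, 50), (140, 52), (120, 90)],
               [(102, 130), (142, 131), (122, 168)])
bottom_center = map_top_to_bottom(A, (120, 64))
```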
In an alternative embodiment, the robot is provided with an information carrier for presenting time information, the positioning coordinates are associated with the time information, and the processor 61 is further configured to:
and searching, from the images of the robot captured by the camera, for a target image in which the time information displayed by the information carrier matches the time information associated with the positioning coordinates, and using it as the first image.
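As an illustration of this time-matching step, the sketch below picks, from candidate frames whose displayed time has already been read off the information carrier (for example by OCR), the frame closest in time to the positioning fix; the tolerance value and data layout are assumptions of the sketch.

```python
def find_first_image(frames, fix_time, tolerance=0.05):
    """Pick the frame whose displayed time best matches the time associated
    with the positioning coordinates.

    frames   : iterable of (displayed_time_seconds, image) pairs
    fix_time : time associated with the positioning coordinates, in seconds
    """
    best = min(frames, key=lambda f: abs(f[0] - fix_time), default=None)
    if best is None or abs(best[0] - fix_time) > tolerance:
        return None  # no frame matches closely enough
    return best[1]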
In an alternative embodiment, the target position is multiple, and the processor 61, when determining the calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position, is configured to:
selecting a target position with a distance from a visual field boundary of the camera meeting a preset requirement from the plurality of target positions as a marking position;
and determining the calibration parameters of the camera according to the scene coordinates and the image coordinates corresponding to the mark positions respectively.
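A simple way to apply the boundary-distance requirement is to keep only target positions whose image coordinates lie a margin away from the image borders; the margin below is an illustrative value, not one specified by the embodiments.

```python
def select_mark_positions(targets, image_width, image_height, margin=40):
    """Keep only target positions whose image coordinates lie at least
    `margin` pixels away from the camera's field-of-view boundary.

    targets: list of dicts like {"scene": (X, Y), "image": (u, v)}
    """
    kept = []
    for t in targets:
        u, v = t["image"]
        if margin <= u <= image_width - margin and margin <= v <= image_height - margin:
            kept.append(t)
    return kept
```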
In an alternative embodiment, the calibration parameters include a mapping relationship between an image coordinate system and a scene coordinate system.
In an alternative embodiment, the processor 61 is further configured to:
using the robot to perform environment scanning on the scene where the camera is located to obtain environment data;
determining positioning coordinates of at least one environment element in the scene in the robot coordinate system according to the positioning data generated by the robot based on the robot coordinate system during the scanning process and the environment data;
acquiring scene coordinates of at least one environment element in a scene coordinate system;
and determining the mapping relation between the robot coordinate system and the scene coordinate system according to the positioning coordinates and the scene coordinates of the at least one environment element.
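If the scene coordinate system is, for example, a floor-plan coordinate system, the mapping from the robot coordinate system can be estimated as a 2D rigid transform from the matched coordinates of the environment elements. The sketch below uses the Kabsch least-squares method; this particular estimator is an assumption of the sketch rather than something fixed by the embodiments.

```python
import numpy as np

def fit_rigid_2d(robot_pts, scene_pts):
    """Least-squares 2D rigid transform (R, t) with scene ~ R @ robot + t,
    estimated from matched environment-element coordinates (Kabsch method,
    at least two non-coincident correspondences assumed)."""
    P = np.asarray(robot_pts, dtype=float)
    Q = np.asarray(scene_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def robot_to_scene(R, t, xy):
    """Convert robot-coordinate positioning coordinates to scene coordinates."""
    return R @ np.asarray(xy, dtype=float) + t
```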
In an optional embodiment, the robot performs autonomous positioning by using SLAM (simultaneous localization and mapping).
It should be noted that, for the technical details of the embodiments of the computing device, reference may be made to the description of the server in the related embodiments of the camera calibration system; they are not repeated here for brevity, which should not be taken as a limitation of the protection scope of the present application.
Further, as shown in fig. 6, the computing device also includes a power supply component 63 and other components. Only some components are schematically shown in fig. 6; this does not mean that the computing device includes only the components shown in fig. 6.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps that can be executed by a computing device in the foregoing method embodiments when executed.
Fig. 7 is a schematic structural diagram of a robot according to another exemplary embodiment of the present application. As shown in fig. 7, the robot comprises a memory 70, a processor 71 and a communication component 72;
the memory 70 is used to store one or more computer instructions;
processor 71 is coupled to memory 70 and communication component 72 for executing one or more computer instructions for:
receiving a navigation instruction sent by the control terminal through the communication component 72;
moving in the scene where the camera is located according to the navigation instruction;
in the moving process, performing autonomous positioning according to a self coordinate system to generate positioning data;
the positioning data is provided to the server through the communication component 72, so that the server calculates calibration parameters of the camera according to the positioning data and calibrates the camera.
In this embodiment, the robot may further include a robot body and other structures, and the shape, size, and the like of the robot body are not limited.
In an alternative embodiment, the processor 71, when performing autonomous positioning according to its own coordinate system to generate the positioning data, is configured to:
positioning the bottom surface center point of the robot according to the robot's own coordinate system to obtain positioning coordinates of the bottom surface center point;
and using the positioning coordinates of the bottom surface center point as the positioning data.
In an alternative embodiment, the top surface and the bottom surface of the robot are provided with a plurality of pairs of symmetrical marking points.
In an alternative embodiment, the robot is connected with a marker, the height difference between the marker and the bottom surface of the robot is smaller than a preset threshold, and the processor 71, when performing autonomous positioning according to its own coordinate system to generate positioning data, is configured to:
positioning the marker according to the robot's own coordinate system to obtain positioning coordinates of the marker;
and using the positioning coordinates of the marker as the positioning data.
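For the marker-based variant, the marker's positioning coordinates can be obtained from the robot pose plus the marker's fixed offset in the robot body frame. The sketch below shows that planar transformation; the offset parameter is a hypothetical mounting value, since the embodiments only require the marker to sit close to the height of the bottom surface.

```python
import math

def marker_position(robot_pose, marker_offset):
    """Coordinates of the attached marker in the robot coordinate system,
    given the robot pose (x, y, theta) and the marker's fixed offset
    (dx, dy) in the robot body frame. The offset is a hypothetical
    mounting parameter, not a value from the embodiments."""
    x, y, theta = robot_pose
    dx, dy = marker_offset
    mx = x + dx * math.cos(theta) - dy * math.sin(theta)
    my = y + dx * math.sin(theta) + dy * math.cos(theta)
    return mx, my
```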
In an alternative embodiment, the robot is provided with an information carrier 74 for presenting time information, and the processor 71 is further adapted to:
displaying time information through the information carrier 74 during the movement;
and associating the time information with the positioning data and providing them to the server.
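By way of illustration, the record handed to the server might bundle the positioning fix with the time shown on the information carrier, so the server can later match it to a camera frame; the field names and JSON encoding below are assumptions of the sketch, not something mandated by the embodiments.

```python
import json
import time

def make_positioning_record(pose_xy, displayed_time=None):
    """Bundle a positioning fix with the time shown on the information
    carrier so the server can match it against camera frames. Field names
    are illustrative only."""
    return {
        "position": {"x": pose_xy[0], "y": pose_xy[1]},
        "display_time": displayed_time if displayed_time is not None else time.time(),
    }

record = make_positioning_record((1.25, -0.40))
payload = json.dumps(record)  # e.g. sent to the server via the communication component
```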
In an alternative embodiment, prior to receiving the navigation instruction, the processor 71 is further configured to:
performing environment scanning on a scene to obtain environment data;
generating positioning data according to the robot's own coordinate system during the scanning process;
generating a first map according to the environment data and the positioning data;
and providing the first map to the control terminal so that the control terminal sends out a navigation instruction based on the first map.
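The first map can take many forms; as a hedged illustration, the sketch below rasterizes scan points expressed in the robot's own coordinate system into a small occupancy grid that a control terminal could display for issuing navigation instructions. Grid size and resolution are assumptions, and a production SLAM map would also model free space across many scans.

```python
import numpy as np

def build_first_map(scan_points_xy, resolution=0.05, size_m=20.0):
    """Minimal occupancy-grid sketch: mark cells containing scanned obstacle
    points as occupied. Grid size and resolution are illustrative values."""
    cells = int(size_m / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint8)
    origin = size_m / 2.0  # robot start pose placed at the grid center
    for x, y in scan_points_xy:
        i = int((y + origin) / resolution)
        j = int((x + origin) / resolution)
        if 0 <= i < cells and 0 <= j < cells:
            grid[i, j] = 1
    return grid
```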
In an alternative embodiment, the navigation instruction includes a target position; the processor 71, when moving in the scene where the camera is located according to the navigation instruction, is configured to:
move to the target position according to the navigation instruction;
and, when performing autonomous positioning according to the robot's own coordinate system during the movement to generate positioning data, is configured to:
perform autonomous positioning according to the robot's own coordinate system to generate positioning coordinates of the target position as the positioning data.
In an alternative embodiment, the navigation instruction is a cruise instruction or the navigation instruction includes a plurality of waypoints distributed in the shooting field of view of the camera, and the processor 71, when performing autonomous positioning according to the robot's own coordinate system to generate positioning data, is configured to:
perform autonomous positioning according to the robot's own coordinate system to generate positioning coordinates of at least one track point in the movement track as the positioning data.
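When cruising or visiting waypoints, it is useful to know when the movement track covers the shooting field of view well enough (compare claim 22). One plausible measure, sketched below, is the fraction of a coarse grid over the camera image that contains at least one detected robot position; the grid size and any threshold applied to the ratio are assumptions of the sketch.

```python
import numpy as np

def coverage_ratio(detections_uv, image_width, image_height, grid=8):
    """Fraction of an N x N grid over the camera image that contains at least
    one detected robot position; a rough proxy for how well the movement
    track covers the shooting field of view."""
    hit = np.zeros((grid, grid), dtype=bool)
    for u, v in detections_uv:
        gi = min(int(v / image_height * grid), grid - 1)
        gj = min(int(u / image_width * grid), grid - 1)
        hit[gi, gj] = True
    return hit.mean()
```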
In an alternative embodiment, the processor 71, when performing autonomous positioning according to the robot's own coordinate system, is configured to:
perform autonomous positioning by using SLAM (simultaneous localization and mapping).
It should be noted that, for the technical details of the embodiments of the robot, reference may be made to the description of the robot in the related embodiments of the camera calibration system; they are not repeated here for brevity, which should not be taken as a limitation of the protection scope of the present application.
Further, as shown in fig. 7, the robot also includes a power supply component 73 and other components. Only some components are schematically shown in fig. 7; this does not mean that the robot includes only the components shown in fig. 7.
Accordingly, the present application further provides a computer readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be performed by the robot in the foregoing method embodiments when executed.
The memory of fig. 6 and 7, among other things, is used to store computer programs and may be configured to store various other data to support operations on the computing platform. Examples of such data include instructions for any application or method operating on the computing platform, contact data, phonebook data, messages, pictures, videos, and so forth. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Wherein the communication components of fig. 6 and 7 are configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, or a mobile communication network such as 2G, 3G, 4G/LTE or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power supply components of fig. 6 and 7 provide power to various components of the device in which the power supply components are located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may store information by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (34)

1. A camera calibration method, comprising:
acquiring a positioning coordinate when the robot moves to a target position, wherein the positioning coordinate is generated by the robot performing autonomous positioning according to a robot coordinate system;
converting the positioning coordinates into scene coordinates based on a mapping relation between a robot coordinate system and a scene coordinate system;
according to an image coordinate system, determining image coordinates of the target position in a first image of the robot shot by a camera when the robot is positioned at the target position;
and determining calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position so as to calibrate the camera.
2. The method of claim 1, wherein if the positioning coordinates are coordinates corresponding to a center point of a bottom surface of the robot, the determining image coordinates of the target position comprises:
according to an image coordinate system, determining image coordinates corresponding to the top surface center point of the robot in the first image;
and determining the image coordinate corresponding to the central point of the bottom surface as the image coordinate of the target position according to the image coordinate corresponding to the central point of the top surface based on the image coordinate mapping relation between the top surface and the bottom surface of the robot at the target position.
3. The method of claim 2, wherein the top and bottom surfaces of the robot have a plurality of pairs of symmetrical marker points disposed thereon, the method further comprising:
shooting a second image of the robot when the robot moves to a reference position before the target position by using the camera;
identifying at least two pairs of marking points from the top surface and the bottom surface of the robot in the second image, and determining first image coordinates of the at least two pairs of marking points;
determining second image coordinates of the at least two pairs of marker points in the first image;
and establishing an image coordinate mapping relation between the top surface and the bottom surface of the robot at the target position according to the first image coordinates and the second image coordinates of the at least two pairs of mark points.
4. The method of claim 1, wherein an information carrier for presenting time information is provided on the robot, the positioning coordinates being associated with time information, the method further comprising:
and searching a target image of which the time information displayed by the information carrier is matched with the time information associated with the positioning coordinates from the image which is shot by the camera and contains the robot as the first image.
5. The method according to claim 1, wherein the target position is plural, and the determining the calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position comprises:
selecting a target position with a distance from the visual field boundary of the camera meeting a preset requirement from the plurality of target positions as a marking position;
and determining the calibration parameters of the camera according to the scene coordinates and the image coordinates corresponding to the marking positions respectively.
6. The method of claim 1, wherein the calibration parameters comprise a mapping between the image coordinate system and the scene coordinate system.
7. The method of claim 1, further comprising:
performing, by the robot, environment scanning on the scene where the camera is located to obtain environment data;
determining a positioning coordinate of at least one environment element in the scene in the robot coordinate system according to the positioning data generated by the robot based on the robot coordinate system in the scanning process and the environment data;
acquiring scene coordinates of the at least one environment element in the scene coordinate system;
and determining a mapping relation between the robot coordinate system and the scene coordinate system according to the positioning coordinates and the scene coordinates of the at least one environment element.
8. The method of claim 1, wherein the robot performs autonomous positioning using SLAM (simultaneous localization and mapping).
9. A camera calibration method applied to a robot, comprising:
receiving a navigation instruction sent by a control terminal;
moving in the scene where the camera is located according to the navigation instruction;
in the moving process, performing autonomous positioning according to a self coordinate system to generate positioning data;
and providing the positioning data to a server, so that the server can determine calibration parameters of the camera according to the positioning data and calibrate the camera.
10. The method of claim 9, wherein the autonomous positioning according to the own coordinate system to generate the positioning data comprises:
positioning a bottom surface center point of the robot according to the robot's own coordinate system to obtain positioning coordinates of the bottom surface center point;
and using the positioning coordinates of the bottom surface center point as the positioning data.
11. The method of claim 10, wherein the top and bottom surfaces of the robot have pairs of symmetrical marker points disposed thereon.
12. The method of claim 9, wherein the robot is connected with a marker, the height difference between the marker and the bottom surface of the robot is less than a preset threshold, and the autonomous positioning according to the self coordinate system to generate the positioning data comprises:
positioning the marker according to the robot's own coordinate system to obtain positioning coordinates of the marker;
and taking the positioning coordinates of the marker as the positioning data.
13. The method of claim 9, wherein an information carrier for presenting time information is provided on the robot, the method further comprising:
in the moving process, displaying time information through the information carrier;
and associating the time information with the positioning data and providing the time information and the positioning data to the server.
14. The method of claim 9, prior to receiving the navigation instruction, further comprising:
performing environment scanning on the scene to obtain environment data;
generating positioning data according to the robot's own coordinate system during the scanning process;
generating a first map according to the environment data and the positioning data;
and providing the first map to the control terminal so that the control terminal sends out the navigation instruction based on the first map.
15. The method of claim 9, wherein the navigation instruction includes a target location; the moving in the scene where the camera is located according to the navigation instruction comprises the following steps:
moving to the target position according to the navigation instruction;
in the moving process, the autonomous positioning is performed according to the coordinate system of the autonomous positioning device to generate positioning data, and the method comprises the following steps:
and performing autonomous positioning according to the robot's own coordinate system to generate positioning coordinates of the target position as the positioning data.
16. The method according to claim 9, wherein the navigation command is a cruise command or the navigation command includes a plurality of waypoints distributed in a shooting field of view of the camera, and the autonomous positioning according to the own coordinate system to generate the positioning data comprises:
and performing autonomous positioning according to the robot's own coordinate system to generate positioning coordinates of at least one track point in the movement track as the positioning data.
17. The method of claim 9, wherein the autonomously locating according to the own coordinate system comprises:
and performing autonomous positioning by using SLAM (simultaneous localization and mapping).
18. A camera calibration system is characterized by comprising a camera to be calibrated, a robot and a server, wherein the server is respectively in communication connection with the camera and the robot;
the robot is used for carrying out autonomous positioning according to a robot coordinate system in the moving process and providing positioning coordinates of the robot at the target position to the server;
the camera is used for shooting the robot and providing a first image obtained when the robot is located at the target position to the server;
the server is used for converting the positioning coordinates into scene coordinates based on the mapping relation between the robot coordinate system and the scene coordinate system; determining image coordinates of the target position in the first image according to an image coordinate system; and determining calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position so as to calibrate the camera.
19. The system of claim 18, further comprising a control terminal in communicative connection with the robot;
the control terminal is used for controlling the robot to move in the scene where the camera is located.
20. The system of claim 19, wherein the robot is further configured to: carrying out environment scanning on a scene where the camera is located to obtain environment data; generating positioning data according to the robot coordinate system in the scanning process, and generating a first map according to the environment data and the positioning data;
the control terminal is specifically configured to: display the first map; and control the robot to move in the scene in response to a navigation instruction issued for the robot on the first map.
21. The system of claim 20, wherein the navigation instruction includes the target position, and the control terminal is configured to control the robot to move to the target position;
wherein the number of the target positions is one or more.
22. The system according to claim 20, wherein the navigation instruction is a cruise instruction or the navigation instruction comprises a plurality of waypoints distributed in the shooting field of view of the camera, and the control terminal is configured to control the robot to move according to the navigation instruction;
the server is further configured to: when the coverage of the camera's shooting field of view by the movement track of the robot reaches a preset standard, select at least one track point meeting a marking requirement from the movement track of the robot as the target position.
23. The system of claim 22, wherein the server, when selecting the target location, is configured to:
and selecting at least one track point, the distance between which and the view boundary of the camera meets the preset requirement, from the moving track of the robot to serve as the target position.
24. The system of claim 20, wherein the server is further configured to:
determining a positioning coordinate of at least one environment element in the scene in the robot coordinate system according to the positioning data generated by the robot and the environment data;
acquiring scene coordinates of the at least one environment element in the scene coordinate system;
and determining a mapping relation between the robot coordinate system and the scene coordinate system according to the positioning coordinates and the scene coordinates of the at least one environment element.
25. The system according to claim 18, wherein the robot is specifically configured to locate a bottom center point of the robot when moving to the target position, and use the obtained location coordinates of the bottom center point as the location coordinates of the target position;
the server is specifically configured to: according to an image coordinate system, determining image coordinates corresponding to the top surface center point of the robot in the first image; and determining image coordinates corresponding to the central point of the bottom surface as the image coordinates of the target position according to the image coordinates corresponding to the central point of the top surface based on the image coordinate mapping relationship between the top surface and the bottom surface of the robot at the target position.
26. The system of claim 25, wherein the top and bottom surfaces of the robot have a plurality of pairs of symmetrical marker points disposed thereon, and wherein the server is configured to:
shooting a second image of the robot when the robot moves to a reference position before the target position by using the camera;
identifying at least two pairs of marking points from the top surface and the bottom surface of the robot in the second image, and determining first image coordinates of the at least two pairs of marking points;
determining second image coordinates of the at least two pairs of marker points in the first image;
and establishing an image coordinate mapping relation between the top surface and the bottom surface of the robot at the target position according to the first image coordinates and the second image coordinates of the at least two pairs of mark points.
27. The system according to claim 18, wherein a marker is connected to the robot, a height difference between the marker and a bottom surface of the robot is smaller than a preset threshold, and the robot is specifically configured to: position the marker when moving to the target position, and take the obtained positioning coordinates of the marker as the positioning coordinates of the target position;
the server is specifically configured to: and according to an image coordinate system, determining image coordinates corresponding to the marker in the first image as the image coordinates of the target position.
28. The system of claim 18, wherein an information carrier for presenting time information is provided on the robot, and the robot is further configured to: display, through the information carrier, time information when the robot moves to the target position; and associate the time information with the positioning coordinates of the target position and provide them to the server;
the server is specifically configured to: and searching a target image of which the time information displayed by the information carrier is matched with the time information associated with the positioning coordinates from the image which is shot by the camera and contains the robot as the first image.
29. The system of claim 28, wherein the information carrier is a flexible display screen, the number of the flexible display screens being plural, the plural flexible display screens surrounding a sidewall of the robot.
30. The system of claim 18, wherein the calibration parameters comprise a mapping between the image coordinate system and the scene coordinate system.
31. The system of claim 18, wherein the robot performs autonomous positioning using SLAM (simultaneous localization and mapping).
32. A computing device comprising a memory, a processor, and a communication component;
the memory is to store one or more computer instructions;
the processor, coupled with the memory and the communication component, to execute the one or more computer instructions to:
acquiring a positioning coordinate when the robot moves to a target position through the communication assembly, wherein the positioning coordinate is generated by the robot performing autonomous positioning according to a robot coordinate system;
converting the positioning coordinates into scene coordinates based on a mapping relation between a robot coordinate system and a scene coordinate system;
according to an image coordinate system, determining image coordinates of the target position in a first image of the robot shot by a camera when the robot is positioned at the target position;
and determining calibration parameters of the camera according to the scene coordinates and the image coordinates of the target position so as to calibrate the camera.
33. A robot, comprising: a memory, a processor, and a communications component;
the memory is to store one or more computer instructions;
the processor, coupled with the memory and the communication component, to execute the one or more computer instructions to:
receiving a navigation instruction sent by a control terminal through the communication assembly;
moving in the scene where the camera is located according to the navigation instruction;
in the moving process, performing autonomous positioning according to a self coordinate system to generate positioning data;
and providing the positioning data to a server through the communication assembly so that the server can calculate calibration parameters of the camera according to the positioning data and calibrate the camera.
34. A computer-readable storage medium storing computer instructions, which when executed by one or more processors, cause the one or more processors to perform the camera calibration method of any one of claims 1-31.
CN202010214083.XA 2020-03-24 2020-03-24 Camera calibration method, device, system and storage medium Pending CN113450414A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010214083.XA CN113450414A (en) 2020-03-24 2020-03-24 Camera calibration method, device, system and storage medium

Publications (1)

Publication Number Publication Date
CN113450414A true CN113450414A (en) 2021-09-28

Family

ID=77807471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010214083.XA Pending CN113450414A (en) 2020-03-24 2020-03-24 Camera calibration method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN113450414A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105785989A (en) * 2016-02-24 2016-07-20 中国科学院自动化研究所 System for calibrating distributed network camera by use of travelling robot, and correlation methods
CN108965687A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 Shooting direction recognition methods, server and monitoring method, system and picture pick-up device
US20190066334A1 (en) * 2017-08-25 2019-02-28 Boe Technology Group Co., Ltd. Method, apparatus, terminal and system for measuring trajectory tracking accuracy of target
WO2019076320A1 (en) * 2017-10-17 2019-04-25 杭州海康机器人技术有限公司 Robot positioning method and apparatus, and computer readable storage medium
WO2019080229A1 (en) * 2017-10-25 2019-05-02 南京阿凡达机器人科技有限公司 Chess piece positioning method and system based on machine vision, storage medium, and robot
US20190278288A1 (en) * 2018-03-08 2019-09-12 Ubtech Robotics Corp Simultaneous localization and mapping methods of mobile robot in motion area
CN109540144A (en) * 2018-11-29 2019-03-29 北京久其软件股份有限公司 A kind of indoor orientation method and device
CN110888957A (en) * 2019-11-22 2020-03-17 腾讯科技(深圳)有限公司 Object positioning method and related device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782555A (en) * 2022-06-20 2022-07-22 深圳市海清视讯科技有限公司 Map mapping method, apparatus, and storage medium
CN114782555B (en) * 2022-06-20 2022-09-16 深圳市海清视讯科技有限公司 Map mapping method, apparatus, and storage medium
CN115953485A (en) * 2023-03-15 2023-04-11 中国铁塔股份有限公司 Camera calibration method and device
CN115953485B (en) * 2023-03-15 2023-06-02 中国铁塔股份有限公司 Camera calibration method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220701

Address after: Room 5034, Building 3, No. 820 Wen'er West Road, Xihu District, Hangzhou City, Zhejiang Province

Applicant after: ZHEJIANG LIANHE TECHNOLOGY Co.,Ltd.

Address before: P.O. Box 847, Fourth Floor, One Capital Place, Grand Cayman, Cayman Islands

Applicant before: ALIBABA GROUP HOLDING Ltd.
