WO2024012463A1 - Positioning method and device - Google Patents

Positioning method and device

Info

Publication number
WO2024012463A1
Authority
WO
WIPO (PCT)
Prior art keywords
region
sub
object identification
feature
beacon
Prior art date
Application number
PCT/CN2023/106844
Other languages
English (en)
French (fr)
Inventor
王力
Original Assignee
杭州海康机器人股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康机器人股份有限公司 filed Critical 杭州海康机器人股份有限公司
Publication of WO2024012463A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Definitions

  • the present application relates to the field of data processing technology, and in particular to a positioning method and device.
  • In its working scene, the robot faces different objects when completing different tasks, so it needs to accurately locate the object for the task to be completed.
  • For example, the robot needs to transport a roller to the corresponding machine platform to dock the roller with the platform; in this case, the robot needs to accurately locate the machine platform corresponding to the roller. Or the robot needs to transport goods to a designated shelf; in this case, the robot needs to accurately locate that designated shelf.
  • radio frequency cards are often installed on each object in the robot's working scene.
  • the robot locates the task-oriented object based on the information carried in the received wireless signal transmitted by the radio frequency card.
  • However, wireless signals are easily interfered with by noise signals in the environment, which causes distortion of the wireless signals obtained by the robot and affects the accuracy of locating objects.
  • the purpose of the embodiments of the present application is to provide a positioning method and device to improve the accuracy of positioning objects.
  • the specific technical solutions are as follows:
  • embodiments of the present application provide a positioning method, which method includes:
  • identifying a visual beacon region in a target image collected by an image acquisition device; determining an object identification sub-region in the visual beacon region; if the object identification sub-region indicates that the object to which the visual beacon is posted is the target object, obtaining image positions, in the visual beacon region, of feature points in feature sub-regions of the visual beacon region, wherein the feature sub-regions do not coincide with the object identification sub-region;
  • the target object is positioned based on the obtained image position and the calibration parameters of the image acquisition device.
  • the visual beacon includes at least one beacon unit
  • Each beacon unit includes: an object identification sub-region and a plurality of feature sub-regions, each feature sub-region being adjacent to the object identification sub-region; or each beacon unit includes: an object identification sub-region and one feature sub-region.
  • each beacon unit includes: an object identification sub-region and an even number of feature sub-regions, the number of feature sub-regions distributed on the left and on the right of the object identification sub-region is the same, and the graphics in the feature sub-regions distributed on the left and right sides of the object identification sub-region are symmetrical to each other.
  • the feature sub-region includes: a first graphic and a second graphic, the first graphic is used to provide preset feature points, and the second graphic is used to determine the relative positional relationship between the feature sub-region and the object identification sub-region.
  • Obtaining the image positions, in the visual beacon region, of the feature points in the feature sub-regions of the visual beacon region includes: identifying the preset feature points provided by the first graphic in each feature sub-region in the visual beacon region; and obtaining the image positions of the identified feature points in the visual beacon region.
  • Positioning the target object based on the obtained image positions and the calibration parameters of the image acquisition device includes: identifying the second graphic in each feature sub-region in the visual beacon region; determining the relative positional relationship between each feature sub-region and the object identification sub-region according to the graphic information of the second graphic in each feature sub-region; selecting, according to the relative positional relationship, the spatial positions corresponding to the feature points in each feature sub-region from the preset spatial positions of the feature points;
  • the target object is positioned according to the image position and spatial position corresponding to the feature points in each feature sub-region and the calibration parameters of the image acquisition device.
  • the second graphics in different characteristic sub-areas in the same beacon unit have at least one of the following differences:
  • the colors of the second graphics are different; the layout of the second graphics is different; the number of the second graphics is different; the shapes of the second graphics are different; and the sizes of the second graphics are different.
  • the number of beacon units included in the visual beacon is determined by the viewing angle range of the image acquisition device; and/or the number of characteristic sub-regions included in the beacon unit is determined by the viewing angle range of the image acquisition device;
  • determining the object identification sub-region in the visual beacon region includes:
  • based on preset position information of the object identification sub-region in the visual beacon, the object identification sub-region is determined from the visual beacon region; or, based on a preset target color parameter of the object identification sub-region, the object identification sub-region is determined from the visual beacon region; or, based on a preset second target image feature of the object identification sub-region, the object identification sub-region is determined from the visual beacon region.
  • an embodiment of the present application provides a positioning device, which includes:
  • the visual beacon area recognition module is used to identify the visual beacon area in the target image collected by the image acquisition device;
  • An object identification sub-region determination module configured to determine the object identification sub-region in the visual beacon area
  • a position acquisition module, configured to obtain image positions, in the visual beacon region, of feature points in feature sub-regions of the visual beacon region if the object identification sub-region indicates that the object to which the visual beacon is posted is the target object, wherein the feature sub-regions and the object identification sub-region do not overlap;
  • a target object positioning module is used to position the target object based on the obtained image position and the calibration parameters of the image acquisition device.
  • the visual beacon includes at least one beacon unit
  • Each beacon unit includes: an object identification sub-region and a plurality of feature sub-regions, each feature sub-region being adjacent to the object identification sub-region; or each beacon unit includes: an object identification sub-region and one feature sub-region.
  • each beacon unit includes: an object identification sub-region and an even number of feature sub-regions, the number of feature sub-regions distributed on the left and on the right of the object identification sub-region is the same, and the graphics in the feature sub-regions distributed on the left and right sides of the object identification sub-region are symmetrical to each other.
  • the feature sub-region includes: a first graphic and a second graphic, the first graphic is used to provide preset feature points, and the second graphic is used to determine the relative positional relationship between the feature sub-region and the object identification sub-region.
  • the position acquisition module is specifically used to identify the preset feature points provided by the first graphic in each feature sub-region in the visual beacon region, and to obtain the image positions of the identified feature points in the visual beacon region;
  • the target object positioning module is specifically used to identify the second graphic in each feature sub-region in the visual beacon region; determine the relative positional relationship between each feature sub-region and the object identification sub-region according to the graphic information of the second graphic in each feature sub-region; and select, according to the relative positional relationship, the spatial positions corresponding to the feature points in each feature sub-region from the preset spatial positions of the feature points;
  • the target object is positioned according to the image position and spatial position corresponding to the feature points in each feature sub-region and the calibration parameters of the image acquisition device.
  • the second graphics in different characteristic sub-areas in the same beacon unit have at least one of the following differences:
  • the colors of the second graphics are different; the layout of the second graphics is different; the number of the second graphics is different; the shapes of the second graphics are different; and the sizes of the second graphics are different.
  • the number of beacon units included in the visual beacon is determined by the viewing angle range of the image acquisition device; and/or the number of characteristic sub-regions included in the beacon unit is determined by the viewing angle range of the image acquisition device.
  • the object identification sub-region determination module is specifically configured to determine the object identification sub-region from the visual beacon region based on preset position information of the object identification sub-region in the visual beacon; or to determine the object identification sub-region from the visual beacon region based on a preset target color parameter of the object identification sub-region; or to determine the object identification sub-region from the visual beacon region based on a preset second target image feature of the object identification sub-region.
  • embodiments of the present application provide an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete communication with each other through the communication bus;
  • Memory used to store computer programs
  • the processor is used to implement the positioning method described in the first aspect when executing the program stored in the memory.
  • embodiments of the present application provide a computer-readable storage medium.
  • a computer program is stored in the computer-readable storage medium.
  • When the computer program is executed by a processor, the positioning method described in the first aspect is implemented.
  • embodiments of the present application provide a computer program product containing instructions that, when run on a computer, cause the computer to execute the positioning method described in the first aspect.
  • the visual beacon area is first identified from the target image collected by the image acquisition device.
  • The visual beacon region contains non-overlapping object identification sub-regions and feature sub-regions. When it is recognized that the information in the object identification sub-region indicates that the object to which the visual beacon is posted is the target object, the image positions of the feature points in the feature sub-regions can be further obtained, so that the target object can be positioned based on these image positions and the calibration parameters of the image acquisition device.
  • Since the visual beacon is divided into non-overlapping object identification sub-regions and feature sub-regions, wherein the object identification sub-region is used to identify the target object and the feature sub-regions are used to provide the feature points for positioning the target object, the accuracy of positioning the target object can be improved.
  • Figure 1 is a schematic flow chart of the first positioning method provided by the embodiment of the present application.
  • Figure 2 is a schematic diagram of the first visual beacon provided by the embodiment of the present application.
  • Figure 3 is a schematic diagram of a second visual beacon provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of a third visual beacon provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of a fourth visual beacon provided by an embodiment of the present application.
  • Figure 6 is a schematic flow chart of the second positioning method provided by the embodiment of the present application.
  • Figure 7 is a schematic structural diagram of a positioning device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the execution subject of the solution provided by the embodiments of this application can be a robot equipped with an image acquisition device and having a data processing function, or it can be any electronic device that can perform data communication with the robot and has a data processing function.
  • Figure 1 is a schematic flowchart of a first positioning method provided by an embodiment of the present application.
  • the above method includes the following steps S101-S104.
  • Step S101 Identify the visual beacon area in the target image collected by the image collection device.
  • the image capture device can be any device capable of capturing images.
  • the image acquisition device may be a monocular camera.
  • the visual beacon area is the area occupied by the visual beacon in the target image. After the robot obtains the target image collected by the image acquisition device, in order to determine whether the image acquisition device has captured the visual beacon, it needs to identify the visual beacon region from the target image.
  • the visual beacon area can be identified from the target image according to the preset characteristics of the visual beacon. Specifically, the visual beacon area can be identified in the following ways.
  • the first target image feature of the visual beacon area can be preset.
  • In this case, the image features of the target image can be extracted, and the image region corresponding to the features that match the first target image feature can be determined as the visual beacon region in the target image.
  • Alternatively, the boundary line shape of the visual beacon region can be preset, the edge image features of the target image are then extracted, and the visual beacon region in the target image is identified based on the edge image features. For example, a rectangular boundary line can be preset for the visual beacon, and a region enclosed by a rectangle composed of continuous edge feature points can then be selected as the visual beacon region.
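  • As a concrete illustration of the edge-based detection described above, the following is a minimal OpenCV sketch that searches for a large quadrilateral formed by continuous edge points and returns it as the candidate visual beacon region. The Canny thresholds, the minimum area, and the choice of the largest quadrilateral are illustrative assumptions, not part of the claimed method.

```python
import cv2

def find_beacon_region(target_image_bgr, min_area=1000):
    """Return the four corner points of a candidate visual beacon region, or None."""
    gray = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                          # edge feature points
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        peri = cv2.arcLength(c, True)
        quad = cv2.approxPolyDP(c, 0.02 * peri, True)         # rectangular boundary line
        if len(quad) == 4 and cv2.contourArea(quad) > min_area:
            if best is None or cv2.contourArea(quad) > cv2.contourArea(best):
                best = quad
    return best
```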
  • Step S102 Determine the object identification sub-region in the visual beacon region.
  • the visual beacon is introduced.
  • the visual beacon includes two sub-regions, one is the feature sub-region and the other is the object identification sub-region.
  • the characteristic sub-region is composed of at least one specific graphic, which is used to provide characteristic points or to determine the relative position relationship between the characteristic sub-region and the object identification sub-region.
  • the object identification sub-area carries the identification information of the object posted by the visual beacon, and is used to identify the object posted by the visual beacon, which will be introduced in detail later.
  • the solution provided by the embodiment of the present application does not limit the number of feature sub-regions and object identification sub-regions contained in the visual beacon, but no matter how many feature sub-regions and object identification sub-regions are included, the feature sub-regions and the object identification sub-regions do not overlap.
  • a visual beacon region may contain an object identification subregion and a feature subregion.
  • Figure 2 is a schematic diagram of a first visual beacon provided by an embodiment of the present application. It can be seen from the figure that the visual beacon includes an object identification sub-region and a feature sub-region, and the object identification sub-region and the feature sub-region do not overlap.
  • FIG. 2 is only one of the various visual beacons provided by the embodiment of the present application, and the embodiment of the present application does not limit the specific form of each sub-region and the layout of each sub-region in the visual beacon.
  • Other implementation methods of sub-region layout in visual beacons are detailed in subsequent embodiments and will not be described in detail here.
  • the specific form of the object identification sub-region can be a two-dimensional code carrying identification information, such as a DM (Data Matrix) code or a QR (Quick Response) code, a bar code or an ARUCO code carrying identification information, or a pattern generated based on other encoding rules and carrying identification information.
  • the above identification information can be the number of the object to which the visual beacon is posted, for example, the number of a shelf or a machine platform; depending on the actual scene, it can also include other information about the object, such as the capacity of the shelf or the type of goods stored on the shelf.
  • the embodiments of the present application do not limit the size, color, shape, etc. of the object identification sub-region.
  • the object identification sub-region in the visual beacon area can be determined in the following manner.
  • the target color parameter of the object identification sub-region can be set in advance, and based on the target color parameter, the object identification sub-region is determined from the visual beacon area. Specifically, the area in the visual beacon area whose color parameter is the target color parameter is identified, and the identified area is determined as the object identification sub-area in the target image. For example, the target color of the object identification sub-region can be set to a color that distinguishes it from other regions in the visual beacon area. It can be seen that based on the target color parameter of the object identification sub-region, the object identification sub-region can be quickly and accurately determined from the visual beacon area.
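  • The following is a minimal sketch of the colour-based selection described above, assuming the object identification sub-region is printed in a known target colour. The HSV range used here stands in for the preset target colour parameter and is purely illustrative.

```python
import cv2
import numpy as np

def find_object_id_subregion_by_colour(beacon_region_bgr,
                                       lower_hsv=(100, 80, 80),
                                       upper_hsv=(130, 255, 255)):
    """Return the bounding box (x, y, w, h) of the largest region in the target colour."""
    hsv = cv2.cvtColor(beacon_region_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)
```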
  • the second target image feature of the object identification sub-region can be preset, and the object identification sub-region is determined from the visual beacon region based on the second target image feature.
  • Specifically, the image features of the visual beacon region can be extracted, and the image region corresponding to the features that match the second target image feature can be determined as the object identification sub-region in the target image.
  • the object identification sub-region can be determined using the above feature matching method. It can be seen that based on the second target image feature of the object identification sub-region, the object identification sub-region can be quickly and accurately determined from the visual beacon area.
  • the object identification sub-region may be determined from the visual beacon area based on the preset position information of the object identification sub-region in the visual beacon. Since the visual beacon is set in advance and the position of the object identification sub-region in the visual beacon is predetermined, the preset position information of the object identification sub-region in the visual beacon can be obtained. Based on the differences in the above preset information, the following two situations are specifically used as examples to illustrate the method of determining the object identification sub-region.
  • the preset position information may be the relative position of the object identification sub-region in the visual beacon.
  • the preset location information may be: the object identification sub-region is located in the left half of the visual beacon, or it may be: the object identification sub-region is located in the middle third of the visual beacon, etc.
  • the preset position information may be the coordinates of the corner point of the object identification sub-region in the visual beacon.
  • For example, the coordinates of a corner point of the object identification sub-region in the visual beacon are (20, 10), and the length and width of the visual beacon are 40 units and 20 units respectively, so the ratios of the horizontal and vertical coordinates of the corner point to the length and width of the visual beacon are both 0.5. If the length and width of the visual beacon region in the target image are 800 pixels and 400 pixels respectively, multiplying these ratios by the length and width of the visual beacon region gives the coordinates of the corner point in the visual beacon region as (400, 200). In the same way, the pixel coordinates of the remaining three corner points of the object identification sub-region in the visual beacon region can be obtained, and the region enclosed by the rectangle whose corner points have these pixel coordinates can be determined as the object identification sub-region in the visual beacon region.
  • the object identification sub-region can be accurately determined from the visual beacon area.
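  • The numeric example above reduces to a simple proportional mapping; the short sketch below reproduces it. The function name and argument layout are illustrative only.

```python
def beacon_to_region_pixels(corner_xy, beacon_size, region_size_px):
    """Map a corner point from beacon coordinates to pixel coordinates in the beacon region."""
    (x, y), (bw, bh), (rw, rh) = corner_xy, beacon_size, region_size_px
    return (x / bw * rw, y / bh * rh)

# corner (20, 10) in a 40 x 20 beacon, region of 800 x 400 pixels -> (400.0, 200.0)
print(beacon_to_region_pixels((20, 10), (40, 20), (800, 400)))
```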
  • Step S103 If the object identification sub-region indicates that the object with the visual beacon posted is the target object, obtain the image position of the feature point in the feature sub-region in the visual beacon region in the visual beacon region.
  • the above target objects vary according to the objects that the robot faces when performing tasks.
  • For example, if the robot needs to transport a roller to the corresponding machine platform to dock the roller with the platform, the target object is the machine platform corresponding to the roller; if the robot needs to transport goods to a designated shelf, the target object is that designated shelf.
  • the object identification sub-area carries identification information of the object posted by the visual beacon
  • Therefore, this identification information can be read from the object identification sub-region. If the information indicates that the object to which the visual beacon is posted is the target object, it means that this object is the one that needs to be located, and in order to locate it, the information in the feature sub-regions of the visual beacon region can be further obtained.
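  • As one hedged example of reading this identification information: if the object identification sub-region is realised as a QR code (one of the forms listed earlier), OpenCV's QR detector can decode it and the result can be compared with the identifier of the task's target object. The identifier string "shelf-07" is a hypothetical example.

```python
import cv2

def is_target_object(object_id_subregion_img, target_id="shelf-07"):
    """Decode the QR code in the object identification sub-region and compare with the target id."""
    data, _, _ = cv2.QRCodeDetector().detectAndDecode(object_id_subregion_img)
    return data != "" and data == target_id
```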
  • the characteristic sub-area consists of at least one specific graphic.
  • the characteristic sub-region may be a combination of one of the shapes such as rectangle, circle, triangle or trapezoid, or may be a combination of multiple of the above shapes.
  • the embodiment of the present application does not limit the number of specific graphics constituting the characteristic sub-region.
  • the embodiment of the present application does not limit the color of each specific graphic in the feature sub-region.
  • the color of each specific graphic may be the same or different.
  • the color of the specific graphics constituting the characteristic sub-region may be a color that contrasts significantly with the color of the object identification sub-region.
  • the color of the object identification sub-region is black and white
  • the color of the specific graphics constituting the feature sub-region is red, etc.
  • Specifically, the feature sub-regions in the visual beacon region can be determined first, the feature points in these feature sub-regions are then extracted, and the pixel coordinates of the extracted feature points in the visual beacon region are obtained as the image positions of the feature points in the visual beacon region.
  • the meaning of the above feature points is determined by the specific graphics that constitute the feature sub-region. For example, if the feature sub-region consists of a rectangle, as shown in Figure 2, the corner points of the rectangle can be extracted as feature points; if the feature sub-region consists of a circle, the center of the circle can be extracted as a feature point; if the feature sub-region is composed of a rectangle and a circle, the corner points of the rectangle and the center of the circle can be extracted as feature points.
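  • The sketch below illustrates the two example cases above: corner points for a rectangular first graphic and circle centres for a circular one. The detector choices and parameter values are assumptions made for illustration.

```python
import cv2

def extract_feature_points(feature_subregion_gray):
    """Return pixel coordinates of feature points (rectangle corners and circle centres)."""
    points = []
    corners = cv2.goodFeaturesToTrack(feature_subregion_gray, maxCorners=8,
                                      qualityLevel=0.05, minDistance=5)
    if corners is not None:                                    # corner points of rectangles
        points += [tuple(p) for p in corners.reshape(-1, 2)]
    circles = cv2.HoughCircles(feature_subregion_gray, cv2.HOUGH_GRADIENT,
                               dp=1, minDist=10, param1=100, param2=30)
    if circles is not None:                                    # centres of circles
        points += [(float(x), float(y)) for x, y, r in circles[0]]
    return points
```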
  • the characteristic sub-region in the visual beacon area can be determined in the following way.
  • The characteristic sub-region in the visual beacon area can be determined by referring to the method of determining the object identification sub-region in the aforementioned step S102. The only difference is that the object identification sub-region is replaced by the characteristic sub-region, which will not be described again here.
  • the area in the visual beacon area excluding the object identification sub-region may be determined as the characteristic sub-region.
  • Step S104 Position the target object based on the obtained image position and the calibration parameters of the image acquisition device.
  • the calibration parameters of the image acquisition device can be pre-calibrated and stored in the memory of the robot.
  • the above-mentioned calibration parameters of the image acquisition device may be internal parameters and/or external parameters of the image acquisition device.
  • the internal parameter can be an internal parameter matrix of the image acquisition device, which contains information such as the focal length of the image acquisition device.
  • the above internal parameters can be pre-calibrated and stored using a checkerboard calibration method or other methods.
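  • A rough sketch of the chessboard-based pre-calibration mentioned above, following the standard OpenCV workflow; the 9x6 board size and the list of calibration images are assumptions for illustration.

```python
import cv2
import numpy as np

def calibrate_intrinsics(chessboard_images_bgr, board_size=(9, 6), square_size=1.0):
    """Return the intrinsic matrix (containing the focal length) and distortion coefficients."""
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size
    obj_pts, img_pts, image_size = [], [], None
    for img in chessboard_images_bgr:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)
    return K, dist
```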
  • the external parameter can be the spatial position of the image acquisition device when it collects the target image, that is, the position of the image acquisition device in the world coordinate system when it collects the target image.
  • The above external parameter can be determined by the robot based on the acquisition time of the target image and the robot's internal positioning device. For example, if the acquisition time of target image A is time a, the robot can obtain the spatial position of the image acquisition device at time a through its positioning device, and this spatial position can be used as the above external parameter.
  • When positioning the target object based on the obtained image positions and the calibration parameters, the spatial positions corresponding to the feature points can also be combined in the calculation.
  • the spatial position of the above-mentioned feature point may be the position of the feature point in the world coordinate system. The method of determining the spatial position of the above feature points is introduced below.
  • the spatial position of the feature points of the feature sub-region in the visual beacon in the world coordinate system is known.
  • the above-mentioned known spatial positions can be called preset spatial positions; for example, it can be known through pre-measurement that the preset spatial positions of the feature points in the world coordinate system are P1, P2, P3 and P4.
  • the spatial position of each feature point in the feature sub-region can be determined from the preset spatial position. For example, for the feature point S1 in the feature sub-region, if the preset spatial position corresponding to the feature point S1 is determined to be P1, the spatial position corresponding to the feature point S1 can be determined to be P1.
  • the spatial position corresponding to the feature point can be obtained from the preset spatial position in the following manner.
  • the spatial position corresponding to each feature point can be determined from the preset spatial positions of the feature points in the feature sub-region in a specific order. In one case, the spatial position corresponding to each feature point can be determined from the above-mentioned preset spatial positions according to the order in which the feature points appear in the feature sub-region from top to bottom and from left to right.
  • For example, the preset spatial positions are P1 to P4, and the order in which the feature points S1 to S4 appear in the feature sub-region, from top to bottom and from left to right, is S1→S3→S2→S4; then P1 can be determined as the spatial position corresponding to S1, P2 as the spatial position corresponding to S3, P3 as the spatial position corresponding to S2, and P4 as the spatial position corresponding to S4.
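  • A minimal sketch of the ordering rule above: detected feature points are sorted top-to-bottom and left-to-right and paired with the preset spatial positions in that order. The row-grouping tolerance is an assumption.

```python
def assign_spatial_positions(image_points, preset_spatial_positions, row_tolerance=10):
    """Pair image feature points with preset spatial positions in top-to-bottom, left-to-right order."""
    ordered = sorted(image_points, key=lambda p: (round(p[1] / row_tolerance), p[0]))
    return list(zip(ordered, preset_spatial_positions))
```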
  • the relative position relationship between the feature sub-region and the object identification sub-region can be determined first, and the spatial position corresponding to the feature point is determined from the preset spatial position based on the relative position relationship.
  • the specific implementation details are shown in steps S605-S607 in the implementation shown in Figure 6, which will not be described in detail here.
  • After the image positions and the corresponding spatial positions of the feature points are determined, the PnP (Perspective-n-Point) algorithm can be used, together with the calibration parameters of the image acquisition device, to calculate the pose, that is, to determine the rotation matrix and translation matrix of the visual beacon plane relative to the image plane of the camera. The rotation matrix represents the rotation angle of the camera's image plane relative to the visual beacon plane, and the translation matrix represents the translation distance of the camera's image plane relative to the visual beacon plane. Since the camera's image plane can be understood as the plane where the robot is located, and the visual beacon plane is the plane where the target object is located, the above rotation angle and translation distance can be regarded as the rotation angle and translation distance of the robot relative to the target object. Therefore, based on this rotation angle and translation distance, the position of the target object relative to the robot can be determined, that is, the target object is positioned.
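  • The following is a hedged sketch of the PnP step described above: given the matched 2D image positions and 3D spatial positions of the feature points plus the pre-calibrated intrinsics, solvePnP recovers the rotation and translation of the visual beacon plane relative to the camera, and hence of the target object relative to the robot. Passing distortion coefficients as None is an assumption that lens distortion is negligible.

```python
import cv2
import numpy as np

def locate_target(image_points, spatial_points, intrinsic_matrix, dist_coeffs=None):
    """Return the rotation matrix and translation vector of the beacon plane w.r.t. the camera."""
    obj = np.asarray(spatial_points, dtype=np.float32)
    img = np.asarray(image_points, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, img, intrinsic_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # rotation of the beacon plane relative to the image plane
    return rotation, tvec               # tvec: translation distance to the target object
```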
  • the visual beacon area is first identified from the target image collected by the image acquisition device.
  • The visual beacon region includes non-overlapping object identification sub-regions and feature sub-regions. When it is recognized that the information in the object identification sub-region indicates that the object to which the visual beacon is posted is the target object, the image positions of the feature points in the feature sub-regions can be further obtained, so that the target object can be positioned based on these image positions and the pre-calibrated parameters of the image acquisition device.
  • Since the visual beacon is divided into non-overlapping object identification sub-regions and feature sub-regions, wherein the object identification sub-region is used to identify the target object and the feature sub-regions are used to provide the feature points for positioning the target object, the accuracy of positioning the target object can be improved.
  • In addition, the embodiment of the present application can achieve highly accurate positioning of the target object using only a single image acquisition device installed on the robot and a pre-posted visual beacon. Compared with positioning methods based on radio frequency technology or 3D depth sensors, this effectively reduces the cost of positioning while ensuring high positioning accuracy.
  • the visual beacon may include an object identification sub-region and multiple characteristic sub-regions, and each characteristic sub-region is adjacent to the object identification sub-region.
  • the plurality of characteristic sub-regions mentioned above may be arranged on the same side of the object identification sub-region, or they may be arranged on different sides of the object identification sub-region.
  • the respective feature sub-regions may be located on the left or right side of the object identification sub-region, or may be located on the upper or lower side of the object identification sub-region, etc.
  • the visual beacon may include: an object identification sub-region and an even number of feature sub-regions, where the number of feature sub-regions distributed on the left and on the right of the object identification sub-region is the same, and the graphics in the feature sub-regions distributed on the left and right sides of the object identification sub-region are symmetrical to each other.
  • Figure 3 is a schematic diagram of the second visual beacon provided by the embodiment of the present application.
  • As shown in Figure 3, the visual beacon includes an object identification sub-region and two feature sub-regions, namely feature sub-region 1 and feature sub-region 2. One feature sub-region is distributed on each of the left and right sides of the object identification sub-region, and the graphics in the feature sub-regions distributed on the left and right sides are symmetrical to each other.
  • the two feature sub-regions can provide more feature points, thereby obtaining the image positions of more feature points, which is beneficial to improving the accuracy of positioning the target object based on the above image positions.
  • Figure 4 is a schematic diagram of the third visual beacon provided by the embodiment of the present application.
  • As shown in Figure 4, the visual beacon includes an object identification sub-region and four feature sub-regions, namely feature sub-region 1, feature sub-region 2, feature sub-region 3 and feature sub-region 4. Two feature sub-regions are distributed on each of the left and right sides of the object identification sub-region, and the graphics in the feature sub-regions distributed on the left and right sides are symmetrical to each other.
  • the four feature sub-regions can provide more feature points, thereby obtaining the image positions of more feature points, making the positioning of the target object based on the above image positions more accurate.
  • the visual beacon may include at least one beacon unit, and the sub-areas in each beacon unit may have the following layout:
  • Each beacon unit includes: an object identification sub-region and multiple characteristic sub-regions, and each characteristic sub-region is adjacent to the object identification sub-region.
  • Each beacon unit includes: an object identification sub-region and a characteristic sub-region.
  • Each beacon unit includes: an object identification sub-region and an even number of characteristic sub-regions.
  • the number of feature sub-regions distributed on the left and on the right of the object identification sub-region is the same, and the graphics in the feature sub-regions distributed on the left and right sides of the object identification sub-region are symmetrical to each other.
  • the sub-region layout form of the above-mentioned beacon unit has been explained in the previous introduction of the sub-region layout method of the visual beacon. The difference is only that the visual beacon is replaced by the beacon unit, which will not be described again here.
  • FIG. 5 is a schematic diagram of the fourth visual beacon provided by an embodiment of the present application.
  • the visual beacon includes two beacon units: beacon unit A and beacon unit B.
  • Each beacon unit contains feature sub-regions and an object identification sub-region: beacon unit A includes feature sub-regions A1 and A2 and object identification sub-region A; beacon unit B includes feature sub-regions B1 and B2 and object identification sub-region B.
  • In this way, each beacon unit in the visual beacon contains an object identification sub-region and feature sub-regions, which ensures that as long as any one beacon unit of the visual beacon is captured within the image acquisition range, the target object can be positioned based on the information contained in the object identification sub-region and the feature sub-regions of that beacon unit. This reduces the difficulty for the image acquisition device of collecting a target image containing the visual beacon and improves the efficiency of positioning the target object.
  • The beacon units included in the visual beacon can be exactly the same; or they can have the same object identification sub-region but a different number of feature sub-regions; or they can have the same object identification sub-region but a different sub-region layout, etc. This is not limited in the embodiment of the present application.
  • The number of beacon units included in the above-mentioned visual beacon may be determined by the viewing angle range of the image acquisition device. For example, if the image acquisition device has a larger collection range, the visual beacon can be set, as shown in Figure 5, to include multiple beacon units; if the image acquisition device has a smaller collection range, the visual beacon can be set, as shown in Figure 2, Figure 3 or Figure 4, to include only one beacon unit.
  • the corresponding relationship between the viewing angle range of the image collection device and the number of beacon units included in the visual beacon can be set in advance, and then the number of beacon units included in the visual beacon is set according to the above corresponding relationship.
  • The number of beacon units included in the visual beacon is selected based on the viewing angle range of the image acquisition device. This makes full use of the viewing angle range of the image acquisition device, which is conducive to the image acquisition device collecting a target image containing the visual beacon region and improves the efficiency of positioning the target object. In addition, this allows as many beacon units as possible to fall within the viewing angle range of the image acquisition device, so that a target image containing multiple beacon units can be collected, and the large number of feature points provided by the feature sub-regions of the multiple beacon units in the target image can then be used to position the target object, further improving the accuracy of positioning the target object.
  • The number of feature sub-regions included in the above-mentioned beacon unit may be determined by the viewing angle range of the image acquisition device. For example, if the collection range of the image acquisition device is large, the beacon unit can be set to include multiple feature sub-regions: as shown in the aforementioned Figure 3, the visual beacon includes one beacon unit, and the beacon unit includes two feature sub-regions; as shown in Figure 4, the visual beacon includes one beacon unit, and the beacon unit includes four feature sub-regions. If the collection range of the image acquisition device is small, the beacon unit can be set to include only one feature sub-region: as shown in the aforementioned Figure 2, the visual beacon includes one beacon unit, and the beacon unit includes only one feature sub-region.
  • the corresponding relationship between the viewing angle range of the image acquisition device and the number of characteristic sub-regions included in the beacon unit can be set in advance, and then the number of characteristic sub-regions included in the beacon unit is set according to the above-mentioned corresponding relationship.
  • The number of feature sub-regions included in the beacon unit is selected based on the viewing angle range of the image acquisition device. This makes full use of the viewing angle range of the image acquisition device, which is conducive to the image acquisition device collecting a target image containing the visual beacon region and improves the efficiency of positioning the target object. In addition, this allows as many feature sub-regions as possible to fall within the viewing angle range of the image acquisition device, so that a target image containing multiple feature sub-regions can be collected, and the large number of feature points provided by the multiple feature sub-regions in the target image can then be used to position the target object, further improving the accuracy of positioning the target object.
  • the number of beacon units included in the above-mentioned visual beacon and the number of characteristic sub-regions included in the beacon unit may both be determined by the viewing angle range of the image acquisition device.
  • the feature sub-region may include: a first graphic and a second graphic.
  • the first graphic is used to provide preset feature points
  • the second graphic is used to determine the relative positional relationship between the feature sub-region and the object identification sub-region. In this case, the image positions of the feature points in the first graphic can be obtained, and the spatial positions corresponding to the feature points can be determined based on the relative positional relationship between the feature sub-region and the object identification sub-region.
  • the embodiment of the present application provides another positioning method.
  • Figure 6 is a schematic flowchart of the second positioning method provided by an embodiment of the present application.
  • the above method includes the following steps S601-S608.
  • Step S601 Identify the visual beacon area in the target image collected by the image collection device.
  • Step S602 Determine the object identification sub-region in the visual beacon region.
  • Step S603 If the object identification sub-region indicates that the object with the visual beacon posted is the target object, identify the preset feature points provided by the first graphic in each feature sub-region in the visual beacon region.
  • the characteristic sub-region includes a first graphic and a second graphic.
  • the first graphic is used to provide feature points
  • the second graphic is used to identify the relative positional relationship between each characteristic sub-region and the object identification sub-region.
  • the detailed introduction of the second graphic can be found in subsequent steps S605-S606, which will not be described in detail here.
  • the function of the first graphic is to provide feature points. Therefore, any graphic that is convenient for extracting feature points can be used as the first graphic.
  • the embodiment of the present application does not limit the specific shape and type of the first graphic.
  • the first graphic may contain only one graphic or multiple graphics.
  • the first graphic can be a rectangle, in which case the corner points of the rectangle can be extracted as feature points;
  • the first graphic can be a circle with a center, in which case the center of the circle can be extracted as the feature point ;
  • the first graphic may also contain a rectangle and a circle. In this case, the corner points of the rectangle and the center of the circle can be extracted as feature points.
  • the color of the first graphic can be any color, and this application does not limit the color of the first graphic.
  • the target image can also be preprocessed, and the feature points provided by the first graphic in the above image are extracted based on the processed image. Please refer to subsequent embodiments for specific implementation details, which will not be described in detail here.
  • the identified feature points can also be verified based on a preset distribution relationship between each feature point and the second graphic. Please refer to subsequent embodiments for specific implementation details, which will not be described in detail here.
  • Step S604 Obtain the image position of the identified feature point in the visual beacon area.
  • the above-mentioned image position may be the pixel coordinates of the feature point in the visual beacon area.
  • the pixel coordinates of each feature point in the visual beacon area can be determined as the above-mentioned image position.
  • Step S605 Identify the second graphics in each feature sub-area in the visual beacon area.
  • the above-mentioned second graphic is used to identify the relative positional relationship between each feature sub-region and the object identification sub-region.
  • the embodiments of the present application do not limit the specific shape and types of the second graphics.
  • the above-mentioned second graphics may include only one type of graphics, or may include multiple graphics.
  • the second graphic may be a circle, a triangle, or may include a circle, a triangle, etc.
  • the color of the second graphic can be any color, and the embodiment of the present application does not limit the color of the second graphic.
  • edge image features of the visual beacon area of the target image can be extracted, and then the second graphics in each feature sub-area of the visual beacon area of the target image can be identified based on the edge image features. For example, if the second graphic in the characteristic sub-region A is a circle, then a circle composed of continuous edge feature points can be selected as the second graphic in the characteristic sub-region A. Other ways of identifying the second graphic can be obtained based on the way of identifying the visual beacon area introduced in step S101 of the embodiment shown in FIG. 1 , and will not be described again here.
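  • The following sketch illustrates the edge-based search above for a circular second graphic: contours whose circularity is close to 1 are kept. The 0.8 circularity threshold is an illustrative assumption.

```python
import cv2
import numpy as np

def find_circular_second_graphics(feature_subregion_gray, min_circularity=0.8):
    """Return contours in the feature sub-region that are approximately circular."""
    edges = cv2.Canny(feature_subregion_gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    circles = []
    for c in contours:
        area, peri = cv2.contourArea(c), cv2.arcLength(c, True)
        if peri > 0 and 4 * np.pi * area / (peri * peri) > min_circularity:
            circles.append(c)
    return circles
```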
  • the target image may also be pre-processed, and the second graphic may be extracted based on the processed image. Please refer to subsequent embodiments for specific implementation details, which will not be described in detail here.
  • Step S606 Determine the relative positional relationship between each characteristic sub-region and the object identification sub-region based on the graphic information of the second graphic in each characteristic sub-region.
  • the above relative position relationship can be determined based on different graphic information of the second graphic in each characteristic sub-region.
  • the above graphic information may be the color, shape, quantity, layout, size, etc. of the second graphic.
  • the second graphics in different characteristic sub-regions may have at least one of the following differences:
  • the color of the second graphic is different.
  • the above-mentioned relative position relationship can be determined based on the color of the second graphic in each characteristic sub-area.
  • For example, the color of the second graphic in feature sub-region A is red, which indicates that feature sub-region A is on the left side of the object identification sub-region; the color of the second graphic in feature sub-region B is blue, which indicates that feature sub-region B is on the right side of the object identification sub-region, etc.
  • the layout of the second graph is different.
  • the above relative position relationship can be determined according to the layout of the second graphics in each characteristic sub-area.
  • For example, the second graphic of feature sub-region C is located in the upper half of feature sub-region C, which indicates that feature sub-region C is on the left side of the object identification sub-region; the second graphic of feature sub-region D is located in the lower half of feature sub-region D, which indicates that feature sub-region D is on the right side of the object identification sub-region, etc.
  • the second graphic in the characteristic sub-region E and the second graphic in the characteristic sub-region F are mirror images of each other, and the relative positional relationship can be determined based on different mirror image features of the second graphic.
  • the number of second graphics is different. In this way, the above-mentioned relative position relationship can be determined based on the number of second graphics in each characteristic sub-area.
  • For example, feature sub-region G contains one second graphic, which indicates that feature sub-region G is above the object identification sub-region; feature sub-region H contains two second graphics, which indicates that feature sub-region H is below the object identification sub-region, etc.
  • the shape of the second figure is different.
  • the above-mentioned relative position relationship can be determined based on the shape of the second figure in each characteristic sub-region.
  • For example, the second graphic in feature sub-region I is a circle, which indicates that feature sub-region I is on the upper side of the object identification sub-region; the second graphic in feature sub-region J is a triangle, which indicates that feature sub-region J is on the lower side of the object identification sub-region, etc.
  • the size of the second graphic is different.
  • the above-mentioned relative position relationship can be determined according to the size of the second graphic in each characteristic sub-region.
  • For example, the area of the second graphic in feature sub-region K is larger, which indicates that feature sub-region K is on the upper side of the object identification sub-region; the area of the second graphic in feature sub-region L is smaller, which indicates that feature sub-region L is on the lower side of the object identification sub-region, etc.
  • the different characteristics of the second graphic in each characteristic sub-region mentioned above are only examples.
  • the embodiments of the present application do not limit the method of using the characteristics of the second graphic to determine the relative positional relationship between the characteristic sub-region and the object identification sub-region.
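  • As a minimal sketch of the first (colour-based) example above: a predominantly red second graphic is taken to mark a feature sub-region on the left of the object identification sub-region, and a predominantly blue one a sub-region on the right. The HSV ranges are illustrative assumptions.

```python
import cv2

def relative_position_from_colour(second_graphic_bgr):
    """Return 'left' or 'right' depending on whether the second graphic is mostly red or blue."""
    hsv = cv2.cvtColor(second_graphic_bgr, cv2.COLOR_BGR2HSV)
    red_pixels = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255)).sum()
    blue_pixels = cv2.inRange(hsv, (100, 80, 80), (130, 255, 255)).sum()
    return "left" if red_pixels > blue_pixels else "right"
```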
  • Step S607 According to the relative position relationship, select the spatial position corresponding to the feature point in each feature sub-region from the preset spatial position of each feature point.
  • Specifically, the target preset spatial positions of the feature points whose positions match the above relative positional relationship can be selected first from the preset spatial positions, and the spatial positions corresponding to the feature points in each feature sub-region can then be further selected from these target preset spatial positions. For example, if feature sub-region A is located on the left side of the object identification sub-region, the target preset spatial positions of the feature points located on the left side of the object identification sub-region can be selected first, which narrows the selection range, and the spatial positions corresponding to the feature points in each feature sub-region are then selected from the target preset spatial positions.
  • For the way of further selecting the spatial positions corresponding to the feature points in each feature sub-region from the target preset spatial positions, reference can be made to the method given in step S104 of the embodiment shown in Figure 1, that is, determining the spatial position corresponding to each feature point from the target preset spatial positions in a specific order, which will not be described again here.
  • In another implementation, a rectangular coordinate system can be established with the geometric center of the second graphic in the feature sub-region as the origin, the distance between each feature point and the origin and the angle between the line connecting each feature point to the origin and the horizontal line can then be obtained, and the target preset spatial position corresponding to each feature point in the feature sub-region can be selected based on the above distance and angle. For example, a feature point at a distance of 10 units from the origin and an angle of 30 degrees corresponds to target preset spatial position a, and a feature point at a distance of 15 units from the origin and an angle of 50 degrees corresponds to target preset spatial position b, etc.
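  • The sketch below combines the two-stage selection above: the preset spatial positions are first narrowed to those on the same side as the feature sub-region, then each feature point is matched by its distance and angle relative to the geometric center of the second graphic. Representing the preset table as a mapping keyed by (side, distance, angle) is an assumption made for illustration; in practice the measured distances would need to be expressed in the same units as the presets.

```python
import math

def select_spatial_positions(feature_points, second_graphic_centre, side, preset_table):
    """Match each image feature point to a preset spatial position.

    preset_table: {(side, distance, angle_degrees): spatial_position} -- an assumed layout.
    """
    cx, cy = second_graphic_centre
    candidates = [k for k in preset_table if k[0] == side]    # narrow by relative position
    matched = {}
    for (x, y) in feature_points:
        dist = math.hypot(x - cx, y - cy)
        angle = math.degrees(math.atan2(y - cy, x - cx))
        key = min(candidates, key=lambda k: abs(k[1] - dist) + abs(k[2] - angle))
        matched[(x, y)] = preset_table[key]
    return matched
```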
  • Step S608 Position the target object according to the image position and spatial position corresponding to the feature points in each feature sub-region and the calibration parameters of the image acquisition device.
  • step S608 has been described in step S104 and will not be described again here.
  • In the above manner, the image positions of the feature points can be quickly obtained through the first graphic, and the relative positional relationship between the feature sub-region and the object identification sub-region can be determined based on the second graphic, which in turn makes it possible to determine the spatial position corresponding to each feature point from the preset spatial positions more quickly based on the above relative positional relationship.
  • the following describes another method of extracting feature points provided by the first graphic, identifying the second graphic, and verifying the feature points mentioned above through steps A to C.
  • Step A: Preprocess the target image.
  • In one implementation, the preprocessing may be gradient detection on the target image, converting the target image into a gradient image based on the detection results; a gradient image represents the change information of edges, textures and the like throughout the image.
  • In another implementation, noise may exist in the target image; to remove it, the preprocessing may include filtering the image, for example Gaussian filtering or mean filtering.
  • In yet another implementation, the preprocessing may be morphological processing of the target image, making the first graphic and the second graphic more distinct and facilitating subsequent identification of the first graphic and the second graphic from the target image based on information such as connected components. The morphological processing may be image erosion, image dilation and the like. A combined preprocessing sketch is given below.
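A minimal sketch that combines the three preprocessing options above (filtering, gradient detection and morphological processing), assuming OpenCV; the kernel sizes and the particular operators are illustrative assumptions.

```python
import cv2
import numpy as np

def preprocess(target_image):
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    # Filtering to suppress noise (Gaussian here; mean filtering also works).
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Gradient detection: convert the image into a gradient image that
    # highlights edge and texture changes.
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1)
    gradient = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    # Morphological processing (dilation then erosion) to make the first and
    # second graphics more distinct for connected-component analysis.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.erode(cv2.dilate(gradient, kernel), kernel)
```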
  • Step B: Based on the processed image, extract the first graphic and the second graphic from the image.
  • Specifically, since the target image has been preprocessed, the first graphic and the second graphic can be extracted from the processed image more quickly and accurately using the aforementioned edge-feature extraction method.
  • Step C: Extract the feature points in the first graphic, and verify the extracted feature points based on the preset distribution relationship between the second graphic and the feature points.
  • Specifically, the distribution relationship between the extracted feature points and the extracted second graphic can be compared with the preset distribution relationship, and feature points whose distribution does not conform to the preset relationship are determined to be invalid. For example, if the feature points are preset to lie above the second graphic and the extracted feature points include points distributed below the second graphic, those points can be regarded as misidentified and determined to be invalid feature points. This further improves the accuracy of the extracted feature points; a sketch of this check follows.
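A minimal sketch of the verification in step C, using the example from the text in which feature points are preset to lie above the second graphic; comparing image row coordinates with the centroid of the second graphic is an illustrative assumption about how "above" is checked.

```python
def verify_feature_points(feature_points, second_graphic_centroid):
    """Split extracted feature points into valid and invalid ones according to
    the preset distribution rule (here: a valid point must lie above the
    second graphic, i.e. have a smaller row coordinate than its centroid)."""
    cy = second_graphic_centroid[1]
    valid = [pt for pt in feature_points if pt[1] < cy]
    invalid = [pt for pt in feature_points if pt[1] >= cy]
    return valid, invalid
```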
  • Corresponding to the above positioning method, embodiments of the present application also provide a positioning device.
  • Figure 7 is a schematic structural diagram of a positioning device provided by an embodiment of the present application; the device includes the following modules 701-704.
  • The visual beacon region identification module 701 is used to identify the visual beacon region in the target image collected by the image acquisition device;
  • The object identification sub-region determination module 702 is used to determine the object identification sub-region in the visual beacon region;
  • The position obtaining module 703 is used to obtain, if the object identification sub-region indicates that the object on which the visual beacon is posted is the target object, the image positions of the feature points in the feature sub-region of the visual beacon region, where the feature sub-region does not overlap the object identification sub-region;
  • The target object positioning module 704 is used to position the target object based on the obtained image positions and the calibration parameters of the image acquisition device. A sketch of this module structure is given below.
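A minimal sketch of how modules 701-704 might be wired together in code; the class name, method names and placeholder bodies are illustrative assumptions, with each method standing in for the corresponding module described above.

```python
class PositioningDevice:
    """Illustrative organisation of modules 701-704; bodies are placeholders."""

    def __init__(self, calib_params, preset_positions, target_id):
        self.calib = calib_params        # pre-calibrated camera parameters
        self.presets = preset_positions  # preset spatial positions of feature points
        self.target_id = target_id       # identifier of the target object

    def identify_beacon_region(self, target_image):        # module 701
        raise NotImplementedError

    def determine_object_id_region(self, beacon_region):   # module 702
        raise NotImplementedError        # returns the decoded identifier

    def get_feature_image_positions(self, beacon_region):  # module 703
        raise NotImplementedError

    def locate_target(self, image_positions):              # module 704
        raise NotImplementedError        # e.g. a PnP solve with self.presets and self.calib

    def run(self, target_image):
        beacon = self.identify_beacon_region(target_image)
        if self.determine_object_id_region(beacon) != self.target_id:
            return None                  # the posted object is not the target object
        return self.locate_target(self.get_feature_image_positions(beacon))
```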
  • When positioning with the above device, the visual beacon region is first identified from the target image collected by the image acquisition device; the visual beacon region contains non-overlapping object identification sub-regions and feature sub-regions. When the information in the object identification sub-region indicates that the object on which the visual beacon is posted is the target object, the image positions of the feature points in the feature sub-region can be further obtained, so that the target object can be positioned based on these image positions and the pre-calibrated parameters of the image acquisition device.
  • In the solution provided by the embodiments of the present application, the visual beacon is divided into non-overlapping object identification sub-regions and feature sub-regions, where the object identification sub-region is used to identify the target object and the feature sub-region is used to provide abundant feature points for positioning the target object. This ensures that the two sub-regions do not interfere with each other, improves the precision of the position information of the feature points extracted from the feature sub-region, and thus further improves the accuracy of positioning the target object based on that position information.
  • Moreover, the embodiments of the present application can achieve highly accurate positioning of the target object using only a single image acquisition device installed on the robot and a pre-posted visual beacon; compared with positioning based on radio-frequency technology, 3D depth sensors and other methods, this effectively reduces the cost of positioning while maintaining high positioning accuracy.
  • In one embodiment of the present application, the visual beacon includes at least one beacon unit, and each beacon unit includes one object identification sub-region and a plurality of feature sub-regions, each feature sub-region being adjacent to the object identification sub-region.
  • Alternatively, the visual beacon includes at least one beacon unit, and each beacon unit includes one object identification sub-region and one feature sub-region.
  • Under this layout, each beacon unit in the visual beacon contains an object identification sub-region and a feature sub-region. This ensures that as long as any beacon unit of the visual beacon is captured in the collected image, the target object can be positioned based on the information contained in the object identification sub-region and feature sub-region of that beacon unit, which reduces the difficulty for the image acquisition device of capturing a target image containing the visual beacon and improves the efficiency of positioning the target object.
  • In one embodiment of the present application, each beacon unit includes one object identification sub-region and an even number of feature sub-regions; the numbers of feature sub-regions distributed on the left and right of the object identification sub-region are the same, and the graphics in the feature sub-regions on the left and right sides of the object identification sub-region are symmetrical to each other.
  • The symmetrically distributed even number of feature sub-regions can provide more feature points, so that the image positions of more feature points are obtained, which helps improve the accuracy of positioning the target object based on these image positions.
  • In one embodiment of the present application, the feature sub-region includes a first graphic and a second graphic; the first graphic is used to provide preset feature points, and the second graphic is used to determine the relative positional relationship between the feature sub-region and the object identification sub-region.
  • The position obtaining module 703 is specifically used to identify the preset feature points provided by the first graphic in each feature sub-region of the visual beacon region, and to obtain the image positions of the identified feature points in the visual beacon region;
  • The target object positioning module 704 is specifically used to identify the second graphic in each feature sub-region of the visual beacon region; determine, according to the graphic information of the second graphic in each feature sub-region, the relative positional relationship between each feature sub-region and the object identification sub-region; select, according to the relative positional relationship, the spatial positions corresponding to the feature points in each feature sub-region from the preset spatial positions of the feature points; and position the target object according to the image positions and spatial positions corresponding to the feature points in each feature sub-region and the calibration parameters of the image acquisition device.
  • In this way, the image positions of the feature points can be obtained quickly through the first graphic, and the relative positional relationship between each feature sub-region and the object identification sub-region can be determined based on the second graphic; based on this relative relationship, the spatial position corresponding to each feature point can then be determined more quickly from the preset spatial positions.
  • In one embodiment of the present application, the second graphics in different feature sub-regions of the same beacon unit differ in at least one of the following: the color of the second graphic; the layout of the second graphic; the number of second graphics; the shape of the second graphic; the size of the second graphic. A color-based sketch is given below.
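As an example of using one of these differences, the sketch below decides on which side of the object identification sub-region a feature sub-region lies from the color of its second graphic, assuming OpenCV; the mapping (a red second graphic marks a sub-region on the left, a blue one on the right, as in the example in the description) and the HSV thresholds are illustrative assumptions.

```python
import cv2

def side_from_second_graphic(sub_region_bgr):
    """Classify a feature sub-region as left or right of the object
    identification sub-region by the dominant color of its second graphic."""
    hsv = cv2.cvtColor(sub_region_bgr, cv2.COLOR_BGR2HSV)
    red_pixels = int(cv2.inRange(hsv, (0, 80, 80), (10, 255, 255)).sum())
    blue_pixels = int(cv2.inRange(hsv, (100, 80, 80), (130, 255, 255)).sum())
    return "left" if red_pixels > blue_pixels else "right"
```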
  • In one embodiment of the present application, the number of beacon units included in the visual beacon is determined by the viewing-angle range of the image acquisition device.
  • Selecting the number of beacon units in the visual beacon according to the viewing-angle range of the image acquisition device makes full use of that range, which helps the image acquisition device capture a target image containing the visual beacon region and improves the efficiency of positioning the target object. In addition, it allows the viewing-angle range to cover as many beacon units as possible, so that a target image containing multiple beacon units can be captured and the target object can be positioned using the large number of feature points provided by the feature sub-regions of those beacon units, further improving the positioning accuracy. A lookup sketch is given below.
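The description only requires a preset correspondence between the camera's viewing-angle range and the number of beacon units; the sketch below shows one such lookup, with the breakpoints and counts being purely illustrative assumptions.

```python
def beacon_units_for_fov(horizontal_fov_deg):
    """Preset correspondence between viewing-angle range and beacon-unit count."""
    if horizontal_fov_deg >= 90:   # wide field of view: more beacon units fit
        return 3
    if horizontal_fov_deg >= 60:
        return 2
    return 1                       # narrow field of view: a single beacon unit
```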
  • In another embodiment of the present application, the number of feature sub-regions included in a beacon unit is determined by the viewing-angle range of the image acquisition device.
  • Selecting the number of feature sub-regions in a beacon unit according to the viewing-angle range of the image acquisition device likewise makes full use of that range, which helps the image acquisition device capture a target image containing the visual beacon region and improves the efficiency of positioning the target object. It also allows the viewing-angle range to cover as many feature sub-regions as possible, so that a target image containing multiple feature sub-regions can be captured and the target object can be positioned using the large number of feature points they provide, further improving the positioning accuracy.
  • In one embodiment of the present application, the object identification sub-region determination module 702 is specifically configured to determine the object identification sub-region from the visual beacon region based on preset position information of the object identification sub-region in the visual beacon. In this way, the object identification sub-region can be determined accurately from the visual beacon region.
  • In one embodiment of the present application, the object identification sub-region determination module 702 is specifically configured to determine the object identification sub-region from the visual beacon region based on a preset target color parameter of the object identification sub-region. In this way, the object identification sub-region can be determined quickly and accurately from the visual beacon region; a color-threshold sketch is given below.
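A minimal sketch of determining the object identification sub-region by its preset target color parameter, assuming OpenCV; the HSV range arguments stand in for that parameter, and the largest-contour rule is an illustrative assumption.

```python
import cv2

def find_id_region_by_color(beacon_region_bgr, lower_hsv, upper_hsv):
    """Return the bounding box (x, y, w, h) of the largest connected area whose
    color matches the preset target color of the object identification sub-region."""
    hsv = cv2.cvtColor(beacon_region_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```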
  • In one embodiment of the present application, the object identification sub-region determination module 702 is specifically configured to determine the object identification sub-region from the visual beacon region based on a preset second target image feature of the object identification sub-region. In this way, the object identification sub-region can also be determined quickly and accurately from the visual beacon region; a detector-based sketch is given below.
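When the object identification sub-region is a QR code (one of the forms mentioned in the description), its distinctive image features can be matched directly by an off-the-shelf detector; the sketch below uses OpenCV's QR detector as a stand-in for matching against the preset second target image feature, a substitution for illustration rather than the method mandated by the disclosure.

```python
import cv2

def find_id_region_by_image_feature(beacon_region_bgr):
    """Locate and decode a QR-code object identification sub-region; returns
    the decoded identifier and the four corner points, or None if not found."""
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(beacon_region_bgr)
    if points is None:
        return None
    return data, points.reshape(-1, 2)
```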
  • Embodiments of the present application also provide an electronic device, as shown in Figure 8, including a processor 801, a communication interface 802, a memory 803 and a communication bus 804, where the processor 801, the communication interface 802 and the memory 803 communicate with each other through the communication bus 804;
  • The memory 803 is used to store a computer program;
  • The processor 801 is used to implement the positioning method provided by the embodiments of the present application when executing the program stored in the memory 803.
  • The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the communication bus can be divided into address bus, data bus, control bus, etc. For ease of presentation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above-mentioned electronic devices and other devices.
  • the memory may include random access memory (Random Access Memory, RAM) or non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk memory.
  • the memory may also be at least one storage device located far away from the aforementioned processor.
  • The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • In yet another embodiment provided by the present application, a computer-readable storage medium is provided; the computer-readable storage medium stores a computer program which, when executed by a processor, implements the positioning method provided by the embodiments of the present application.
  • In yet another embodiment provided by the present application, a computer program product containing instructions is also provided which, when run on a computer, causes the computer to execute the positioning method provided by the embodiments of the present application.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (such as coaxial cable, optical fiber or digital subscriber line (DSL)) or wireless (such as infrared, radio or microwave) means.
  • The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • the available media may be magnetic media (eg, floppy disk, hard disk, tape), optical media (eg, DVD), or other media (eg, Solid State Disk (SSD)), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

本申请实施例提供了一种定位方法及装置,涉及数据处理技术领域,上述方法包括:识别图像采集设备所采集的目标图像中的视觉信标区域;确定所述视觉信标区域中的对象标识子区域;若所述对象标识子区域表征张贴有视觉信标的对象为目标对象,则获得所述视觉信标区域中特征子区域内特征点在所述视觉信标区域中的图像位置,其中,所述特征子区域与所述对象标识子区域不重合;基于所获得的图像位置以及所述图像采集设备的标定参数,对所述目标对象进行定位。应用本申请实施例提供的方案,能够提高对目标对象进行定位的准确度。

Description

一种定位方法及装置
本申请要求于2022年7月11日提交中国专利局、申请号为202210810431.9发明名称为“一种定位方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及数据处理技术领域,特别是涉及一种定位方法及装置。
背景技术
近年来,与机器人相关的技术得到了迅速发展,进而机器人在各行各业的应用越来越广泛。机器人在工作过程中需要面向不同的对象完成不同的工作,因此,需要精确定位其所要完成的任务面向的对象。例如,机器人要将辊筒搬运至对应的机台处,实现辊筒与机台的对接,这种情况下机器人需要准确定位辊筒对应的机台;机器人要将货物运送到指定货架上,这种情况下机器人需要准确定位上述指定货架。
现有技术中,往往在机器人工作场景中的各个对象上安装射频卡,机器人基于接收到的射频卡发射的无线信号中携带的信息,对任务面向的对象进行定位。然而在实际场景中,无线信号容易受到环境中噪声信号的干扰,导致机器人获得的无线信号失真,从而机器人难以基于上述无线信号获得对象的准确信息,进而导致对对象进行定位的准确度较低。
发明内容
本申请实施例的目的在于提供一种定位方法及装置,以提高对对象进行定位的准确度。具体技术方案如下:
第一方面,本申请实施例提供了一种定位方法,所述方法包括:
识别图像采集设备所采集的目标图像中的视觉信标区域;
确定所述视觉信标区域中的对象标识子区域;
若所述对象标识子区域表征张贴有视觉信标的对象为目标对象,则获得所述视觉信标区域中特征子区域内特征点在所述视觉信标区域中的图像位置,其中,所述特征子区域与所述对象标识子区域不重合;
基于所获得的图像位置以及所述图像采集设备的标定参数,对所述目标对象进行定位。
本申请的一个实施例中,所述视觉信标包括至少一个信标单元,
每一信标单元包括:一个对象标识子区域和多个特征子区域,各特征子区域与所述对象标识子区域相邻;或每一信标单元包括:一个对象标识子区域和一个特征子区域。
本申请的一个实施例中,每一信标单元包括:一个对象标识子区域和偶数个特征子区域,所述对象标识子区域左右分布的特征子区域数量相同,且所述对象标识子区域左侧与右侧分布的特征子区域中的图形互相对称。
本申请的一个实施例中,所述特征子区域包括:第一图形和第二图形,所述第一图形用于提供预设的特征点,所述第二图形用于确定所述特征子区域与所述对象标识子区域间的相对位置关系;
所述获得所述视觉信标区域中特征子区域内特征点在所述视觉信标区域中的图像位置,包括:
识别所述视觉信标区域中各特征子区域内由所述第一图形提供的预设的特征点;
获得所识别到特征点在所述视觉信标区域中的图像位置;
所述基于所获得图像位置以及所述图像采集设备的标定参数,对所述目标对象进行定位,包括:
识别所述视觉信标区域中各特征子区域内的所述第二图形;
根据各特征子区域内所述第二图形的图形信息,确定各特征子区域与所述对象标识子区域间的相对 位置关系;
根据所述相对位置关系,从各特征点的预设空间位置中,选择各特征子区域内的特征点对应的空间位置;
根据各特征子区域内的特征点对应的图像位置和空间位置、以及所述图像采集设备的标定参数,对所述目标对象进行定位。
本申请的一个实施例中,同一信标单元中不同特征子区域中的所述第二图形存在以下不同中的至少一种:
所述第二图形的颜色不同;所述第二图形的布局不同;所述第二图形的数量不同;所述第二图形的形状不同;所述第二图形的大小不同。
本申请的一个实施例中,
所述视觉信标中包括的信标单元的数量由所述图像采集设备的视角范围确定;和/或所述信标单元包括的特征子区域的数量由所述图像采集设备的视角范围确定;
本申请的一个实施例中,所述确定所述视觉信标区域中的对象标识子区域,包括:
基于对象标识子区域在视觉信标中的预设位置信息,从所述视觉信标区域中确定所述对象标识子区域;或基于预先设定的对象标识子区域的目标颜色参数,从所述视觉信标区域中确定所述对象标识子区域;或基于预先设定的对象标识子区域的第二目标图像特征,从所述视觉信标区域中确定所述对象标识子区域。
第二方面,本申请实施例提供的一种定位装置,所述装置包括:
视觉信标区域识别模块,用于识别图像采集设备所采集的目标图像中的视觉信标区域;
对象标识子区域确定模块,用于确定所述视觉信标区域中的对象标识子区域;
位置获得模块,用于若所述对象标识子区域表征张贴有视觉信标的对象为目标对象,则获得所述视觉信标区域中特征子区域内特征点在所述视觉信标区域中的图像位置,其中,所述特征子区域与所述对象标识子区域不重合;
目标对象定位模块,用于基于所获得的图像位置以及所述图像采集设备的标定参数,对所述目标对象进行定位。
本申请的一个实施例中,所述视觉信标包括至少一个信标单元,
每一信标单元包括:一个对象标识子区域和多个特征子区域,各特征子区域与所述对象标识子区域相邻;或每一信标单元包括:一个对象标识子区域和一个特征子区域。
本申请的一个实施例中,每一信标单元包括:一个对象标识子区域和偶数个特征子区域,所述对象标识子区域左右分布的特征子区域数量相同,且所述对象标识子区域左侧与右侧分布的特征子区域中的图形互相对称。
本申请的一个实施例中,所述特征子区域包括:第一图形和第二图形,所述第一图形用于提供预设的特征点,所述第二图形用于确定所述特征子区域与所述对象标识子区域间的相对位置关系;
所述位置获得模块,具体用于识别所述视觉信标区域中各特征子区域内由所述第一图形提供的预设的特征点;获得所识别到特征点在所述视觉信标区域中的图像位置;
所述目标对象定位模块,具体用于识别所述视觉信标区域中各特征子区域内的所述第二图形;根据各特征子区域内所述第二图形的图形信息,确定各特征子区域与所述对象标识子区域间的相对位置关系;根据所述相对位置关系,从各特征点的预设空间位置中,选择各特征子区域内的特征点对应的空间位置; 根据各特征子区域内的特征点对应的图像位置和空间位置、以及所述图像采集设备的标定参数,对所述目标对象进行定位。
本申请的一个实施例中,同一信标单元中不同特征子区域中的所述第二图形存在以下不同中的至少一种:
所述第二图形的颜色不同;所述第二图形的布局不同;所述第二图形的数量不同;所述第二图形的形状不同;所述第二图形的大小不同。
本申请的一个实施例中,
所述视觉信标中包括的信标单元的数量由所述图像采集设备的视角范围确定;和/或所述信标单元包括的特征子区域的数量由所述图像采集设备的视角范围确定。
本申请的一个实施例中,所述对象标识子区域确定模块,具体用于基于对象标识子区域在视觉信标中的预设位置信息,从所述视觉信标区域中确定所述对象标识子区域;或基于预先设定的对象标识子区域的目标颜色参数,从所述视觉信标区域中确定所述对象标识子区域;或基于预先设定的对象标识子区域的第二目标图像特征,从所述视觉信标区域中确定所述对象标识子区域。
第三方面,本申请实施例提供了一种电子设备,包括处理器、通信接口、存储器和通信总线,其中,处理器,通信接口,存储器通过通信总线完成相互间的通信;
存储器,用于存放计算机程序;
处理器,用于执行存储器上所存放的程序时,实现上述第一方面所述的定位方法。
第四方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述第一方面所述的定位方法。
第五方面,本申请实施例提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一方面所述的定位方法。
由以上可见,应用本申请实施例提供的方案进行对象定位时,首先从图像采集设备所采集的目标图像中识别出视觉信标区域,视觉信标区域中包含不重合的对象标识子区域和特征子区域,在识别到对象标识子区域中的信息表征张贴有视觉信标的对象为目标对象的情况下,可以进一步获取上述特征子区域中特征点的图像位置,这样基于上述图像位置和图像采集设备的标定参数,能够实现对目标对象进行定位。
另外,应用本申请实施例提供的方案进行对象定位时,不涉及射频卡,从而定位流程不再依赖于射频卡发送的无线信号,这样整个定位过程不会出现因无线信号被环境噪声信号干扰导致定位准确度遭受影响的情况,因此,能够提高对目标对象进行定位的准确度。
再者,在本申请实施例提供的方案中,在视觉信标中划分了互不重合的对象标识子区域和特征子区域,其中,对象标识子区域用于标识目标对象,特征子区域用于提供丰富的特征点来进行对目标对象的定位,这样能够保证上述两个子区域互不干扰,提高了所提取到的特征子区域中特征点位置信息的精度,进而进一步提高了基于上述特征点的位置信息对目标对象进行定位的准确度。
当然,实施本申请的任一产品或方法并不一定需要同时达到以上所述的所有优点。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,还可以根据这些附图获得其他的实施例。
图1为本申请实施例提供的第一种定位方法的流程示意图;
图2为本申请实施例提供的第一种视觉信标的示意图;
图3为本申请实施例提供的第二种视觉信标的示意图;
图4为本申请实施例提供的第三种视觉信标的示意图;
图5为本申请实施例提供的第四种视觉信标的示意图;
图6为本申请实施例提供的第二种定位方法的流程示意图;
图7为本申请实施例提供的一种定位装置的结构示意图;
图8为本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员基于本申请所获得的所有其他实施例,都属于本申请保护的范围。
首先对发明实施例所提供方案的执行主体进行说明:
本申请实施例所提供方案的执行主体可以为安装有图像采集设备、并具有数据处理功能的机器人,也可以是任意一台能够与机器人进行数据通信、且具有数据处理功能的电子设备。
下面以执行主体为机器人为例,对本申请实施例提供的定位方法进行具体说明。
参见图1,图1为本申请实施例提供的第一种定位方法的流程示意图,上述方法包括以下步骤S101-S104。
步骤S101:识别图像采集设备所采集的目标图像中的视觉信标区域。
图像采集设备可以是任意能够采集图像的设备。一种实施方式中,图像采集设备可以是单目相机。
视觉信标区域为视觉信标在目标图像中所占的区域,机器人获取到图像采集设备采集到的目标图像之后,为了确定图像采集设备拍摄到了视觉信标,需要从目标图像中识别出视觉信标区域。
在识别视觉信标区域时,可以根据预先设定的视觉信标的特征从目标图像中识别出视觉信标区域。具体的,可以通过以下方式识别出视觉信标区域。
一种实施方式中,可以预先设定视觉信标区域的第一目标图像特征,这种情况下,确定目标图像中的视觉信标区域时,可以提取目标图像的图像特征,将图像特征中与第一目标图像特征相匹配的特征所对应的图像区域确定为目标图像中的视觉信标区域。
另一种实施方式中,可以预先设定视觉信标区域的边界线形状,然后提取目标图像的边缘图像特征,根据边缘图像特征识别目标图像中的视觉信标区域。例如,可以为视觉信标预设形状为矩形的边界线,然后选择由连续的边缘特征点组成的矩形所围成的区域,作为视觉信标区域。
步骤S102:确定视觉信标区域中的对象标识子区域。
首先对视觉信标进行介绍,本申请实施例提供的方案中,视觉信标包含两种子区域,一种是特征子区域,一种是对象标识子区域。其中,特征子区域由至少一种特定图形构成,用于提供特征点或用于确定特征子区域与对象标识子区域间的相对位置关系,针对特征子区域的详细介绍参见后续步骤S103,这里暂不详述;对象标识子区域携带有视觉信标所张贴的对象的标识信息,用于标识出视觉信标所张贴的对象,后续将进行详细介绍。
需要说明的是,本申请实施例提供的方案中并不对视觉信标中包含的特征子区域和对象标识子区域的数量进行限定,但是不管包含多少个特征子区域和对象标识子区域,特征子区域和对象标识子区域之 间是不重合的。
例如,视觉信标区域可以包含一个对象标识子区域和一个特征子区域。参见图2,图2为本申请实施例提供的第一种视觉信标的示意图。由图可以看出,该视觉信标包含了一个对象标识子区域和一个特征子区域,且对象标识子区域和特征子区域不重合。
可以理解的是,图2仅为本申请实施例提供的多种视觉信标中的一种,本申请实施例不限定视觉信标中各子区域的具体形式及各子区域的布局。视觉信标中子区域布局的其他实施方式详见后续实施例,这里暂不详述。
接下来对对象标识子区域进行详细介绍。
对象标识子区域的具体形式可以为携带标识信息的DM(Data Matrix,数据矩阵)码、QR(Quick Response,快速反应)码等二维码,也可以是携带标识信息的条形码或ARUCO码,还可以是基于其他编码规则生成的、携带有标识信息的图案。其中,上述标识信息可以为视觉信标所张贴对象的编号,例如,货架或机台的编号;还可以根据实际场景,进一步包含视觉信标所张贴对象的其他信息,例如,货架的容量、货架所存放的货物类型等。另外,本申请实施例不限定对象标识子区域的大小、颜色、形状等。
具体的,可以通过以下方式确定视觉信标区域中的对象标识子区域。
一种实施方式中,可以预先设定对象标识子区域的目标颜色参数,基于上述目标颜色参数,从视觉信标区域中确定对象标识子区域。具体的,识别视觉信标区域中颜色参数为目标颜色参数的区域,并将识别到的区域确定为目标图像中的对象标识子区域。例如,可以将对象标识子区域的目标颜色设置为与视觉信标区域中其他区域相区分的颜色。可见,这样基于对象标识子区域的目标颜色参数,能够快捷、准确的从视觉信标区域中确定对象标识子区域。
另一种实施方式中,可以预先设定对象标识子区域的第二目标图像特征,基于上述第二目标图像特征,从视觉信标区域中确定对象标识子区域。这种情况下,确定视觉信标区域中的对象标识子区域时,可以提取视觉信标区域的图像特征,将图像特征中与第二目标图像特征相匹配的特征所对应的图像区域确定为目标图像中的对象标识子区域。以对象标识子区域为携带信息的二维码为例,由于二维码的图像特征较为明显,因此可以采用上述特征匹配的方式进行对象标识子区域的确定。可见,这样基于对象标识子区域的第二目标图像特征,能够快捷、准确的从视觉信标区域中确定对象标识子区域。
再一种实施方式中,可以基于对象标识子区域在视觉信标中的预设位置信息,从视觉信标区域中确定对象标识子区域。由于视觉信标是预先设置好的,对象标识子区域在视觉信标中的位置是预先确定的,因此可以得到对象标识子区域在视觉信标中的预设位置信息。根据上述预设信息的不同,具体以下面两种情况为例说明确定对象标识子区域的方式。
第一种情况,预设位置信息可以是对象标识子区域在视觉信标中的相对位置。例如,预设位置信息可以为:对象标识子区域位于视觉信标左侧二分之一的区域,也可以为:对象标识子区域位于视觉信标中间三分之一的区域等。
第二种情况,预设位置信息可以是对象标识子区域的角点在视觉信标中的坐标。以对象标识子区域中的其中一个角点为例,若该角点在视觉信标中的坐标为(20,10),视觉信标的长和宽分别为40单位和20单位,可以计算出该坐标的横纵坐标所占视觉信标长和宽的比例均为0.5;若目标图像中视觉信标区域的长和宽分别为800像素和400像素,那么计算上述比例与视觉信标区域的长和宽之积,可以得到该角点在视觉信标区域对象的坐标为(400,200)。依此方式可以确定出对象标识子区域其余3个角点在 视觉信标区域对应的像素坐标,进而可以将角点坐标为上述像素坐标的矩形围成的区域确定为视觉信标区域中的对象标识子区域。
第三种情况,预设位置信息可以是对象标识子区域的各边相对于视觉信标边界的距离。以对象标识子区域中的其中一个边为例,若该边与视觉信标上边界的距离为5单位,视觉信标宽为20单位,可以计算出该距离与视觉信标宽的比例为0.25;若目标图像中视觉信标区域的宽为400像素,那么计算上述比例与视觉信标区域的宽之积,可以得到该边相对于视觉信标上边界的距离为400×0.25=100像素。依次方式可以确定出对象标识子区域的其余3边相对于视觉信标边界的距离,进而可以将所确定出的4边围成的区域确定为视觉信标区域中的对象标识子区域。
这样基于对象标识子区域在视觉信标中的预设位置信息,能够准确的从视觉信标区域中确定对象标识子区域。
步骤S103:若对象标识子区域表征张贴有视觉信标的对象为目标对象,则获得视觉信标区域中特征子区域内特征点在视觉信标区域中的图像位置。
上述目标对象根据机器人所要执行任务时面向的对象不同而不同。例如,机器人要将辊筒搬运至对应的机台处,实现辊筒与机台的对接,这种情况下目标对象为辊筒对应的机台;机器人要将货物运送到指定货架上,这种情况下目标对象为上述指定货架。
由于对象标识子区域携带有视觉信标所张贴的对象的标识信息,因此可以从对象标识子区域中读取上述信息。若上述信息表明视觉信标所张贴的对象为目标对象,说明视觉信标所张贴的对象为需要定位的对象,进而为了定位目标对象,可以进一步的获取视觉信标区域中特征子区域内的信息。
接下来对特征子区域进行详细介绍。
特征子区域由至少一种特定图形构成。例如,特征子区域可以是矩形、圆形、三角形或梯形等图形其中的一种组合而成,也可以是上述图形中的多种组合而成。
其中,本申请实施例不限定组成特征子区域的各特定图形的数量。
本申请实施例也不限定特征子区域中各特定图形的颜色,各特定图形的颜色可以相同,也可以不同。一种情况下,上述构成特征子区域的特定图形的颜色可以是与对象标识子区域的颜色对比明显的颜色。例如,对象标识子区域的颜色为黑白色,构成特征子区域的特定图形的颜色为红色等。
接下来对获取特征子区域内特征点图像位置的方式进行介绍。
具体的,首先可以确定出视觉信标区域中的特征子区域,然后提取出上述特征子区域中的特征点,根据提取到的特征点获取到特征点在视觉信标区域中的像素坐标,作为特征点在视觉信标区域中的图像位置。
上述特征点的含义根据构成特征子区域的特定图形确定,例如,若特征子区域由矩形构成,如图2所示,可以提取矩形的角点作为特征点;若特征子区域由带有圆心的圆形构成,可以提取圆形的圆心作为特征点;若特征子区域由矩形和圆形构成,可以提取矩形的角点和圆形的圆心作为特征点。
其中,可以通过以下方式确定视觉信标区域中的特征子区域。
一种实施方式中,可以参考前述步骤S102中确定对象标识子区域的方式,确定出视觉信标区域中的特征子区域,区别仅为将对象标识子区域替换为特征子区域,这里不再赘述。
另一实施方式中,由于已确定出视觉信标区域中的对象标识子区域,因此可以将视觉信标区域中除去对象标识子区域之外的区域确定为特征子区域。
步骤S104:基于所获得的图像位置以及图像采集设备的标定参数,对目标对象进行定位。
本步骤中,图像采集设备的标定参数可以预先标定并存储于机器人的存储器中。
上述图像采集设备的标定参数可以是图像采集设备的内参和/或外参。
其中,内参可以是图像采集设备的内参矩阵,包含了图像采集设备的焦距等信息,上述内参可以采用棋盘标定法等方式预先标定并存储。
外参可以是图像采集设备在采集目标图像时的空间位置,即图像采集设备在采集目标图像时在世界坐标系中的位置,上述外参可以由机器人基于目标图像的采集时间和机器人内部的定位装置确定,例如,目标图像A的采集时间为时间a,机器人通过定位装置可以获取到图像采集设备在时间a的空间位置。这样可以获取到上述机器人确定出的空间位置作为上述外参。
获得了特征点的图像位置、图像采集设备的标定参数后,由于特征点对应空间位置,因此,在特征点对应的图像位置和上述标定参数的基础上,可以结合特征点对应的空间位置,对目标对象进行定位。上述特征点的空间位置可以为特征点在世界坐标系中的位置。下面对确定上述特征点的空间位置的方式进行介绍。
由于视觉信标是预先设置好的,因此视觉信标中特征子区域的特征点在世界坐标系中的空间位置是已知的,上述已知的空间位置可以称为预设空间位置,例如,可以通过预先测定得知特征点在世界坐标系中的预设空间位置为P1、P2、P3、P4。
那么,只需要获取特征子区域中的各特征点与各预设空间位置之间的对应关系,就可以从预设空间位置中确定特征子区域中各特征点的空间位置。如,针对特征子区域中的特征点S1,若确定特征点S1对应的预设空间位置为P1,即可确定特征点S1对应的空间位置为P1。
具体的,可以通过以下方式从预设空间位置中获取特征点对应的空间位置。
一种实施方式中,可以按照特定顺序,从特征子区域中特征点的预设空间位置中确定各特征点对应的空间位置。一种情况下,可以按照特征点在特征子区域中由上至下、且由左至右的出现顺序,分别从上述预设空间位置中确定各特征点对应的空间位置。
例如,预设空间位置为P1-P4,特征点S1-S4在特征子区域中由上至下、且由左至右的出现顺序为:S1→S3→S2→S4,则可以分别将P1确定为S1对应的空间位置、将P2确定为S3对应的空间位置、P3确定为S2对应的空间位置P1以及将P4确定为S4对应的空间位置。
另一种实施方式中,可以先确定出特征子区域与对象标识子区域的相对位置关系,基于上述相对位置关系从预设空间位置中确定特征点对应的空间位置。具体实施方式详见图6所示实施中步骤S605-S607,这里暂不详述。
得到特征点对应的空间位置之后,具体的,基于特征点的图像位置、图像采集设备的标定参数以及特征点对应的空间位置,可以采用PNP算法进行位姿解算,确定出相机的相平面相对于视觉信标平面的旋转矩阵和平移矩阵。其中,上述旋转矩阵表征相机的相平面相对于视觉信标平面的旋转角度,上述平移矩阵表征相机的相平面相对于视觉信标平面的平移距离,由于相机的相平面可以理解为机器人所在的平面,视觉信标的平面为目标对象所在的平面,因此,上述旋转角度和平移距离可以认为是机器人相对于目标对象的旋转角度和平移距离,从而基于机器人相对于目标对象的旋转角度和平移距离,就能够确定目标对象相对于机器人的位置,即实现了目标对象的定位。
由以上可见,应用本申请实施例提供的方法进行定位时,首先从图像采集设备所采集的目标图像中识别出视觉信标区域,视觉信标区域中包含不重合的对象标识子区域和特征子区域,在识别到对象标识子区域中的信息表征张贴有视觉信标的对象为目标对象的情况下,可以进一步获取上述特征子区域中特 征点的图像位置,这样基于上述图像位置和图像采集设备的预先标定的参数,能够实现对目标对象进行定位。
另外,应用本申请实施例提供的方案进行对象定位时,不涉及射频卡,从而定位流程不再依赖于射频卡发送的无线信号,这样整个定位过程不会出现因无线信号被环境噪声信号干扰导致定位准确度遭受影响的情况,因此,能够提高对目标对象进行定位的准确度。
再者,在本申请实施例提供的方案中,在视觉信标中划分了互不重合的对象标识子区域和特征子区域,其中,对象标识子区域用于标识目标对象,特征子区域用于提供丰富的特征点来进行对目标对象的定位,这样能够保证上述两个子区域互不干扰,提高了所提取到的特征子区域中特征点位置信息的精度,进而进一步提高了基于上述特征点的位置信息对目标对象进行定位的准确度。
并且,可以看出,本申请实施例仅使用安装于机器人中的单个图像采集设备及预先张贴的视觉信标就能够实现高准确度的目标对象定位,相较于采用射频技术进行定位、基于3D深度传感器等方式进行定位,在保证了较高的定位准确度的基础上,有效的降低了定位所耗费的成本。
下面对视觉信标中子区域布局的其他实施方式进行说明。
一种实施方式中,视觉信标可以包含一个对象标识子区域和多个特征子区域,各特征子区域与对象标识子区域相邻。
其中,上述多个特征子区域可以布设于对象标识子区域的同一侧,也可以分别布设于对象标识子区域的不同侧。例如,各自特征子区域可以分别位于对象标识子区域的左侧或右侧,也可以分别位于对象标识子区域的上侧或下侧等。
一种情况下,视觉信标可以包括:一个对象标识子区域和偶数个特征子区域,对象标识子区域左右分布的特征子区域数量相同,且对象标识子区域左侧与右侧分布的特征子区域中的图形互相对称。
例如,参见图3,图3为本申请实施例提供的第二种视觉信标的示意图,可以看出,该视觉信标包含了一个对象标识子区域和2个特征子区域,2个特征子区域分别为特征子区域1和特征子区域2,对象标识子区域的左右两侧均分布1个特征子区域,且对象标识子区域左侧与右侧分布的特征子区域中的图形互相对称。在这种视觉信标子区域布局下,两个特征子区域可以提供较多的特征点,进而获取到较多的特征点的图像位置,有利于提高基于上述图像位置对目标对象的定位的准确度。
又如,参见图4,图4为本申请实施例提供的第三种视觉信标的示意图,可以看出,该视觉信标包含了一个对象标识子区域和4个特征子区域,4个特征子区域分别为:特征子区域1、特征子区域2、特征子区域3以及特征子区域4。其中,对象标识子区域的左右两侧均分布2个特征子区域,且对象标识子区域左侧与右侧分布的特征子区域中的图形互相对称。在这种视觉信标子区域布局下,4个特征子区域可以提供更多的特征点,进而获取到更多的特征点的图像位置,使得基于上述图像位置对目标对象的定位更加准确。
另一种实施方式中,视觉信标可以包括至少一个信标单元,每一信标单元中的子区域可以有以下布局形式:
1、每一信标单元包括:一个对象标识子区域和多个特征子区域,各特征子区域与对象标识子区域相邻。
2、每一信标单元包括:一个对象标识子区域和一个特征子区域。
3、每一信标单元包括:一个对象标识子区域和偶数个特征子区域,对象标识子区域左右分布的特征子区域数量相同,且对象标识子区域左侧与右侧分布的特征子区域中的图形互相对称。
其中,上述信标单元中子区域布局形式已在前述介绍视觉信标的子区域布局方式时说明,区别仅为将视觉信标替换为信标单元,这里不再赘述。
例如,参见图5,图5为本申请实施例提供的第四种视觉信标的示意图,由图5可以看出,视觉信标包括两个信标单元:信标单元A和信标单元B,每一信标单元都包含特征子区域和对象标识子区域:信标单元A包含特征子区域A1和A2以及对象标识子区域A;信标单元B包含特征子区域B1和B2以及对象标识子区域B。
在这种视觉信标子区域布局下,视觉信标中的每一信标单元都包含了对象标识子区域和特征子区域,这样能够保证图像采集区域只要采集到了视觉信标中的任一信标单元,就能够根据上述信标单元内对象标识子区域和特征子区域包含的信息来实现目标对象的定位,降低了图像采集设备采集到包含视觉信标的目标图像的难度,提高了目标对象定位的效率。
需要说明的是,视觉信标中包含的各信标单元可以完全相同,也可以是对象标识子区域相同,而特征子区域的数量不相同,还可以是对象标识子区域相同,而特征子区域与对象标识子区域的布局方式不相同等,本申请实施例对此不作限定。
本申请的一个实施例中,上述视觉信标中包括的信标单元的数量可以由图像采集设备的视角范围确定。例如,图像采集设备的采集范围较大,可以如图5所示,设置视觉信标中包括多个信标单元;图像采集设备的采集范围较小,可以如图2、图3或图4所示,设置视觉信标中仅包括一个信标单元。
如,可以预先设定图像采集设备的视角范围与视觉信标中包括的信标单元的数量之间的对应关系,然后按照上述对应关系设置视觉信标中包括的信标单元的数量。
基于图像采集设备的视角范围选取视觉信标中包括的信标单元数量,这样能够充分利用图像采集设备的视角范围,有利于图像采集设备采集到包含视觉信标区域的目标图像,提高了目标对象定位的效率。另外,这样可以使得图像采集设备的视角范围内包含尽可能多的信标单元,从而可以采集到包含多个信标单元的目标图像,进而可以依据目标图像中多个信标单元中特征子区域提供的大量特征点对目标对象进行定位,进一步提高了对目标对象进行定位时的准确度。
本申请的另一个实施例中,上述信标单元包括的特征子区域的数量可以由图像采集设备的视角范围确定。例如,若图像采集设备的采集范围较大,可以设置信标单元中包括多个特征子区域。如前述图3所示,图中的视觉信标包括一个信标单元,信标单元中包括2个特征子区域,又如图4所示,图中的视觉信标包括一个信标单元,信标单元中包括4个特征子区域;若图像采集设备的采集范围较小,可以设置信标单元中仅包括1个特征子区域。如前述图2所示,图中的视觉信标包括一个信标单元,信标单元中仅包括1个特征子区域。
如,可以预先设定图像采集设备的视角范围与信标单元中包括的特征子区域的数量之间的对应关系,然后按照上述对应关系设置信标单元中包括的特征子区域的数量。
基于图像采集设备的视角范围选取信标单元包括的特征子区域数量,这样能够充分利用图像采集设备的视角范围,有利于图像采集设备采集到包含视觉信标区域的目标图像,提高了目标对象定位的效率。另外,这样可以使得图像采集设备的视角范围内包含尽可能多的特征子区域,从而可以采集到包含多个特征子区域的目标图像,进而可以依据目标图像中多个特征子区域提供的大量特征点对目标对象进行定位,进一步提高了对目标对象进行定位时的准确度。
当然,一种实现方式中,上述视觉信标中包括的信标单元的数量以及信标单元中包括的特征子区域的数量可以均由图像采集设备的视角范围确定。
在图1所示实施例的基础上,特征子区域可以包括:第一图形和第二图形,上述第一图形用于提供预设的特征点,上述第二图形用于确定特征子区域与对象标识子区域间的相对位置关系,这种情况下,可以获取第一图形中特征点的图像位置,并基于特征子区域与对象标识子区域间的相对位置关系确定上述特征点对应的空间位置。鉴于上述情况,本申请实施例提供了另一种定位方法。
参见图6,图6为本申请实施例提供的第二种定位方法的流程示意图,上述方法包括以下步骤S601-S608。
步骤S601:识别图像采集设备所采集的目标图像中的视觉信标区域。
步骤S602:确定视觉信标区域中的对象标识子区域。
上述步骤S601和S602与前述步骤S101和S102相同,这里不再赘述。
步骤S603:若对象标识子区域表征张贴有视觉信标的对象为目标对象,则识别视觉信标区域中各特征子区域内由第一图形提供的预设的特征点。
本步骤中,特征子区域中包括第一图形和第二图形,第一图形的作用为提供特征点,第二图形的作用为标识各特征子区域与对象标识子区域间的相对位置关系。其中,第二图形的详细介绍详见后续步骤S605-S606,这里暂不详述。
第一图形的作用是提供特征点,因此,可以采用任何便于提取特征点的图形作为第一图形,本申请实施例不限定第二图形的具体形状和包含的种类。第一图形可以仅包含一种图形,也可以包含多种图形。例如,第一图形可以是矩形,这种情况下,可以提取矩形的角点作为特征点;第一图形可以是带有圆心的圆形,这种情况下,可以提取圆形的圆心作为特征点;第一图形也可以包含矩形和圆形,这种情况下,可以提取矩形的角点和圆形的圆心作为特征点。另外,第一图形的颜色可以是任何颜色,本申请不限定第一图形的颜色。
本申请的一个实施例中,还可以对目标图像进行预处理,基于处理后的图像提取上述图像中的第一图形提供的特征点。具体实施方式详见后续实施例,这里暂不详述。
本申请的另一个实施例中,提取到特征点之后,还可以基于各特征点和第二图形的预设分布关系验证识别到的特征点。具体实施方式详见后续实施例,这里暂不详述。
步骤S604:获得所识别到特征点在视觉信标区域中的图像位置。
上述图像位置可以是特征点在视觉信标区域中的像素坐标。
具体的,识别出第一图形提供的各特征点之后,可以确定出上述各特征点在视觉信标区域中的像素坐标作为上述图像位置。
步骤S605:识别视觉信标区域中各特征子区域内的第二图形。
上述第二图形用于标识各特征子区域与对象标识子区域间的相对位置关系。本申请实施例不限定第二图形的具体形状和包含的种类。上述第二图形可以仅包含一种图形,也可以包含多种图形。例如,第二图形可以是圆形,可以是三角形,也可以包含圆形和三角形等。另外,第二图形的颜色可以是任何颜色,本申请实施例不限定第二图形的颜色。
具体的,可以提取目标图像的视觉信标区域的边缘图像特征,然后根据边缘图像特征识别目标图像的视觉信标区域中各特征子区域内的第二图形。例如,特征子区域A的第二图形为圆形,这样可以选择由连续的边缘特征点组成的圆形,即为特征子区域A中的第二图形。识别第二图形的其他方式可以在前述图1所示实施例步骤S101中介绍的识别视觉信标区域的方式的基础上得到,这里不再赘述。
本申请的一个实施例中,在识别上述第二图形之前,还可以先对目标图像进行预处理,基于处理后的图像提取第二图形。具体实施方式详见后续实施例,这里暂不详述。
步骤S606:根据各特征子区域内第二图形的图形信息,确定各特征子区域与对象标识子区域间的相对位置关系。
可以根据各特征子区域内第二图形不同的图形信息确定上述相对位置关系。其中,上述图形信息可以是第二图形的颜色、形状、数量、布局、大小等。
具体的,不同特征子区域中的第二图形可以存在以下不同中的至少一种:
第二图形的颜色不同。这样可以根据各特征子区域中第二图形的颜色确定上述相对位置关系。例如,特征子区域A的第二图形的颜色为红色,表征特征子区域A在对象标识子区域的左侧;特征子区域B的第二图形的颜色为蓝色,表征特征子区域B在对象标识子区域的右侧等。
第二图形的布局不同。这样可以根据各特征子区域中第二图形的布局确定上述相对位置关系。例如,特征子区域C的第二图形位于特征子区域C的上半部分,表征特征子区域C在对象标识子区域的左侧;特征子区域D的第二图形位于特征子区域C的下半部分,表征特征子区域D在对象标识子区域的左侧等。又如,特征子区域E中的第二图形和特征子区域F中的第二图形互为镜像,可以根据上述第二图形的不同镜像特征确定上述相对位置关系。
第二图形的数量不同。这样可以根据各特征子区域中第二图形的数量确定上述相对位置关系。例如,特征子区域G包含1个第二图形,表征特征子区域G在对象标识子区域的上侧,特征子区域H包含两个第二图形,表征特征子区域H在对象标识子区域的下侧等。
第二图形的形状不同。这样可以根据各特征子区域中第二图形的形状确定上述相对位置关系。例如,特征子区域I中的第二图形为圆形,表征特征子区域I在对象标识子区域的上侧,特征子区域J中的第二图形为三角形,表征特征子区域J在对象标识子区域的下侧等。
第二图形的大小不同。这样可以根据各特征子区域中第二图形的大小确定上述相对位置关系。例如,特征子区域K中的第二图形面积较大,表征特征子区域K在对象标识子区域的上侧,特征子区域L中的第二图形面积较小,表征特征子区域L在对象标识子区域的下侧等。
上述各特征子区域中第二图形的不同特征仅为举例,本申请实施例不限定使用第二图形的特征确定特征子区域与对象标识子区域的相对位置关系的方式。
通过设置第二图形不同的图形信息,便于基于各特征子区域的第二图形的图形信息快速确定出各特征子区域相对于对象标识子区域的位置。
步骤S607:根据相对位置关系,从各特征点的预设空间位置中,选择各特征子区域内的特征点对应的空间位置。
具体的,由于确定了各特征子区域与对象标识子区域的相对位置关系,那么针对各特征子区域,可以根据上述相对位置关系,从预设空间位置中选择出位置与上述相对关系的相同的特征点的目标预设空间位置,再进一步的从上述目标预设空间位置中选择各特征子区域内的特征点对应的空间位置。
例如,特征子区域A位于对象标识子区域的左侧,那么可以先选取位于对象标识子区域左侧的特征点的目标预设空间位置,这样相当于缩小了选择范围,然后再从目标预设空间位置中选择各特征子区域内的特征点对应的空间位置。
其中,进一步的从目标预设空间位置中选择各特征子区域内的特征点对应的空间位置的方式,可以参见前述图1所示实施例中步骤S104给出的方式,按照特定顺序,从目标预设空间位置中确定各特征 点对应的空间位置等,这里不再赘述。
本申请的一个实施例中,可以以该特征子区域中的第二图形的几何中心为原点建立直角坐标系,然后获取各特征点与原点间的距离,以及各特征点到原点间的连线与水平线的夹角,基于上述距离与夹角选择各特征子区域内的特征点对应的目标预设空间位置。例如,与原点间的距离为10单位、上述夹角为30度的特征点对应目标预设空间位置a,与原点间的距离为15单位、上述夹角为50度的特征点对应目标预设空间位置b等。
步骤S608:根据各特征子区域内的特征点对应的图像位置和空间位置、以及图像采集设备的标定参数,对目标对象进行定位。
上述步骤S608的实施方式已经在前述步骤S104中说明,这里不再赘述。
由以上可见,通过第一图形可以快速获取到特征点的图像位置,基于第二图形可以确定特征子区域和对象标识子区域间的相对位置关系,进而可以便于基于上述相对位置关系,更加快速的从预设空间位置中确定出各特征点对应的空间位置。
下面通过步骤A-步骤C对前述提及的提取第一图形提供的特征点、识别第二图形以及验证特征点的另一种方式进行说明。
步骤A:对目标图像进行预处理。
一种实施方式中,上述预处理可以是对目标图像进行梯度检测,基于检测结果将目标图像转化为梯度图像。梯度图像可以表征图像各处的边缘、纹理等的变化信息。
另一种实施方式中,上述目标图像中可能会存在噪声,为了去除上述噪声,上述预处理可以是对上述图像进行图像滤波处理,例如对图像进行高斯滤波处理、对图像进行均值滤波处理等。
再一种实施方式中,上述预处理可以是对目标图像进行形态学处理,以使得目标图像中的第一图形和第二图形更加明显,便于后续基于连通域等信息从目标图像中识别出第一图形和第二图形。上述形态学处理可以是图像腐蚀、图像膨胀等。
步骤B:基于处理后的图像,提取上述图像中的第一图形和第二图形。
具体的,由于目标图像进行了预处理,因此采用前述的提取边缘特征的方式可以更加快速且准确的从处理后的图像中提取第一图形和第二图形。
步骤C:提取第一图形中的特征点,并基于第二图形和各特征点间的预设分布关系,验证提取到的特征点。
具体的,可以对提取到的特征点与提取到的第二图形间的分布关系与预设分布关系进行对比,将分布关系不符合预设分布关系的特征点确定为无效特征点。例如,各特征点被预先设置在第二图形的上方,若提取到的特征点中有分布于第二图形下方的特征点,那么可以认为上述特征点为误识别的特征点,因此可以将上述特征点确定为无效特征点。这样能够进一步的提高提取到的特征点的准确度。
与上述定位方法相对应的,本申请实施例还提供了一种定位装置。
参见图7,图7为本申请实施例提供的一种定位装置的结构示意图,上述装置包括以下模块701-704。
视觉信标区域识别模块701,用于识别图像采集设备所采集的目标图像中的视觉信标区域;
对象标识子区域确定模块702,用于确定所述视觉信标区域中的对象标识子区域;
位置获得模块703,用于若所述对象标识子区域表征张贴有视觉信标的对象为目标对象,则获得所述视觉信标区域中特征子区域内特征点在所述视觉信标区域中的图像位置,其中,所述特征子区域与所 述对象标识子区域不重合;
目标对象定位模块704,用于基于所获得的图像位置以及所述图像采集设备的标定参数,对所述目标对象进行定位。
由以上可见,应用本申请实施例提供的方法进行定位时,首先从图像采集设备所采集的目标图像中识别出视觉信标区域,视觉信标区域中包含不重合的对象标识子区域和特征子区域,在识别到对象标识子区域中的信息表征张贴有视觉信标的对象为目标对象的情况下,可以进一步获取上述特征子区域中特征点的图像位置,这样基于上述图像位置和图像采集设备的预先标定的参数,能够实现对目标对象进行定位。
另外,应用本申请实施例提供的方案进行对象定位时,不涉及射频卡,从而定位流程不再依赖于射频卡发送的无线信号,这样整个定位过程不会出现因无线信号被环境噪声信号干扰导致定位准确度遭受影响的情况,因此,能够提高对目标对象进行定位的准确度。
再者,在本申请实施例提供的方案中,在视觉信标中划分了互不重合的对象标识子区域和特征子区域,其中,对象标识子区域用于标识目标对象,特征子区域用于提供丰富的特征点来进行对目标对象的定位,这样能够保证上述两个子区域互不干扰,提高了所提取到的特征子区域中特征点位置信息的精度,进而进一步提高了基于上述特征点的位置信息对目标对象进行定位的准确度。
并且,可以看出,本申请实施例仅使用安装于机器人中的单个图像采集设备及预先张贴的视觉信标就能够实现高准确度的目标对象定位,相较于采用射频技术进行定位、基于3D深度传感器等方式进行定位,在保证了较高的定位准确度的基础上,有效的降低了定位所耗费的成本。
本申请的一个实施例中,所述视觉信标包括至少一个信标单元,每一信标单元包括:一个对象标识子区域和多个特征子区域,各特征子区域与所述对象标识子区域相邻。
本申请的一个实施例中,所述视觉信标包括至少一个信标单元,每一信标单元包括:一个对象标识子区域和一个特征子区域。
在上述视觉信标子区域布局下,视觉信标中的每一信标单元都包含了对象标识子区域和特征子区域,这样能够保证图像采集区域只要采集到了视觉信标中的任一信标单元,就能够根据上述信标单元内对象标识子区域和特征子区域包含的信息来实现目标对象的定位,降低了图像采集设备采集到包含视觉信标的目标图像的难度,提高了目标对象定位的效率。
本申请的一个实施例中,每一信标单元包括:一个对象标识子区域和偶数个特征子区域,所述对象标识子区域左右分布的特征子区域数量相同,且所述对象标识子区域左侧与右侧分布的特征子区域中的图形互相对称。
在这种视觉信标子区域布局下,对称分布的偶数个特征子区域可以提供较多的特征点,进而获取到较多的特征点的图像位置,有利于提高基于上述图像位置对目标对象的定位的准确度。
本申请的一个实施例中,所述特征子区域包括:第一图形和第二图形,所述第一图形用于提供预设的特征点,所述第二图形用于确定所述特征子区域与所述对象标识子区域间的相对位置关系;
所述位置获得模块703,具体用于识别所述视觉信标区域中各特征子区域内由所述第一图形提供的预设的特征点;获得所识别到特征点在所述视觉信标区域中的图像位置;
所述目标对象定位模块704,具体用于识别所述视觉信标区域中各特征子区域内的所述第二图形;根据各特征子区域内所述第二图形的图形信息,确定各特征子区域与所述对象标识子区域间的相对位置关系;根据所述相对位置关系,从各特征点的预设空间位置中,选择各特征子区域内的特征点对应的空 间位置;根据各特征子区域内的特征点对应的图像位置和空间位置、以及所述图像采集设备的标定参数,对所述目标对象进行定位。
由以上可见,通过第一图形可以快速获取到特征点的图像位置,基于第二图形可以确定特征子区域和对象标识子区域间的相对位置关系,进而可以便于基于上述相对位置关系,更加快速的从预设空间位置中确定出各特征点对应的空间位置。
本申请的一个实施例中,同一信标单元中不同特征子区域中的所述第二图形存在以下不同中的至少一种:
所述第二图形的颜色不同;所述第二图形的布局不同;所述第二图形的数量不同;所述第二图形的形状不同;所述第二图形的大小不同。
通过设置第二图形不同的图形信息,便于基于各特征子区域的第二图形的图形信息快速确定出各特征子区域相对于对象标识子区域的位置。
本申请的一个实施例中,所述视觉信标中包括的信标单元的数量由所述图像采集设备的视角范围确定;
这样基于图像采集设备的视角范围选取视觉信标中包括的信标单元数量,能够充分利用图像采集设备的视角范围,有利于图像采集设备采集到包含视觉信标区域的目标图像,提高了目标对象定位的效率。另外,这样可以使得图像采集设备的视角范围内包含尽可能多的信标单元,从而可以采集到包含多个信标单元的目标图像,进而可以依据目标图像中多个信标单元中特征子区域提供的大量特征点对目标对象进行定位,进一步提高了对目标对象进行定位时的准确度。
本申请的一个实施例中,所述信标单元包括的特征子区域的数量由所述图像采集设备的视角范围确定。
这样基于图像采集设备的视角范围选取信标单元包括的特征子区域数量,能够充分利用图像采集设备的视角范围,有利于图像采集设备采集到包含视觉信标区域的目标图像,提高了目标对象定位的效率。另外,这样可以使得图像采集设备的视角范围内包含尽可能多的特征子区域,从而可以采集到包含多个特征子区域的目标图像,进而可以依据目标图像中多个特征子区域提供的大量特征点对目标对象进行定位,进一步提高了对目标对象进行定位时的准确度。
本申请的一个实施例中,所述对象标识子区域确定模块702,具体用于基于对象标识子区域在视觉信标区域中的预设位置信息,从所述视觉信标区域中确定所述对象标识子区域。
这样基于对象标识子区域在视觉信标中的预设位置信息,能够准确的从视觉信标区域中确定对象标识子区域。
本申请的一个实施例中,所述对象标识子区域确定模块702,具体用于基于预先设定的对象标识子区域的目标颜色参数,从所述视觉信标区域中确定所述对象标识子区域。
可见,这样基于对象标识子区域的目标颜色参数,能够快捷、准确的从视觉信标区域中确定对象标识子区域。
本申请的一个实施例中,所述对象标识子区域确定模块702,具体用于基于预先设定的对象标识子区域的第二目标图像特征,从所述视觉信标区域中确定所述对象标识子区域。
可见,这样基于对象标识子区域的第二目标图像特征,能够快捷、准确的从视觉信标区域中确定对象标识子区域。
本申请实施例还提供了一种电子设备,如图8所示,包括处理器801、通信接口802、存储器803 和通信总线804,其中,处理器801,通信接口802,存储器803通过通信总线804完成相互间的通信,
存储器803,用于存放计算机程序;
处理器801,用于执行存储器803上所存放的程序时,实现本申请实施例提供的定位方法。
上述电子设备提到的通信总线可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。该通信总线可以分为地址总线、数据总线、控制总线等。为便于表示,图中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
通信接口用于上述电子设备与其他设备之间的通信。
存储器可以包括随机存取存储器(Random Access Memory,RAM),也可以包括非易失性存储器(Non-Volatile Memory,NVM),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。
上述的处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
在本申请提供的又一实施例中,还提供了一种计算机可读存储介质,该计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现本申请实施例提供的定位方法。
在本申请提供的又一实施例中,还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行本申请实施例提供的定位方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者其他介质(例如固态硬盘Solid State Disk(SSD))等。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置、电子设备和存储介质实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述仅为本申请的较佳实施例,并非用于限定本申请的保护范围。凡在本申请的精神和原则之内所作的任何修改、等同替换、改进等,均包含在本申请的保护范围内。

Claims (11)

  1. 一种定位方法,其特征在于,所述方法包括:
    识别图像采集设备所采集的目标图像中的视觉信标区域;
    确定所述视觉信标区域中的对象标识子区域;
    若所述对象标识子区域表征张贴有视觉信标的对象为目标对象,则获得所述视觉信标区域中特征子区域内特征点在所述视觉信标区域中的图像位置,其中,所述特征子区域与所述对象标识子区域不重合;
    基于所获得的图像位置以及所述图像采集设备的标定参数,对所述目标对象进行定位。
  2. 根据权利要求1所述的方法,其特征在于,所述视觉信标包括至少一个信标单元,
    每一信标单元包括:一个对象标识子区域和多个特征子区域,各特征子区域与所述对象标识子区域相邻;
    每一信标单元包括:一个对象标识子区域和一个特征子区域。
  3. 根据权利要求2所述的方法,其特征在于,
    每一信标单元包括:一个对象标识子区域和偶数个特征子区域,所述对象标识子区域左右分布的特征子区域数量相同,且所述对象标识子区域左侧与右侧分布的特征子区域中的图形互相对称。
  4. 根据权利要求2所述的方法,其特征在于,所述特征子区域包括:第一图形和第二图形,所述第一图形用于提供预设的特征点,所述第二图形用于确定所述特征子区域与所述对象标识子区域间的相对位置关系;
    所述获得所述视觉信标区域中特征子区域内特征点在所述视觉信标区域中的图像位置,包括:
    识别所述视觉信标区域中各特征子区域内由所述第一图形提供的预设的特征点;
    获得所识别到特征点在所述视觉信标区域中的图像位置;
    所述基于所获得图像位置以及所述图像采集设备的标定参数,对所述目标对象进行定位,包括:
    识别所述视觉信标区域中各特征子区域内的所述第二图形;
    根据各特征子区域内所述第二图形的图形信息,确定各特征子区域与所述对象标识子区域间的相对位置关系;
    根据所述相对位置关系,从各特征点的预设空间位置中,选择各特征子区域内的特征点对应的空间位置;
    根据各特征子区域内的特征点对应的图像位置和空间位置、以及所述图像采集设备的标定参数,对所述目标对象进行定位。
  5. 根据权利要求4所述的方法,其特征在于,同一信标单元中不同特征子区域中的所述第二图形存在以下不同中的至少一种:
    所述第二图形的颜色不同;
    所述第二图形的布局不同;
    所述第二图形的数量不同;
    所述第二图形的形状不同;
    所述第二图形的大小不同。
  6. 根据权利要求2-5中任一项所述的方法,其特征在于,
    所述视觉信标中包括的信标单元的数量由所述图像采集设备的视角范围确定;
    和/或
    所述信标单元包括的特征子区域的数量由所述图像采集设备的视角范围确定。
  7. 根据权利要求1-5中任一项所述的方法,其特征在于,所述确定所述视觉信标区域中的对象标识子区域,包括:
    基于对象标识子区域在视觉信标中的预设位置信息,从所述视觉信标区域中确定所述对象标识子区域;
    基于预先设定的对象标识子区域的目标颜色参数,从所述视觉信标区域中确定所述对象标识子区域;
    基于预先设定的对象标识子区域的第二目标图像特征,从所述视觉信标区域中确定所述对象标识子区域。
  8. 一种定位装置,其特征在于,所述装置包括:
    视觉信标区域识别模块,用于识别图像采集设备所采集的目标图像中的视觉信标区域;
    对象标识子区域确定模块,用于确定所述视觉信标区域中的对象标识子区域;
    位置获得模块,用于若所述对象标识子区域表征张贴有视觉信标的对象为目标对象,则获得所述视觉信标区域中特征子区域内特征点在所述视觉信标区域中的图像位置,其中,所述特征子区域与所述对象标识子区域不重合;
    目标对象定位模块,用于基于所获得的图像位置以及所述图像采集设备的标定参数,对所述目标对象进行定位。
  9. 根据权利要求8所述的装置,其特征在于,所述视觉信标包括至少一个信标单元,
    每一信标单元包括:一个对象标识子区域和多个特征子区域,各特征子区域与所述对象标识子区域相邻;或每一信标单元包括:一个对象标识子区域和一个特征子区域;
    每一信标单元包括:一个对象标识子区域和偶数个特征子区域,所述对象标识子区域左右分布的特征子区域数量相同,且所述对象标识子区域左侧与右侧分布的特征子区域中的图形互相对称;
    所述特征子区域包括:第一图形和第二图形,所述第一图形用于提供预设的特征点,所述第二图形用于确定所述特征子区域与所述对象标识子区域间的相对位置关系;所述位置获得模块,具体用于识别所述视觉信标区域中各特征子区域内由所述第一图形提供的预设的特征点;获得所识别到特征点在所述视觉信标区域中的图像位置;所述目标对象定位模块,具体用于识别所述视觉信标区域中各特征子区域内的所述第二图形;根据各特征子区域内所述第二图形的图形信息,确定各特征子区域与所述对象标识子区域间的相对位置关系;根据所述相对位置关系,从各特征点的预设空间位置中,选择各特征子区域内的特征点对应的空间位置;根据各特征子区域内的特征点对应的图像位置和空间位置、以及所述图像采集设备的标定参数,对所述目标对象进行定位;
    同一信标单元中不同特征子区域中的所述第二图形存在以下不同中的至少一种:
    所述第二图形的颜色不同;所述第二图形的布局不同;所述第二图形的数量不同;所述第二图形的形状不同;所述第二图形的大小不同;
    所述视觉信标中包括的信标单元的数量由所述图像采集设备的视角范围确定;和/或所述信标单元包括的特征子区域的数量由所述图像采集设备的视角范围确定;
    所述对象标识子区域确定模块,具体用于基于对象标识子区域在视觉信标中的预设位置信息,从所述视觉信标区域中确定所述对象标识子区域;或基于预先设定的对象标识子区域的目标颜色参数,从所述视觉信标区域中确定所述对象标识子区域;或基于预先设定的对象标识子区域的第二目标图像特征,从所述视觉信标区域中确定所述对象标识子区域。
  10. 一种电子设备,其特征在于,包括处理器、通信接口、存储器和通信总线,其中,处理器,通信接口,存储器通过通信总线完成相互间的通信;
    存储器,用于存放计算机程序;
    处理器,用于执行存储器上所存放的程序时,实现权利要求1-7任一所述的方法步骤。
  11. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1-7任一所述的方法步骤。
PCT/CN2023/106844 2022-07-11 2023-07-11 一种定位方法及装置 WO2024012463A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210810431.9A CN115187769A (zh) 2022-07-11 2022-07-11 一种定位方法及装置
CN202210810431.9 2022-07-11

Publications (1)

Publication Number Publication Date
WO2024012463A1 true WO2024012463A1 (zh) 2024-01-18

Family

ID=83516989

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/106844 WO2024012463A1 (zh) 2022-07-11 2023-07-11 一种定位方法及装置

Country Status (2)

Country Link
CN (1) CN115187769A (zh)
WO (1) WO2024012463A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187769A (zh) * 2022-07-11 2022-10-14 杭州海康机器人技术有限公司 一种定位方法及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084243A (zh) * 2019-03-13 2019-08-02 南京理工大学 一种基于二维码和单目相机的档案识别与定位方法
US10583560B1 (en) * 2019-04-03 2020-03-10 Mujin, Inc. Robotic system with object identification and handling mechanism and method of operation thereof
CN111191557A (zh) * 2019-12-25 2020-05-22 深圳市优必选科技股份有限公司 一种标志识别定位方法、标志识别定位装置及智能设备
CN111582123A (zh) * 2020-04-29 2020-08-25 华南理工大学 一种基于信标识别与视觉slam的agv定位方法
CN113936010A (zh) * 2021-10-15 2022-01-14 北京极智嘉科技股份有限公司 货架定位方法、装置、货架搬运设备及存储介质
CN115187769A (zh) * 2022-07-11 2022-10-14 杭州海康机器人技术有限公司 一种定位方法及装置

Also Published As

Publication number Publication date
CN115187769A (zh) 2022-10-14

Similar Documents

Publication Publication Date Title
Romero-Ramirez et al. Speeded up detection of squared fiducial markers
CN109584307B (zh) 改进摄像机固有参数校准的系统和方法
JP5699788B2 (ja) スクリーン領域検知方法及びシステム
JP6417702B2 (ja) 画像処理装置、画像処理方法および画像処理プログラム
CN110443154B (zh) 关键点的三维坐标定位方法、装置、电子设备和存储介质
CN109784250B (zh) 自动引导小车的定位方法和装置
CN112581629A (zh) 增强现实显示方法、装置、电子设备及存储介质
CN110189322B (zh) 平整度检测方法、装置、设备、存储介质及系统
CN109479082B (zh) 图象处理方法及装置
KR101638173B1 (ko) 캘리브레이션을 위한 자동 피처 탐지용 기구물 및 그 탐지 방법
JP7049983B2 (ja) 物体認識装置および物体認識方法
JP6899189B2 (ja) ビジョンシステムで画像内のプローブを効率的に採点するためのシステム及び方法
CN108090486B (zh) 一种台球比赛中的图像处理方法及装置
WO2024012463A1 (zh) 一种定位方法及装置
Xie et al. Geometry-based populated chessboard recognition
Herling et al. Markerless tracking for augmented reality
CN113240656B (zh) 视觉定位方法及相关装置、设备
CN110163914B (zh) 基于视觉的定位
WO2021104203A1 (en) Associating device coordinate systems in a multi-person augmented reality system
CN112102391A (zh) 测量方法及装置、电子设备及存储介质
CN117253022A (zh) 一种对象识别方法、装置及查验设备
JP2014102805A (ja) 情報処理装置、情報処理方法及びプログラム
CN112785651A (zh) 用于确定相对位姿参数的方法和装置
Xing et al. An improved algorithm on image stitching based on SIFT features
CN114359322A (zh) 图像校正、拼接方法及相关装置、设备、系统和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23838954

Country of ref document: EP

Kind code of ref document: A1