CN111191557B - Mark identification positioning method, mark identification positioning device and intelligent equipment - Google Patents

Mark identification positioning method, mark identification positioning device and intelligent equipment

Info

Publication number
CN111191557B
CN111191557B
Authority
CN
China
Prior art keywords
mark
target
screened
positioning
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911354811.0A
Other languages
Chinese (zh)
Other versions
CN111191557A (en)
Inventor
郭奎
庞建新
熊友军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Youbijie Education Technology Co ltd
Original Assignee
Shenzhen Ubtech Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ubtech Technology Co ltd filed Critical Shenzhen Ubtech Technology Co ltd
Priority to CN201911354811.0A
Publication of CN111191557A
Application granted
Publication of CN111191557B
Active legal status (current)
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a mark identification positioning method, a mark identification positioning device, an intelligent device and a computer readable storage medium, wherein the method comprises the following steps: collecting a real-time environment image; in the real-time environment image, screening according to a preset boundary outer frame and a preset anchor point shape to obtain more than one candidate mark; identifying a mark ID of each candidate mark; and determining a target mark among the candidate marks according to the mark IDs of the candidate marks, and positioning according to the target mark. By the scheme of the application, the intelligent device can visually locate the special mark in real time and quickly determine its own current position.

Description

Mark identification positioning method, mark identification positioning device and intelligent equipment
Technical Field
The application belongs to the technical field of visual positioning, and particularly relates to a marker identification positioning method, a marker identification positioning device, intelligent equipment and a computer readable storage medium.
Background
At present, marks with a visual positioning function mainly comprise two-dimensional codes and ArUco codes. However, in the robot field, most robots use embedded hardware platforms, and the real-time detection and positioning of two-dimensional codes or ArUco codes demands a computing power that such platforms cannot easily provide.
Disclosure of Invention
In view of the above, the present application provides a mark identification positioning method, a mark identification positioning device, an intelligent device, and a computer readable storage medium, which enable the intelligent device to visually locate a special mark in real time and quickly determine its own current position.
The first aspect of the application provides a marker identification positioning method, which comprises the following steps:
collecting a real-time environment image;
in the real-time environment image, screening according to a preset boundary outer frame and a preset anchor point shape to obtain more than one candidate mark;
identifying a mark ID of each candidate mark;
determining a target mark in the candidate marks according to mark IDs of the candidate marks;
and positioning according to the target mark.
A second aspect of the present application provides a marker recognition positioning apparatus, comprising:
the acquisition unit is used for acquiring real-time environment images;
the screening unit is used for screening and obtaining more than one candidate mark according to the preset boundary outer frame and the preset anchor point shape in the real-time environment image;
an identification unit for identifying a mark ID of each candidate mark;
a determining unit configured to determine a target mark among the candidate marks according to the mark IDs of the respective candidate marks;
and the positioning unit is used for positioning according to the target mark.
A third aspect of the present application provides a smart device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of the first aspect as described above when executing the computer program.
A fourth aspect of the application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect above.
A fifth aspect of the application provides a computer program product comprising a computer program which, when executed by one or more processors, implements the steps of the method of the first aspect described above.
From the above, the present application proposes a new special mark. To identify the special mark, a real-time environment image is first collected; more than one candidate mark is then obtained in the real-time environment image by screening according to a preset boundary outer frame and a preset anchor point shape; the mark ID of each candidate mark is then identified; finally, a target mark is determined among the candidate marks according to the mark IDs of the candidate marks, and positioning is performed according to the target mark. The identification process of the special mark no longer places a high computing-power requirement on an embedded hardware platform, so real-time visual positioning of the special mark by the intelligent device can be realized without increased cost, and the current position of the intelligent device can be determined quickly.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flowchart of an implementation of a mark identification positioning method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a special mark provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a specific implementation of step 105 in the mark identification positioning method according to an embodiment of the present application;
FIG. 4 is a block diagram of a mark identification positioning device according to an embodiment of the present application;
FIG. 5 is a block diagram of a positioning unit in the mark identification positioning device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an intelligent device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The mark identification positioning method and the mark identification positioning device of the application can be applied to intelligent devices capable of controlling their own movement, such as robots, unmanned vehicles and indoor unmanned aerial vehicles. In order to illustrate the technical scheme provided by the application, the following description is given through specific embodiments.
Example 1
The following embodiment takes a robot as an example of the intelligent device to describe the mark identification positioning method provided by the embodiment of the application. Referring to FIG. 1, the mark identification positioning method in the embodiment of the application includes:
Step 101, collecting a real-time environment image;
in the embodiment of the application, one or more cameras can be pre-loaded on the robot body, and based on the one or more cameras, real-time environment images of the environment where the robot is located can be acquired by the cameras loaded on the robot body. Alternatively, considering that the robot may need to confirm its own environment only when moving, it may be detected whether the robot is in a moving state first; if the robot is in a moving state, acquiring a real-time environment image through a camera carried by the robot; if the robot is in a static state, the camera does not need to be started. Further, because the moving speed of the robot is not too high, the displacement of the robot in a short period of time is small, based on the moving speed, the real-time environment image can be periodically acquired through the camera carried by the robot, and the waste of robot system resources caused by the fact that the camera is in a working state for a long time is avoided.
Step 102, in the real-time environment image, screening according to a preset boundary outer frame and a preset anchor point shape to obtain more than one candidate mark;
in the embodiment of the application, a novel special mark is provided. For a better explanation of the steps of the embodiments of the present application, the specific mark is described below, as shown in fig. 2: the mark has four circles in total, including three large circles and one small circle, and the four circles are set to different colors in practical applications, for example, the large circle in the upper left corner may be set to red, the large circle in the lower left corner may be set to blue, and the large circle in the upper right corner may be set to green to highlight the mark. The three large circles are specifically main positioning circles, and the three large circles are used for determining the origin of the mark and the direction of the coordinate axis; the small circle is an auxiliary positioning circle, and forms four positioning anchor points together with the three large circles, so that the relation between the mark coordinate system and the camera coordinate system is determined together. Specifically, in the sign coordinate system, the upper left corner great circle is used as the origin of the sign coordinate system, the upper left corner great circle and the upper right corner great circle construct the X axis of the sign coordinate system, the upper left corner great circle and the lower left corner great circle construct the Y axis of the sign coordinate system, and the Z axis is determined by the right hand coordinate system. Further, the flag ID is provided in the middle of the flag, and is indicated by a character a in fig. 2, but of course, the flag ID may be any other character, and is not limited thereto. The boundary outer frame of the mark is in a circular arc square frame shape, the boundary outer frame can be set to be gray, and the mark can be positioned faster through the boundary outer frame; the four positioning anchor points inside the mark approximately form a rectangle, and the shape formed by the positioning anchor points can be deformed in consideration of the fact that the robot observes the mark from different angles, so that the shape of the anchor points can be set to be a parallelogram.
In order to accurately identify and obtain the marks in the environment, after the real-time environment image is obtained, the robot can screen and obtain more than one candidate mark according to the preset boundary outer frame and the preset anchor point shape, that is, the candidate marks obtained after screening all have the same or similar structures. Further, the step 102 may be expressed as:
A1, performing target recognition on the real-time environment image to obtain more than one pattern to be screened;
after the robot acquires the real-time environment image, the robot can first perform object recognition on the real-time environment image to obtain all objects existing in the real-time environment image, namely, the patterns to be screened. Optionally, if any pattern to be screened cannot be obtained in the real-time environment image through target identification, the current environment of the robot can be considered to have no content related to the mark, the frame of real-time environment image can be discarded first, and the operation of screening and identifying is performed after the next frame of real-time environment image is acquired.
A2, extracting the contour of each pattern to be screened;
after obtaining more than one pattern to be screened, the contour of each pattern to be screened can be further obtained, specifically, the patterns to be screened can be firstly segmented through self-adaptive thresholds to obtain the preliminary contour of each pattern to be screened, then in order to obtain smoother contour lines, contour filtering processing can be carried out on the preliminary contour of each pattern to be screened to remove noise-removing points, and smoother contours of each pattern to be screened can be obtained.
A3, matching the contour of each pattern to be screened with the boundary outer frame;
the explanation of the special mark provided by the embodiment of the application shows that the special mark has a special boundary outer frame and is in a circular arc square frame form, and based on the special mark, the outline of each pattern to be screened can be respectively matched with the preset boundary outer frame.
A4, if a mark to be screened exists, detecting whether the mark to be screened meets preset screening conditions;
if the outline of the pattern to be screened can be matched with the preset boundary outer frame, the pattern to be screened can be preliminarily determined to be a mark to be screened, so that screening operation can be further carried out later. Specifically, the positioning anchor points contained in the to-be-screened markers can be obtained again, and the to-be-screened markers are detected through preset screening conditions, wherein the screening conditions are as follows: the number of the positioning anchor points contained in the to-be-screened mark is preset, and the connecting lines between the centers of the positioning anchor points can construct the anchor point shape. As can be seen from fig. 2, the preset number is 4; the anchor point is in a parallelogram shape; that is, when a certain marker to be screened includes four positioning anchor points, and the four positioning anchor points can form a parallelogram, the marker to be screened can be determined as a candidate marker. Otherwise, if the outline of the pattern to be screened cannot be matched with the preset boundary outer frame, the pattern to be screened is considered to be not a special mark focused by the robot, and the pattern to be screened can be screened out at the moment and is not subjected to subsequent operation; or if the to-be-screened mark does not meet the preset screening condition, the to-be-screened mark is considered to be a special mark concerned by the robot, and the to-be-screened mark can be screened out at the moment and is not subjected to subsequent operation;
A5, determining the mark to be screened that meets the screening conditions as a candidate mark.
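A minimal sketch of the parallelogram check in step A4, assuming NumPy; it uses the fact that the diagonals of a parallelogram bisect each other, the tolerance is an assumed value to absorb perspective distortion, and the four centers are assumed to be ordered around the shape:

    import numpy as np


    def forms_parallelogram(centers, tol=0.05):
        # centers: four (x, y) anchor-point centers, ordered around the shape.
        p = np.asarray(centers, dtype=float)
        if p.shape != (4, 2):
            return False
        mid_ac = (p[0] + p[2]) / 2.0          # midpoint of one diagonal
        mid_bd = (p[1] + p[3]) / 2.0          # midpoint of the other diagonal
        scale = np.linalg.norm(p[0] - p[2])   # diagonal length as size reference
        return np.linalg.norm(mid_ac - mid_bd) <= tol * scale


    # A mark to be screened passes only with exactly four anchor points
    # whose centers construct the anchor point shape, e.g.:
    # keep = len(anchors) == 4 and forms_parallelogram([a.center for a in anchors])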
Step 103, identifying the mark ID of each candidate mark;
In the embodiment of the application, after more than one candidate mark is obtained by screening, the mark ID of each candidate mark can be identified. The mark IDs of different marks differ, and within one environment a mark ID can be considered to refer uniquely to a particular mark; that is, the mark IDs of different special marks in the same environment do not overlap. Optionally, the step 103 includes:
B1, positioning the positioning anchor points in a target candidate mark;
In this embodiment, the processing flow for identifying the mark ID is the same for every candidate mark; therefore, to describe the operation of step 103 more clearly, an arbitrary candidate mark is taken as the target candidate mark, and the specific implementation flow of step 103 is explained with respect to it. That is, each candidate mark may in turn be taken as the target candidate mark, and the operations of steps B1 to B3 performed on it. Specifically, the positioning anchor points refer to the three large circles and the one small circle in FIG. 2; that is, the three large circles and the one small circle in the mark form the positioning anchor points of the candidate mark.
B2, performing perspective transformation on the target candidate mark according to the position of the positioning anchor point in the target candidate mark in the real-time environment image to obtain a front view of the target candidate mark;
Because the target candidate mark is preset, its physical size in real life is known to the robot in advance; that is, the actual physical coordinates of the four positioning anchor points (i.e., the four circles in FIG. 2) on the target candidate mark are known to the robot. In step B2, the robot has further obtained the positions of the positioning anchor points of the target candidate mark in the real-time environment image, that is, the pixel coordinates corresponding to the four positioning anchor points; the robot can therefore obtain the front view of the target candidate mark according to the perspective projection model of the camera. A sketch of this warp is given below.
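A minimal sketch of step B2 with OpenCV; anchor_px are the four detected anchor centers in pixels, anchor_plane their known physical coordinates on the mark plane (both in the same order), and the output size is an assumed value:

    import cv2
    import numpy as np


    def front_view(image, anchor_px, anchor_plane, size=200):
        # Map the four detected anchor centers onto their known physical
        # layout on the mark plane, scaled into a size x size front view.
        src = np.asarray(anchor_px, dtype=np.float32)
        dst = np.asarray(anchor_plane, dtype=np.float32)
        dst = (dst - dst.min(axis=0)) / np.ptp(dst, axis=0) * (size - 1)
        h = cv2.getPerspectiveTransform(src, dst.astype(np.float32))
        return cv2.warpPerspective(image, h, (size, size))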
B3, carrying out image recognition on the front view to determine the mark ID of the target candidate mark.
After the front view of the target candidate mark is obtained, image recognition can be performed directly on the front view; specifically, the character string contained in the front view is recognized, thereby obtaining the mark ID of the target candidate mark.
Step 104, determining a target mark among the candidate marks according to the mark IDs of the candidate marks;
In the embodiment of the application, the mark IDs of the candidate marks can be checked in turn by a trained model carried in the robot to determine which candidate marks are valid and which are invalid; the candidate mark whose mark ID is valid is obtained through this process, and this candidate mark is the target mark.
Step 105, positioning according to the target mark.
In the embodiment of the application, because the target mark is a special mark, four positioning anchor points contained in the target mark can be used for positioning; thus, after the target mark is determined, a positioning operation can be performed based on the target mark. Optionally, referring to fig. 3, the step 105 specifically includes:
step 1051, obtaining camera parameters and distortion coefficients;
In the embodiment of the application, the camera parameters of the camera mounted on the intelligent device, such as a robot, can be obtained. The camera parameters specifically comprise the camera extrinsic parameters and the camera intrinsic parameters, and the distortion coefficients comprise radial distortion coefficients and tangential distortion coefficients.
Step 1052, obtaining the image coordinates and corresponding space physical coordinates of the target mark;
In the embodiment of the present application, similarly to step B2 above, since the target mark is preset, its physical size in real life is known to the robot in advance; that is, the actual physical coordinates of the four positioning anchor points (i.e., the four circles in FIG. 2) on the target mark are known to the robot. On this basis, the image coordinates of the target mark and the corresponding spatial physical coordinates can be determined.
Step 1053, obtaining a coordinate transformation relationship between the mark coordinate system and the camera coordinate system based on the camera parameters, the distortion coefficients, the image coordinates of the target mark and the corresponding spatial physical coordinates;
In the embodiment of the application, according to the perspective projection model of the camera, the positional relation between the mark coordinate system and the camera coordinate system, namely the coordinate transformation relation between the mark coordinate system and the camera coordinate system, can be calculated by Perspective-n-Point (PnP) from the known camera parameters, distortion coefficients, image coordinates of the target mark and corresponding spatial physical coordinates. A sketch is given below.
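A minimal sketch of this computation with OpenCV's PnP solver; the planar-target flag is an assumed choice, and K and dist are the intrinsic matrix and distortion coefficients obtained in step 1051:

    import cv2
    import numpy as np


    def mark_pose_in_camera(object_pts, image_pts, K, dist):
        # object_pts: 4x3 anchor coordinates in the mark coordinate system,
        #             coplanar (z = 0 in the mark frame), as IPPE requires.
        # image_pts:  4x2 pixel coordinates of the same anchors in the image.
        # Returns R (3x3) and t (3x1) with X_camera = R @ X_mark + t.
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(object_pts, dtype=np.float64),
            np.asarray(image_pts, dtype=np.float64),
            K, dist, flags=cv2.SOLVEPNP_IPPE)  # planar-target solver (assumed)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
        return R, tvec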
Step 1054, determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relation.
In the embodiment of the application, the device to be positioned refers to the intelligent device, such as a robot, an unmanned vehicle or an indoor unmanned aerial vehicle, to which the scheme of the embodiment of the application is applied. Once the coordinate transformation relation between the mark coordinate system and the camera coordinate system is known, the positional relation between the target mark and the camera is obtained; since the camera is mounted on the intelligent device, the positional relation between the target mark and the camera can be taken as the positional relation between the target mark and the intelligent device, and on this basis the relative position of the intelligent device and the target mark can be determined. Further, after the relative position of the intelligent device and the target mark is determined, since the position of the target mark in the global map is known, the position of the intelligent device in the global map can be determined from that relative position; the navigation route is then updated according to the position of the intelligent device in the global map and the destination of the current action, so that the intelligent device can achieve autonomous navigation. A sketch of this pose composition is given below.
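A minimal sketch, assuming NumPy, of how the device pose in the global map could be composed once the mark-to-camera transform (R_mc, t_mc, from the PnP step above) and the known mark-to-map transform (R_gm, t_gm) are available; the camera frame stands in for the device frame, as described above:

    import numpy as np


    def device_pose_in_map(R_mc, t_mc, R_gm, t_gm):
        # R_mc, t_mc: mark -> camera transform (X_cam = R_mc @ X_mark + t_mc).
        # R_gm, t_gm: mark -> map transform, known from where the mark was placed.
        # Invert mark -> camera to get the camera pose in the mark frame.
        R_cm = R_mc.T
        t_cm = -R_mc.T @ t_mc
        # Chain camera -> mark -> map.
        R = R_gm @ R_cm
        t = R_gm @ t_cm + t_gm
        return R, t  # camera (device) pose in the global map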
From the above, the embodiment of the application provides a novel special mark. Intelligent devices such as robots, unmanned vehicles and indoor unmanned aerial vehicles can identify the target mark during movement through operations such as contour extraction and mark ID identification, so that the special mark can be identified and positioned, and autonomous navigation of the intelligent device is realized on this basis. The identification process of the special mark no longer places a high computing-power requirement on the embedded hardware platform, and real-time visual positioning of the special mark by the intelligent device can be realized without increased cost.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of the present application.
Example two
The second embodiment of the present application provides a mark identification positioning device, which may be integrated in an intelligent device capable of controlling its own movement, such as a robot, an unmanned vehicle or an indoor unmanned aerial vehicle. As shown in FIG. 4, the mark identification positioning device 400 in the embodiment of the present application includes:
an acquisition unit 401 for acquiring a real-time environment image;
a screening unit 402, configured to screen to obtain more than one candidate mark according to a preset boundary outline and a preset anchor point shape in the real-time environment image;
an identification unit 403 for identifying a mark ID of each candidate mark;
a determining unit 404 for determining a target mark among the candidate marks according to the mark IDs of the respective candidate marks;
and a positioning unit 405, configured to position according to the target mark.
Optionally, referring to fig. 5, the positioning unit 405 includes:
a camera parameter acquiring subunit 4051, configured to acquire camera parameters and distortion coefficients;
a coordinate parameter obtaining subunit 4052, configured to obtain the image coordinates and the corresponding spatial physical coordinates of the target mark;
a transformation relation determining subunit 4053, configured to obtain a coordinate transformation relation between the mark coordinate system and the camera coordinate system based on the camera parameter, the distortion coefficient, the image coordinate of the target mark, and the corresponding spatial physical coordinate;
the relative position determining subunit 4054 is configured to determine, according to the coordinate transformation relationship, a relative position of the device to be positioned and the target mark.
Optionally, the mark identification positioning device 400 further includes:
a map position determining unit, configured to determine the position of the device to be positioned in a global map according to the relative position of the device to be positioned and the target mark;
and a navigation route updating unit, configured to update the navigation route according to the position of the device to be positioned in the global map and the destination of the current action.
Optionally, the screening unit 402 includes:
the target recognition subunit is used for carrying out target recognition on the real-time environment image to obtain more than one pattern to be screened;
the contour extraction subunit is used for extracting and obtaining the contour of each pattern to be screened;
the outline matching subunit is used for respectively matching the outline of each pattern to be screened with the boundary outer frame;
the anchor point detection subunit is configured to, if a mark to be screened exists, detect whether the mark to be screened meets preset screening conditions, where the mark to be screened is a pattern to be screened whose contour is successfully matched with the boundary outer frame, and the screening conditions are: the mark to be screened contains a preset number of positioning anchor points, and the lines connecting the centers of the positioning anchor points can construct the anchor point shape;
and the candidate determining subunit is used for determining the to-be-screened mark meeting the screening condition as a candidate mark.
Optionally, the contour extraction subunit includes:
the segmentation subunit is used for respectively segmenting each pattern to be screened through the self-adaptive threshold value to obtain the preliminary contour of each pattern to be screened;
and the filtering subunit is used for carrying out contour filtering treatment on the preliminary contours of the patterns to be screened to obtain the contours of the patterns to be screened.
Optionally, the identifying unit 403 includes:
an anchor point positioning subunit, configured to position a positioning anchor point in a target candidate mark, where the target candidate mark is any candidate mark;
a perspective transformation subunit, configured to perform perspective transformation on the target candidate mark according to the position of the positioning anchor point in the target candidate mark in the real-time environment image, so as to obtain a front view of the target candidate mark;
and a mark ID determining subunit for performing image recognition on the front view to determine the mark ID of the target candidate mark.
From the above, the embodiment of the application provides a novel special mark. Intelligent devices such as robots, unmanned vehicles and indoor unmanned aerial vehicles can identify the target mark during movement through operations such as contour extraction and mark ID identification, so that the special mark can be identified and positioned, and autonomous navigation of the intelligent device is realized on this basis. The identification process of the special mark no longer places a high computing-power requirement on the embedded hardware platform, and real-time visual positioning of the special mark by the intelligent device can be realized without increased cost.
Example III
The third embodiment of the present application provides an intelligent device, which may be a robot, an unmanned vehicle, an indoor unmanned aerial vehicle, or the like; the application is not limited here. Referring to FIG. 6, the intelligent device 6 in the embodiment of the present application includes: a memory 601, one or more processors 602 (only one is shown in FIG. 6), and a computer program stored in the memory 601 and executable on the processors. The memory 601 is used for storing software programs and modules, and the processor 602 executes various functional applications and performs data processing by running the software programs and units stored in the memory 601, so as to acquire resources corresponding to preset events. Specifically, the processor 602 implements the following steps by running the above computer program stored in the memory 601:
collecting a real-time environment image;
in the real-time environment image, screening according to a preset boundary outer frame and a preset anchor point shape to obtain more than one candidate mark;
identifying a mark ID of each candidate mark;
determining a target mark in the candidate marks according to mark IDs of the candidate marks;
positioning according to the target mark.
The foregoing is assumed to be a first possible implementation manner. In a second possible implementation manner provided on the basis of the first possible implementation manner, the positioning according to the target mark includes:
acquiring camera parameters and distortion coefficients;
acquiring the image coordinates and corresponding space physical coordinates of the target mark;
obtaining a coordinate transformation relation between a mark coordinate system and a camera coordinate system based on the camera parameters, the distortion coefficients, the image coordinates of the target mark and the corresponding space physical coordinates;
and determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relation.
In a third possible implementation manner provided on the basis of the second possible implementation manner, after the determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relation, the processor 602 further implements the following steps by running the computer program stored in the memory 601:
determining the position of the device to be positioned in the global map according to the relative positions of the device to be positioned and the target mark;
and updating the navigation route according to the position of the device to be positioned in the global map and the destination of the action.
In a fourth possible implementation manner provided on the basis of the first, second, or third possible implementation manner, the screening, in the real-time environment image, according to a preset boundary outer frame and a preset anchor point shape to obtain more than one candidate mark includes:
performing target recognition on the real-time environment image to obtain more than one pattern to be screened;
extracting to obtain the outline of each pattern to be screened;
respectively matching the outline of each pattern to be screened with the boundary outer frame;
if a mark to be screened exists, detecting whether the mark to be screened meets preset screening conditions, where the mark to be screened is a pattern to be screened whose contour is successfully matched with the boundary outer frame, and the screening conditions are: the mark to be screened contains a preset number of positioning anchor points, and the lines connecting the centers of the positioning anchor points can construct the anchor point shape;
and determining the mark to be screened which meets the screening conditions as a candidate mark.
In a fifth possible implementation manner provided on the basis of the fourth possible implementation manner, the extracting the contour of each pattern to be screened includes:
dividing each pattern to be screened through a self-adaptive threshold value to obtain a preliminary contour of each pattern to be screened;
and performing contour filtering processing on the preliminary contours of the patterns to be screened to obtain the contours of the patterns to be screened.
In a sixth possible implementation manner provided on the basis of the first, second, or third possible implementation manner, the identifying a mark ID of each candidate mark includes:
positioning a positioning anchor point in a target candidate mark, wherein the target candidate mark is any candidate mark;
according to the position of the positioning anchor point in the target candidate mark in the real-time environment image, performing perspective transformation on the target candidate mark to obtain a front view of the target candidate mark;
and performing image recognition on the front view to determine the mark ID of the target candidate mark.
It should be appreciated that, in the embodiments of the present application, the processor 602 may be a central processing unit (Central Processing Unit, CPU), and may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Memory 601 may include read only memory and random access memory and provides instructions and data to processor 602. Some or all of the memory 601 may also include non-volatile random access memory. For example, the memory 601 may also store information of a device class.
From the above, the embodiment of the application provides a novel special mark. The intelligent device can identify the target mark during movement through operations such as contour extraction and mark ID identification, so that the special mark can be identified and positioned, and autonomous navigation of the intelligent device is realized on this basis. The identification process of the special mark no longer places a high computing-power requirement on the embedded hardware platform, and real-time visual positioning of the special mark by the intelligent device can be realized without increased cost.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated as an example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described here again.
In the foregoing embodiments, each embodiment is described with its own emphasis. For parts that are not described or detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the system embodiments described above are merely illustrative; the division of the modules or units described above is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The integrated unit described above, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program. The computer program may be stored in a computer readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer readable memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable storage medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. A mark identification positioning method, comprising:
when an intelligent device is in a moving state, acquiring a real-time environment image through a camera carried by the intelligent device;
in the real-time environment image, screening according to a preset boundary outer frame and a preset anchor point shape to obtain more than one candidate mark;
identifying a mark ID of each candidate mark;
determining a target mark among the candidate marks according to the mark IDs of the candidate marks, wherein the target mark comprises four circles with different colors, the four circles being three large circles and one small circle; and
positioning according to the target mark, wherein the three large circles serve as main positioning circles for determining the origin of the target mark and the directions of the coordinate axes in a mark coordinate system, and the small circle serves as an auxiliary positioning circle which, together with the three large circles, determines the relation between the mark coordinate system and a camera coordinate system;
wherein the screening, in the real-time environment image, according to a preset boundary outer frame and a preset anchor point shape to obtain more than one candidate mark comprises:
performing target recognition on the real-time environment image to obtain more than one pattern to be screened;
extracting to obtain the outline of each pattern to be screened;
respectively matching the outline of each pattern to be screened with the boundary outer frame;
if a mark to be screened exists, detecting whether the mark to be screened meets preset screening conditions, wherein the mark to be screened is a pattern to be screened whose contour is successfully matched with the boundary outer frame, and the screening conditions are: the mark to be screened contains a preset number of positioning anchor points, and the lines connecting the centers of the positioning anchor points can construct the anchor point shape;
and determining the mark to be screened which meets the screening conditions as a candidate mark.
2. The mark identification positioning method according to claim 1, wherein the positioning according to the target mark comprises:
acquiring camera parameters and distortion coefficients;
acquiring the image coordinates and the corresponding space physical coordinates of the target mark;
obtaining a coordinate transformation relation between a mark coordinate system and a camera coordinate system based on the camera parameters, the distortion coefficients, the image coordinates of the target mark and the corresponding space physical coordinates;
and determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relation.
3. The mark identification positioning method according to claim 2, wherein after the determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relation, the mark identification positioning method further comprises:
determining the position of the device to be positioned in a global map according to the relative positions of the device to be positioned and the target mark;
and updating a navigation route according to the position of the device to be positioned in the global map and the destination of the action.
4. The mark identification positioning method according to claim 1, wherein the extracting the contour of each pattern to be screened comprises:
dividing each pattern to be screened through a self-adaptive threshold value to obtain a preliminary contour of each pattern to be screened;
and performing contour filtering processing on the preliminary contours of the patterns to be screened to obtain the contours of the patterns to be screened.
5. The mark identification positioning method according to any one of claims 1 to 3, wherein the identifying a mark ID of each candidate mark comprises:
positioning a positioning anchor point in a target candidate mark, wherein the target candidate mark is any candidate mark;
performing perspective transformation on the target candidate mark according to the position of the positioning anchor point in the target candidate mark in the real-time environment image to obtain a front view of the target candidate mark;
and carrying out image recognition on the front view to determine the mark ID of the target candidate mark.
6. A mark identification positioning device, comprising:
an acquisition unit, configured to acquire a real-time environment image through a camera carried by an intelligent device when the intelligent device is in a moving state;
the screening unit is used for screening and obtaining more than one candidate mark according to the preset boundary outer frame and the preset anchor point shape in the real-time environment image;
an identification unit for identifying a mark ID of each candidate mark;
a determining unit, configured to determine a target mark among the candidate marks according to the mark IDs of the candidate marks, wherein the target mark comprises four circles with different colors, the four circles being three large circles and one small circle;
a positioning unit, configured to position according to the target mark, wherein the three large circles serve as main positioning circles for determining the origin of the target mark and the directions of the coordinate axes in a mark coordinate system, and the small circle serves as an auxiliary positioning circle which, together with the three large circles, determines the relation between the mark coordinate system and a camera coordinate system;
wherein, the screening unit includes:
the target recognition subunit is used for carrying out target recognition on the real-time environment image to obtain more than one pattern to be screened;
the contour extraction subunit is used for extracting and obtaining the contour of each pattern to be screened;
the outline matching subunit is used for respectively matching the outline of each pattern to be screened with the boundary outer frame;
the anchor point detection subunit is configured to, if a mark to be screened exists, detect whether the mark to be screened meets preset screening conditions, wherein the mark to be screened is a pattern to be screened whose contour is successfully matched with the boundary outer frame, and the screening conditions are: the mark to be screened contains a preset number of positioning anchor points, and the lines connecting the centers of the positioning anchor points can construct the anchor point shape;
and the candidate determining subunit is used for determining the to-be-screened mark meeting the screening condition as a candidate mark.
7. The mark identification positioning device according to claim 6, wherein the positioning unit comprises:
a camera parameter obtaining subunit, configured to obtain a camera parameter and a distortion coefficient;
the coordinate parameter acquisition subunit is used for acquiring the image coordinates and the corresponding space physical coordinates of the target mark;
a transformation relation determining subunit, configured to obtain a coordinate transformation relation between the mark coordinate system and the camera coordinate system based on the camera parameter, the distortion coefficient, the image coordinate of the target mark, and the corresponding spatial physical coordinate;
and the relative position determining subunit is used for determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relation.
8. An intelligent device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 5 when executing the computer program.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 5.
CN201911354811.0A 2019-12-25 2019-12-25 Mark identification positioning method, mark identification positioning device and intelligent equipment Active CN111191557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911354811.0A CN111191557B (en) 2019-12-25 2019-12-25 Mark identification positioning method, mark identification positioning device and intelligent equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911354811.0A CN111191557B (en) 2019-12-25 2019-12-25 Mark identification positioning method, mark identification positioning device and intelligent equipment

Publications (2)

Publication Number Publication Date
CN111191557A CN111191557A (en) 2020-05-22
CN111191557B true CN111191557B (en) 2023-12-05

Family

ID=70709348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911354811.0A Active CN111191557B (en) 2019-12-25 2019-12-25 Mark identification positioning method, mark identification positioning device and intelligent equipment

Country Status (1)

Country Link
CN (1) CN111191557B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686355B (en) * 2021-01-12 2024-01-05 树根互联股份有限公司 Image processing method and device, electronic equipment and readable storage medium
CN115187769A (en) * 2022-07-11 2022-10-14 杭州海康机器人技术有限公司 Positioning method and device
CN115457144B (en) * 2022-09-07 2023-08-15 梅卡曼德(北京)机器人科技有限公司 Calibration pattern recognition method, calibration device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009035697A1 (en) * 2007-09-13 2009-03-19 Cognex Corporation System and method for traffic sign recognition
CN103020632A (en) * 2012-11-20 2013-04-03 北京航空航天大学 Fast recognition method for positioning mark point of mobile robot in indoor environment
CN109271937A (en) * 2018-09-19 2019-01-25 深圳市赢世体育科技有限公司 Athletic ground Marker Identity method and system based on image procossing
CN109977935A (en) * 2019-02-27 2019-07-05 平安科技(深圳)有限公司 A kind of text recognition method and device
CN109993790A (en) * 2017-12-29 2019-07-09 深圳市优必选科技有限公司 Marker, marker forming method, positioning method and positioning device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8300928B2 (en) * 2008-01-25 2012-10-30 Intermec Ip Corp. System and method for locating a target region in an image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009035697A1 (en) * 2007-09-13 2009-03-19 Cognex Corporation System and method for traffic sign recognition
CN103020632A (en) * 2012-11-20 2013-04-03 北京航空航天大学 Fast recognition method for positioning mark point of mobile robot in indoor environment
CN109993790A (en) * 2017-12-29 2019-07-09 深圳市优必选科技有限公司 Marker, marker forming method, positioning method and positioning device
CN109271937A (en) * 2018-09-19 2019-01-25 深圳市赢世体育科技有限公司 Athletic ground Marker Identity method and system based on image procossing
CN109977935A (en) * 2019-02-27 2019-07-05 平安科技(深圳)有限公司 A kind of text recognition method and device

Also Published As

Publication number Publication date
CN111191557A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN111191557B (en) Mark identification positioning method, mark identification positioning device and intelligent equipment
JP6259928B2 (en) Lane data processing method, apparatus, storage medium and equipment
CN110443210B (en) Pedestrian tracking method and device and terminal
CN110781836A (en) Human body recognition method and device, computer equipment and storage medium
CN110502983B (en) Method and device for detecting obstacles in expressway and computer equipment
CN112308913B (en) Vehicle positioning method and device based on vision and vehicle-mounted terminal
CN112149649B (en) Road spray detection method, computer equipment and storage medium
CN111295666A (en) Lane line detection method, device, control equipment and storage medium
CN111414826A (en) Method, device and storage medium for identifying landmark arrow
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
Chang et al. An efficient method for lane-mark extraction in complex conditions
CN113345015A (en) Package position detection method, device and equipment and readable storage medium
CN114841910A (en) Vehicle-mounted lens shielding identification method and device
CN116343085A (en) Method, system, storage medium and terminal for detecting obstacle on highway
CN110673607B (en) Feature point extraction method and device under dynamic scene and terminal equipment
FAN et al. Robust lane detection and tracking based on machine vision
Chen et al. Image segmentation based on mathematical morphological operator
CN114332814A (en) Parking frame identification method and device, electronic equipment and storage medium
JP2019121356A (en) Interference region detection apparatus and method, and electronic apparatus
CN111860084A (en) Image feature matching and positioning method and device and positioning system
US11138447B2 (en) Method for detecting raised pavement markers, computer program product and camera system for a vehicle
CN114581890B (en) Method and device for determining lane line, electronic equipment and storage medium
WO2016207749A1 (en) A device and method of detecting potholes
WO2018110377A1 (en) Video monitoring device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240919

Address after: No. 60, Guohe Road, Yangpu District, Shanghai 200082

Patentee after: Shanghai youbijie Education Technology Co.,Ltd.

Country or region after: China

Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen UBTECH Technology Co.,Ltd.

Country or region before: China