CN117151140B - Target identification code identification method, device and computer readable storage medium - Google Patents

Target identification code identification method, device and computer readable storage medium

Info

Publication number
CN117151140B
CN117151140B (application CN202311403576.8A)
Authority
CN
China
Prior art keywords
image
calibration
detection frame
camera
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311403576.8A
Other languages
Chinese (zh)
Other versions
CN117151140A (en)
Inventor
姚结兵
薛远
刘刚
陈鹏华
黄丽标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Ronds Science & Technology Inc Co
Original Assignee
Anhui Ronds Science & Technology Inc Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Ronds Science & Technology Inc Co filed Critical Anhui Ronds Science & Technology Inc Co
Priority to CN202311403576.8A
Publication of CN117151140A
Application granted
Publication of CN117151140B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10: by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14: using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404: Methods for optical code recognition
    • G06K 7/1408: the method being specifically adapted for the type of code
    • G06K 7/1413: 1D bar codes
    • G06K 7/1417: 2D bar codes
    • G06K 7/1439: including a method step for retrieval of the optical code
    • G06K 7/1443: locating of the code in an image
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10: Character recognition
    • G06V 30/14: Image acquisition
    • G06V 30/146: Aligning or centring of the image pick-up or image-field
    • G06V 30/1465: by locating a pattern
    • G06V 30/19: Recognition using electronic means
    • G06V 30/19007: Matching; Proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Toxicology (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a method, an apparatus, and a computer readable storage medium for identifying a target object identification code. A camera moves along a set track and shoots a current frame image, and target detection is performed on the current frame to obtain an initial detection frame. According to the actual distance the camera has moved on the set track, the calibration image shot at the same position is retrieved from a calibration video. The initial detection frame is then matched against the calibration detection frame; if they match, the identification code of the target object in the initial detection frame is the identification code associated with the calibration detection frame. The method therefore performs only target detection on the target object in the image, so that bar codes, two-dimensional codes, text, or other unique identifiers placed on the target object for image recognition can be omitted or reduced. This lowers the likelihood that image recognition is affected by environmental conditions, and calibration against the calibration video improves the recognition accuracy of the target object identification code.

Description

Target identification code identification method, device and computer readable storage medium
Technical Field
The present invention relates to the field of computer vision detection technology, and in particular to a method and apparatus for identifying a target object identification code, and a computer readable storage medium.
Background
In scenarios with multiple target objects, identification of the target object identification code is involved, for example identifying the carrier roller IDs of the multiple carrier rollers of a belt conveyor. For an inspection robot on a belt conveyor, counting and detecting carrier rollers is a very important function. The carrier roller, one of the core components of the belt conveyor, supports the material, transmits power, and keeps the conveyor running stably. Confirming the position and serial number of each carrier roller in the inspection robot's video makes it possible to monitor equipment state in time, prevent faults, and optimize operating efficiency, and provides important support for data analysis and trend prediction. Currently, common methods for identifying the carrier roller ID include bar codes/two-dimensional codes, Radio Frequency Identification (RFID), laser-engraved markings, and the like.
In the prior art, a camera is mounted on an inspection robot, and the ID of a carrier roller is determined by shooting an image of the roller. Existing carrier roller ID identification methods process the acquired image and extract specific features from the roller image, such as a bar code, two-dimensional code, text, or other unique identifier. An identification algorithm matches the extracted features against pre-stored carrier roller ID information to determine the unique ID of the roller, and transmits it to a connected computer or system for subsequent processing and storage.
However, image recognition methods that extract specific features from the roller image, such as a bar code, two-dimensional code, text, or other unique identifier, can be affected by environmental conditions, such as unstable lighting or a dirty roller surface, which may lead to erroneous recognition or reduced accuracy.
Disclosure of Invention
An objective of the embodiments of the present application is to provide a method and an apparatus for identifying a target object identification code, and a computer readable storage medium, so as to solve the problem in the prior art that image recognition methods which extract specific features from the carrier roller image, such as a bar code, two-dimensional code, text, or other unique identifier, can be affected by environmental conditions such as unstable lighting or a dirty roller surface, leading to erroneous recognition or reduced accuracy of the carrier roller ID.
The identification method of the target object identification code provided by the embodiment of the application comprises the following steps:
acquiring a current frame image containing one or more targets and camera pose information corresponding to the current frame image; the current frame image is obtained by moving and shooting a camera on a set track;
performing target detection on the current frame image to obtain an initial detection frame of each target object in the current frame image;
obtaining a corresponding calibration image in the calibration video and calibration pose information corresponding to the calibration image according to the actual distance the camera has moved on the set track; the calibration image comprises one or more target objects, and a calibration detection frame and an identification code for each target object;
in the case that the camera pose information and the calibration pose information are consistent, determining the identification code of the target object in the current frame image by comparing the positions of the initial detection frame and the calibration detection frame.
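The steps above can be sketched as a single identification pass. The patent does not prescribe concrete implementations, so the detector, calibration store, pose compensation, and overlap measure are injected as callables, and every name in this sketch is a hypothetical illustration:

```python
# Hedged sketch of the claimed flow. All components and names are illustrative
# stand-ins, not part of the patented method's actual implementation.

def identify(frame, pose, distance, detect, lookup, adjust, overlap):
    """Return the identification codes matched for one current frame image."""
    calib = lookup(distance)          # calibration frame shot at the same track distance
    codes = []
    for box in detect(frame):         # initial detection frames
        if pose != calib["pose"]:     # pose mismatch: convert to a calibrated frame
            box = adjust(box, pose, calib["pose"])
        # the calibration detection frame overlapping most supplies the code
        codes.append(max(calib["boxes"], key=lambda c: overlap(box, calib["boxes"][c])))
    return codes

# Toy demonstration with stand-in components (an identity pose adjustment and a
# crude left-edge-distance overlap score, purely for illustration):
detect = lambda f: [(0, 0, 10, 10)]
lookup = lambda d: {"pose": (0, 0, 0),
                    "boxes": {"roller-7": (1, 0, 11, 10), "roller-8": (50, 0, 60, 10)}}
overlap = lambda a, b: -abs(a[0] - b[0])   # closer left edge = better match
print(identify(None, (0, 0, 0), 1.0, detect, lookup, lambda b, p, q: b, overlap))
# prints ['roller-7']
```

In a real deployment the injected `overlap` would be the intersection ratio described later in the document, and `adjust` the angle-deviation compensation.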
In this technical scheme, the camera moves along the set track and shoots the current frame image, and target detection is performed on the current frame to obtain the initial detection frame. According to the actual distance the camera has moved on the set track, the calibration image shot at the same position is retrieved from the calibration video; the calibration image carries the calibration detection frame and the identification code of each target object. In the case that the camera pose information and the calibration pose information are consistent, the identification code of the target object in the current frame image is determined by comparing the positions of the initial detection frame and the calibration detection frame. The method of this embodiment therefore performs only target detection on the target object in the image, so that bar codes, two-dimensional codes, text, or other unique identifiers placed on the target object for image recognition can be omitted or reduced, lowering the likelihood that image recognition is affected by environmental conditions; and because calibration is performed against the calibration video, the recognition accuracy of the target object identification code is improved. For example, when the target object is a carrier roller, no bar code, two-dimensional code, text, or other unique mark needs to be placed on the roller surface for image recognition of the roller ID, avoiding recognition errors or reduced accuracy caused by environmental factors such as unstable lighting or a dirty roller surface.
In some optional embodiments, after obtaining the calibration image corresponding to the calibration video and the calibration pose information corresponding to the calibration image according to the actual distance the camera has moved on the set track, the method further includes:
in the case that the camera pose information and the calibration pose information are inconsistent, converting the initial detection frame into a calibrated detection frame according to the camera pose information and the calibration pose information; and determining the identification code of the target object in the current frame image by comparing the positions of the calibrated detection frame and the calibration detection frame.
In this technical scheme, the camera moves along the set track and shoots the current frame image, and target detection is performed on the current frame to obtain the initial detection frame. According to the actual distance the camera has moved on the set track, the calibration image shot at the same position is retrieved from the calibration video; the calibration image carries the calibration detection frame and the identification code of each target object. The initial detection frame is then adjusted and matched against the calibration detection frame; if they match, the identification code of the target object in the initial detection frame is the identification code associated with the calibration detection frame. The method of this embodiment therefore performs only target detection on the target object in the image, so that bar codes, two-dimensional codes, text, or other unique identifiers placed on the target object for image recognition can be omitted or reduced, lowering the likelihood that image recognition is affected by environmental conditions; and because calibration is performed against the calibration video, the recognition accuracy of the target object identification code is improved. For example, when the target object is a carrier roller, no bar code, two-dimensional code, text, or other unique mark needs to be placed on the roller surface for image recognition of the roller ID, avoiding recognition errors or reduced accuracy caused by environmental factors such as unstable lighting or a dirty roller surface.
In some alternative embodiments, before acquiring the current frame image containing one or more target objects, the method further includes:
acquiring a calibration video, the calibration pose information corresponding to each frame image in the calibration video, and the calibration distance the camera has moved on the set track;
the calibration video is obtained by moving the camera on the set track and shooting.
In this technical scheme, the calibration video is set up once, and subsequent identifications of target object identification codes are all performed against the same calibration video. The calibration video is likewise obtained by moving the camera on the set track and shooting; the calibration pose information corresponding to each frame image and the calibration distance the camera has moved on the set track are stored during shooting, and target detection is then performed on each frame image in the calibration video to obtain the calibration detection frames. Finally, each calibration detection frame is labelled with the correct identification code, either manually or by automatic labelling followed by manual confirmation.
In some optional embodiments, obtaining the corresponding calibration image in the calibration video and the calibration pose information corresponding to the calibration image according to the actual distance the camera has moved on the set track includes:
acquiring the calibration distance closest in value to the actual distance, and determining the corresponding calibration image according to that calibration distance;
obtaining the corresponding calibration pose information according to the calibration image.
In this technical scheme, among the set of calibration distances of the frame images in the calibration video, the calibration distance closest to the actual distance is found, and the calibration image corresponding to that calibration distance is the image used to calibrate the current frame image.
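A minimal sketch of this nearest-distance lookup, assuming the calibration distances were recorded in travel order (sorted ascending); the list contents and function name are illustrative assumptions:

```python
import bisect

def closest_calibration_index(calib_distances, actual_distance):
    """Index of the calibration frame whose recorded distance is closest to the
    actual distance travelled. calib_distances must be sorted ascending."""
    i = bisect.bisect_left(calib_distances, actual_distance)
    if i == 0:
        return 0
    if i == len(calib_distances):
        return len(calib_distances) - 1
    # pick whichever neighbouring calibration distance is numerically closer
    before, after = calib_distances[i - 1], calib_distances[i]
    return i - 1 if actual_distance - before <= after - actual_distance else i

calib = [0.0, 0.5, 1.0, 1.5, 2.0]             # metres at each calibration frame
print(closest_calibration_index(calib, 1.2))  # prints 2 (the 1.0 m frame)
```

Because frames are shot in travel order, the distance list is naturally sorted, so a binary search suffices instead of scanning every frame.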
In some alternative embodiments, converting the initial detection frame into the calibrated detection frame according to the camera pose information and the calibration pose information includes:
obtaining the camera angle deviation according to the camera pose information and the calibration pose information;
adjusting the coordinates of the initial detection frame according to the camera angle deviation to obtain the calibrated detection frame.
In this technical scheme, during actual detection, the camera angle deviation is obtained from the camera pose information at the moment the camera shoots the current frame image and the calibration pose information recorded when the camera shot the calibration image at the same position in the calibration video. The coordinates of the initial detection frame are adjusted according to the camera angle deviation to obtain the calibrated detection frame, so that its position agrees as closely as possible with the corresponding calibration detection frame in the calibration video.
In some alternative embodiments, the camera angular deviation comprises: a first deviation in the vertical direction of the image, a second deviation in the lateral direction of the image, a third deviation in the optical axis direction of the camera;
adjusting the coordinates of the initial detection frame according to the camera angle deviation to obtain the calibrated detection frame includes:
obtaining the image lateral translation pixel and the image vertical translation pixel according to the camera angle deviation;
moving the initial detection frame according to the image lateral translation pixel and the image vertical translation pixel to obtain the calibrated detection frame.
The first deviation is the difference between the rotation angle of the current camera around the y axis and the rotation angle of the camera around the y axis in the calibration video, and the y axis is the vertical direction of the image; the second deviation is the difference between the rotation angle of the current camera around the x axis and the rotation angle of the camera around the x axis in the calibration video, and the x axis is the transverse direction of the image; the third deviation is the difference between the rotation angle of the current camera around the z axis and the rotation angle of the camera around the z axis in the calibration video, and the z axis is the direction of the optical axis of the camera.
In some optional embodiments, obtaining the image lateral translation pixel and the image vertical translation pixel according to the camera angle deviation includes:
obtaining the first component of the image lateral translation pixel according to the first deviation and the image focal length;
obtaining the first component of the image vertical translation pixel according to the second deviation and the image focal length;
obtaining the second component of the image lateral translation pixel and the second component of the image vertical translation pixel according to the third deviation and the coordinates of the centre point of the detection frame;
the value of the image lateral translation pixel equals the sum of its first and second components; the value of the image vertical translation pixel equals the sum of its first and second components.
In some alternative implementations, the image lateral translation pixel dxc and the image vertical translation pixel dyc are:
dxc = F×tan(a) + x0 - x0×cos(c) + y0×sin(c);
dyc = F×tan(b) + y0 - x0×sin(c) - y0×cos(c);
where dxc is the image lateral translation pixel, dyc is the image vertical translation pixel, x0 is the abscissa of the centre point of the detection frame, y0 is the ordinate of the centre point of the detection frame, F is the image focal length, a is the first deviation about the image vertical direction y, b is the second deviation about the image lateral direction x, and c is the third deviation about the camera optical axis direction z.
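Assuming the standard in-plane rotation model (a point rotated by angle c maps to x' = x0·cos(c) - y0·sin(c), y' = x0·sin(c) + y0·cos(c), so that zero angular deviation yields zero shift), the displacement computation can be transcribed as follows; the function names and the corner-box representation are illustrative assumptions:

```python
import math

def pixel_shift(F, a, b, c, x0, y0):
    """Image lateral / vertical translation pixels (dxc, dyc) for camera angle
    deviations a, b, c (radians) and detection-frame centre (x0, y0).
    Pan/tilt contribute the F*tan(.) terms; c contributes an in-plane rotation."""
    dxc = F * math.tan(a) + x0 - x0 * math.cos(c) + y0 * math.sin(c)
    dyc = F * math.tan(b) + y0 - x0 * math.sin(c) - y0 * math.cos(c)
    return dxc, dyc

def shift_box(box, dxc, dyc):
    """Translate an (x1, y1, x2, y2) detection frame by the computed shifts."""
    x1, y1, x2, y2 = box
    return (x1 + dxc, y1 + dyc, x2 + dxc, y2 + dyc)

# Sanity check: with zero angular deviation the calibrated frame coincides
# with the initial one.
print(pixel_shift(F=800, a=0.0, b=0.0, c=0.0, x0=120.0, y0=80.0))  # prints (0.0, 0.0)
```

Note that the in-plane rotation is applied about the image origin here; if the document's coordinate origin is the image centre, (x0, y0) should be measured from that centre.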
In some alternative embodiments, determining the identification code of the target object in the current frame image by comparing the positions of the calibrated detection frame and the calibration detection frame includes:
calculating the intersection ratio (intersection over union, IoU) of the calibrated detection frame with each calibration detection frame, and matching the calibration detection frame with the maximum intersection-ratio value to the calibrated detection frame; the identification code of the target object corresponding to that calibration detection frame is the identification code of the target object corresponding to the calibrated detection frame.
In some alternative embodiments, the method for calculating the intersection ratio of the calibrated detection frame and the calibration detection frame includes:
determining the union area and the intersection area of the two detection frames according to the positions of the calibrated detection frame and the calibration detection frame;
obtaining the intersection ratio of the two detection frames from the intersection area and the union area.
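A minimal, self-contained sketch of this intersection-ratio computation and the maximum-IoU matching described above, using (x1, y1, x2, y2) corner boxes; the box format and the dictionary mapping identification codes to calibration boxes are illustrative assumptions:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # intersection area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                     # union area
    return inter / union if union > 0 else 0.0

def match_code(calibrated_box, calibration_boxes):
    """Identification code of the calibration detection frame with maximum IoU."""
    return max(calibration_boxes,
               key=lambda code: iou(calibrated_box, calibration_boxes[code]))

codes = {"roller-1": (0, 0, 10, 10), "roller-2": (30, 0, 40, 10)}
print(match_code((1, 0, 11, 10), codes))   # prints roller-1
```

A practical variant might also reject matches whose maximum IoU falls below a threshold, so a detection with no plausible calibration counterpart is reported rather than mis-assigned.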
An embodiment of the present application provides an identification device for a target object identification code, comprising:
the acquisition module is used for acquiring a current frame image containing one or more targets and camera pose information corresponding to the current frame image; the current frame image is obtained by moving and shooting a camera on a set track;
the target detection module is used for carrying out target detection on the current frame image to obtain an initial detection frame of each target object in the current frame image;
the query module is used for obtaining the corresponding calibration image in the calibration video and the calibration pose information corresponding to the calibration image according to the actual distance the camera has moved on the set track; the calibration image comprises one or more target objects, and a calibration detection frame and an identification code for each target object;
the comparison module is used for determining the identification code of the target object in the current frame image by comparing the positions of the initial detection frame and the calibration detection frame in the case that the camera pose information and the calibration pose information are consistent.
In some alternative embodiments, the comparison module is further configured to: in the case that the camera pose information and the calibration pose information are inconsistent, convert the initial detection frame into the calibrated detection frame according to the camera pose information and the calibration pose information; and determine the identification code of the target object in the current frame image by comparing the positions of the calibrated detection frame and the calibration detection frame.
In some alternative embodiments, further comprising:
the calibration video acquisition module is used for acquiring the calibration video, the calibration pose information corresponding to each frame image in the calibration video, and the calibration distance the camera has moved on the set track; the calibration video is obtained by moving the camera on the set track and shooting.
In some alternative embodiments, the query module is further to:
acquire the calibration distance closest in value to the actual distance, and determine the corresponding calibration image according to that calibration distance;
and obtain the corresponding calibration pose information according to the calibration image.
In some alternative embodiments, the comparison module is further configured to:
obtain the camera angle deviation according to the camera pose information and the calibration pose information;
and adjust the coordinates of the initial detection frame according to the camera angle deviation to obtain the calibrated detection frame.
In some alternative embodiments, the camera angular deviation comprises: a first deviation in the vertical direction of the image, a second deviation in the lateral direction of the image, a third deviation in the optical axis direction of the camera; the comparison module is also used for:
obtain the image lateral translation pixel and the image vertical translation pixel according to the camera angle deviation;
and move the initial detection frame according to the image lateral translation pixel and the image vertical translation pixel to obtain the calibrated detection frame.
In some alternative embodiments, the comparison module is further configured to:
and calculating the cross ratio of each calibrated detection frame to the calibrated detection frame, and matching the calibrated detection frame corresponding to the maximum cross ratio value with the calibrated detection frame, wherein the identification code of the target object corresponding to the calibrated detection frame is the identification code of the target object corresponding to the calibrated detection frame.
In some alternative embodiments, the comparison module is further configured to:
determine the union area and the intersection area of the two detection frames according to the positions of the calibrated detection frame and the calibration detection frame;
and obtain the intersection ratio of the two detection frames from the intersection area and the union area.
An electronic device provided in an embodiment of the present application includes: a processor and a memory storing machine-readable instructions executable by the processor; when the instructions are executed by the processor, the method described in any one of the above is performed.
An embodiment of the present application also provides a computer readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method described in any one of the above is performed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered as limiting its scope; a person skilled in the art may obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of steps of a method for identifying an identification code of a target object according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an inspection robot for identifying carrier roller IDs and a plurality of carrier rollers on a conveyor belt according to the present embodiment;
fig. 3 is a schematic diagram of a theoretical position of a calibrated carrier roller obtained by performing translational adjustment on a carrier roller detection frame provided in the present embodiment;
FIG. 4 is a functional block diagram of an identification device for identifying a target object according to an embodiment of the present application;
fig. 5 is a schematic diagram of a possible structure of an electronic device according to an embodiment of the present application.
Reference numerals: 1: acquisition module; 2: target detection module; 3: query module; 4: comparison module; 51: processor; 52: memory; 53: communication interface; 54: communication bus.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a flowchart of steps of a method for identifying an identification code of a target object according to an embodiment of the present application, which specifically includes:
step 100, acquiring a current frame image containing one or more targets and camera pose information corresponding to the current frame image; the current frame image is obtained by moving and shooting a camera on a set track;
step 200, performing target detection on the current frame image to obtain an initial detection frame of each target object in the current frame image;
step 300, obtaining a corresponding calibration image in the calibration video and calibration pose information corresponding to the calibration image according to the actual distance the camera has moved on the set track; the calibration image comprises one or more target objects, and a calibration detection frame and an identification code for each target object;
step 410, in the case that the camera pose information and the calibration pose information are consistent, determining the identification code of the target object in the current frame image by comparing the positions of the initial detection frame and the calibration detection frame.
In the embodiment of the present application, the camera moves along the set track and shoots the current frame image, and target detection is performed on the current frame to obtain the initial detection frame. According to the actual distance the camera has moved on the set track, the calibration image shot at the same position is retrieved from the calibration video; the calibration image carries the calibration detection frame and the identification code of each target object. In the case that the camera pose information and the calibration pose information are consistent, the identification code of the target object in the current frame image is determined by comparing the positions of the initial detection frame and the calibration detection frame. The method of this embodiment therefore performs only target detection on the target object in the image, so that bar codes, two-dimensional codes, text, or other unique identifiers placed on the target object for image recognition can be omitted or reduced, lowering the likelihood that image recognition is affected by environmental conditions; and because calibration is performed against the calibration video, the recognition accuracy of the target object identification code is improved. For example, when the target object is a carrier roller, no bar code, two-dimensional code, text, or other unique mark needs to be placed on the roller surface for image recognition of the roller ID, avoiding recognition errors or reduced accuracy caused by environmental factors such as unstable lighting or a dirty roller surface.
In some optional embodiments, after step 300 of obtaining, according to the actual distance that the camera moves on the set track, the corresponding calibration image in the calibration video and the calibration pose information corresponding to the calibration image, the method further includes: step 420, when the camera pose information and the calibration pose information are inconsistent, converting the initial detection frame into a calibrated detection frame according to the camera pose information and the calibration pose information, and determining the identification code of the target object in the current frame image by comparing the positions of the calibrated detection frame and the calibration detection frame. In this embodiment, the camera moves on the set track and shoots the current frame image, and target detection is performed on the current frame image to obtain the initial detection frames; according to the actual distance the camera has moved on the set track, the calibration image shot at the same position is retrieved from the calibration video, the calibration image carrying the calibration detection frames and the identification codes of the target objects; the initial detection frame is then adjusted and matched against the calibration detection frames, and if an adjusted frame matches a calibration detection frame, the identification code of the target object in that initial detection frame is the identification code corresponding to that calibration detection frame.
The method of this embodiment therefore only performs target detection on the target object in the image; bar codes, two-dimensional codes, characters or other unique identifiers attached to the target object for image recognition are no longer needed, or are needed less, which reduces the chance that image recognition is disturbed by environmental conditions, and calibration against the calibration video improves the recognition accuracy of the target object identification code.
In some alternative embodiments, before the acquiring of the current frame image containing one or more target objects in step 100, the method further includes:
acquiring a calibration video, calibration pose information corresponding to each frame of image in the calibration video, and a calibration distance of the camera's movement on the set track; the calibration video is obtained by moving the camera on the set track and shooting.
In the embodiment of the application, the calibration video is set up once, and subsequent identifications of the target object identification code are all performed against this same calibration video. The calibration video is likewise obtained by moving the camera on the set track and shooting; during shooting, the calibration pose information corresponding to each frame of image and the calibration distance of the camera on the set track are stored, and target detection is then performed on each frame of image in the calibration video to obtain the calibration detection frames. Finally, each calibration detection frame is labeled with the correct identification code, either by manual labeling or by automatic labeling followed by manual confirmation.
In some optional embodiments, step 300 of obtaining, according to the actual distance that the camera moves on the set track, the corresponding calibration image in the calibration video and the calibration pose information corresponding to the calibration image includes:
Step 310, obtaining the calibration distance consistent with the value of the actual distance, and determining the corresponding calibration image according to the calibration distance;
Step 320, obtaining the corresponding calibration pose information according to the calibration image.
In this embodiment of the application, among the calibration distances stored for each frame of image in the calibration video, the calibration distance closest to the actual distance is found, and the calibration image corresponding to that calibration distance is the image used to calibrate the current frame image.
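As a sketch, the nearest-distance lookup described above can be implemented as a binary search over the stored calibration distances, assuming they are sorted in ascending track order (the camera only moves forward while the calibration video is shot); the function and argument names below are illustrative, not taken from the patent.

```python
import bisect

def find_calibration_frame(calib_distances, actual_distance):
    """Return the index of the calibration-video frame whose recorded
    track distance is closest to the camera's actual distance.
    Assumes calib_distances is sorted ascending, one entry per frame."""
    i = bisect.bisect_left(calib_distances, actual_distance)
    # the closest entry is either just before or at the insertion point
    candidates = [j for j in (i - 1, i) if 0 <= j < len(calib_distances)]
    return min(candidates, key=lambda j: abs(calib_distances[j] - actual_distance))
```

The returned index then selects both the calibration image and its stored calibration pose information.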
In some optional embodiments, step 420 of converting the initial detection frame into the calibrated detection frame according to the camera pose information and the calibration pose information specifically includes:
Step 421, obtaining the camera angle deviation according to the camera pose information and the calibration pose information;
Step 422, adjusting the coordinates of the initial detection frame according to the camera angle deviation to obtain the calibrated detection frame.
In the embodiment of the application, during actual detection, the camera pose information recorded when the camera shoots the current frame image and the calibration pose information recorded when the camera shot the calibration image at the same position in the calibration video are used to obtain the camera angle deviation between the two shots. The coordinates of the initial detection frame are adjusted according to this camera angle deviation to obtain the calibrated detection frame, so that its position agrees as closely as possible with that of the corresponding calibration detection frame in the calibration video.
In some alternative embodiments, the camera angular deviation comprises: a first deviation in the vertical direction of the image, a second deviation in the lateral direction of the image, a third deviation in the optical axis direction of the camera; the first deviation is the difference between the rotation angle of the current camera around the y axis and the rotation angle of the camera around the y axis in the calibration video, and the y axis is the vertical direction of the image; the second deviation is the difference between the rotation angle of the current camera around the x axis and the rotation angle of the camera around the x axis in the calibration video, and the x axis is the transverse direction of the image; the third deviation is the difference between the rotation angle of the current camera around the z axis and the rotation angle of the camera around the z axis in the calibration video, and the z axis is the direction of the optical axis of the camera.
Correspondingly, in step 422, the coordinates of the initial detection frame are adjusted according to the camera angle deviation, so as to obtain a calibrated detection frame, which includes:
Step 4221, obtaining image lateral translation pixels and image vertical translation pixels according to the camera angle deviation;
Step 4222, moving the initial detection frame according to the image lateral translation pixels and the image vertical translation pixels to obtain the calibrated detection frame.
In some alternative embodiments, step 4221 of obtaining the image lateral translation pixels and the image vertical translation pixels according to the camera angle deviation includes: obtaining the first component of the image lateral translation pixels according to the first deviation and the image focal length; obtaining the first component of the image vertical translation pixels according to the second deviation and the image focal length; and obtaining the second component of the image lateral translation pixels and the second component of the image vertical translation pixels according to the third deviation and the coordinates of the center point of the detection frame. The value of the image lateral translation pixels equals the sum of its first and second components, and the value of the image vertical translation pixels equals the sum of its first and second components.
Specifically, the image lateral translation pixels dxc:
dxc = F×tan(a) + x0 - x0×cos(c) + y0×sin(c)
and the image vertical translation pixels dyc:
dyc = F×tan(b) + y0 - x0×sin(c) - y0×cos(c);
where dxc is the image lateral translation pixels, dyc is the image vertical translation pixels, x0 is the abscissa of the center point of the detection frame, y0 is the ordinate of the center point of the detection frame, F is the image focal length, a is the first deviation about the image vertical direction y, b is the second deviation about the image lateral direction x, and c is the third deviation about the camera optical axis direction z.
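As a sketch, the translation pixels can be computed as below, decomposing the rotation about the optical axis with the standard 2-D rotation of the point (x0, y0), so the vertical rotation terms are x0·sin(c) and y0·cos(c); the function name and argument order are illustrative assumptions.

```python
import math

def translation_pixels(F, a, b, c, x0, y0):
    """Total lateral/vertical pixel translation implied by the camera
    angle deviations a, b, c (radians) for a detection-frame center
    (x0, y0). Illustrative sketch; relies on the small deviations
    guaranteed by the robot's repeated positioning accuracy."""
    # tilt about the image-vertical axis (a) shifts the image laterally,
    # tilt about the image-lateral axis (b) shifts it vertically
    dxa = F * math.tan(a)
    dyb = F * math.tan(b)
    # rotation by c about the optical axis maps (x0, y0) to
    # (x0*cos c - y0*sin c, x0*sin c + y0*cos c); the equivalent
    # translation is the difference from the original point
    dxc = x0 - x0 * math.cos(c) + y0 * math.sin(c)
    dyc = y0 - x0 * math.sin(c) - y0 * math.cos(c)
    return dxa + dxc, dyb + dyc
```

With a = b = c = 0 the translation is zero, as expected, and the returned pair corresponds to the total dxc and dyc defined above.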
In some optional embodiments, step 500 of determining the identification code of the target object in the current frame image by comparing the positions of the calibrated detection frame and the calibration detection frame specifically includes: calculating the intersection-over-union (IoU) ratio of the calibrated detection frame with each calibration detection frame, and matching the calibrated detection frame with the calibration detection frame yielding the largest IoU value; the identification code of the target object corresponding to that calibration detection frame is the identification code of the target object corresponding to the calibrated detection frame.
In some alternative embodiments, calculating the intersection-over-union ratio of the calibrated detection frame and a calibration detection frame includes: determining the union area and the intersection area of the two detection frames according to their positions, and obtaining the intersection-over-union ratio of the two detection frames from the intersection area and the union area.
The identification method is suitable for identifying the IDs of the carrier rollers on a conveyor belt, but it is clearly also applicable to identifying the identification codes of other targets, and is mainly suited to scenes containing multiple targets, such as identifying the pier IDs of a bridge. In the following examples, identification of the carrier roller ID is described in detail.
Referring to fig. 2, fig. 2 is a schematic diagram of an inspection robot identifying the carrier roller IDs and of a plurality of carrier rollers on a conveyor belt according to the present embodiment. A camera is mounted on the inspection robot, and the inspection robot moves on a rail, so that the rail realizes the movement of the camera along the set track.
First, the calibration video is acquired when the system is deployed for the first time. Because the image detection algorithm may miss detections or produce false detections, manual calibration is required to prevent such errors from affecting the subsequent carrier roller ID counting. The data structure of the calibration video includes the frame number of each frame of image and, for each frame, the track distance the camera has moved on the rail (i.e. the calibration distance), the calibration attitude angle of the camera, the carrier roller positions (i.e. the positions of the carrier roller detection frames) and other information.
To cope with the unavoidable missed and false detections of the image detection algorithm, and to avoid one error corrupting all subsequent carrier roller ID counts, the method manually calibrates the video acquired at first deployment and builds the structure above. The fields of the data structure are expanded below:
frame number (X): each frame of the calibration video has a unique identification frame number for distinguishing between different frame images in the calibration video.
Track distance (D): the track distance of each frame of image, i.e. the position of the camera on the rail relative to the starting point, is recorded.
Calibration attitude angle (K): the attitude angle information of each frame of image is recorded, used to determine the pose orientation of the inspection robot.
Idler position: the idler position information detected in each frame of image is recorded, typically in a rectangular or bounding box, for marking the position of the idler in the image. For example: position of the first idler: (x 11, y 11) (x 12, y 12); position of the second idler: (x 21, y 21) (x 22, y 22); position of the third idler: (x 31, y 31) (x 32, y 32), …, position of nth idler: (xn 1, yn 1) (xn 2, yn 2).
Idler ID: in the calibration video, each idler is manually marked with a unique ID number for establishing the registration ID of the idler. These IDs will serve as reference criteria for the subsequent identification process.
Setting up the calibration video is a key step: manually marking the carrier roller IDs ensures that the IDs in the video are accurate. The structured calibration data records the position, angle and ID of each idler in the video explicitly. During subsequent runs of the inspection robot, the real-time data are compared and matched against the calibration data, and the carrier roller ID can be calibrated according to the angle deviation, improving the accuracy of carrier roller ID identification. This calibration method reduces the risk of error propagation: even if the image detection algorithm makes detection errors, the subsequent identification process remains highly accurate as long as the calibration data are accurate. An effective calibration step thus overcomes the limitations of the image detection algorithm and improves the reliability and stability of carrier roller ID identification.
When the inspection robot actually runs, the track distance travelled so far is obtained, and the frame number closest to the current position is looked up in the calibration video according to that track distance. This ensures that the calibration information accurately corresponds to the current position.
From the frame number closest to the current position, the theoretical position of the current carrier roller in the calibration video is obtained, together with information such as the attitude angle and track distance.
A gyroscope is mounted at the camera position of the inspection robot to detect changes in the camera angle. By comparing the current gyroscope angle with the gyroscope angle of the corresponding frame in the calibration video, the angle deviation between the current robot's camera and the robot's camera in the calibration video is calculated. The coordinates of the initial detection frame of the current carrier roller are then adjusted using this camera angle deviation to obtain the theoretical position of the calibration carrier roller in the image, i.e. the calibrated detection frame. The steps are as follows:
First, the current camera pose information is compared with the calibration pose information of the corresponding frame in the calibration video. Taking the optical axis direction of the camera of the inspection robot as the z direction, the lateral direction of the image as the x direction, and the vertical direction of the image as the y direction, the angle deviations of the current camera relative to the camera in the calibration video about the y, x and z axes are calculated as a, b and c respectively; F is the image focal length.
Since a, b and c are generally small (the repeated positioning accuracy of the inspection robot keeps them below 5 degrees), the offsets can be calculated with an approximate algorithm.
Assuming that the center position of the current frame is (x0, y0), the number of pixels the current frame is offset in the x direction is calculated as dxa = F×tan(a), and in the y direction as dyb = F×tan(b).
Then, for the rotation by angle c about the z axis, the current frame rotates by angle c about the center of the image; this rotation is decomposed into translations on the x and y axes of dxc = x0 - x0×cos(c) + y0×sin(c) and dyc = y0 - x0×sin(c) - y0×cos(c).
Thus, the current frame is offset by dx = dxa + dxc pixels in the x direction and dy = dyb + dyc pixels in the y direction.
As shown in fig. 3, the initial detection frame coordinates of the carrier roller in the current frame image are calibrated by a translational adjustment derived from the camera angle deviation, yielding the theoretical position of the carrier roller (i.e. the calibrated detection frame in the figure), so that it agrees as closely as possible with the position of the calibration detection frame in the calibration video. Through this step, the carrier roller position in the current frame image is brought closer to its position in the calibration video, improving the accuracy and stability of carrier roller ID identification. The calibrated position information is then used in the subsequent carrier roller ID matching and identification process, making the identification result more accurate and reliable.
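Given the pixel offsets (dx, dy) obtained from the camera angle deviation, calibrating an initial detection frame reduces to shifting both of its corner points. A minimal sketch, with illustrative names:

```python
def shift_box(box, dx, dy):
    """Translate an initial detection frame (x1, y1, x2, y2) by the
    pixel offsets derived from the camera angle deviation, yielding
    the calibrated detection frame. The sign convention for (dx, dy)
    is an assumption here; it must match how the deviation was taken."""
    x1, y1, x2, y2 = box
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
```

The box size is unchanged; only its position moves, matching the translational adjustment in fig. 3.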
The calibration detection frames in the calibration video are compared with the carrier roller detection frame of the current image frame, and the area of the overlapping part is calculated. The carrier roller ID corresponding to the calibration-video detection frame that overlaps most with the current frame's carrier roller detection frame is set as the ID of the current carrier roller, i.e. the registered ID of the carrier roller. The steps are as follows:
The overlap area between the current frame's carrier roller detection frame and each detection frame in the calibration video is calculated. A common measure of overlap is the intersection-over-union ratio (Intersection over Union, IoU), with the following formula:
IoU = (intersection area)/(union area)
The calibration-video detection frame with the highest overlap, i.e. the one with the largest IoU value, is found; this is the calibration-video detection frame that best matches the current frame's carrier roller detection frame.
The carrier roller ID corresponding to that calibration-video detection frame is set as the registered ID of the current frame's carrier roller, i.e. the unique identifier of the current carrier roller.
Note that, based on the IoU calculation, a threshold may be set according to specific requirements to decide when two detection frames are considered matched. If the IoU exceeds the set threshold, the two detection frames are considered matched and the carrier roller ID of the calibration-video detection frame is assigned to the current frame's carrier roller; otherwise no match is made, and the result may be processed further or marked as a new carrier roller ID. Through these steps, the carrier roller ID of the current frame is calibrated, improving the accuracy of carrier roller ID identification, and the calibrated ID is used in the subsequent carrier roller management and tracking process.
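The IoU computation and the threshold-gated matching described above can be sketched as follows; the 0.5 threshold and all names are illustrative assumptions, not values fixed by the patent.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_idler_id(current_box, calib_boxes, calib_ids, threshold=0.5):
    """Assign the ID of the best-overlapping calibration box, or None
    when no calibration box clears the threshold (candidate new idler)."""
    best = max(range(len(calib_boxes)), key=lambda i: iou(current_box, calib_boxes[i]))
    return calib_ids[best] if iou(current_box, calib_boxes[best]) >= threshold else None
```

A None result corresponds to the "no match" branch above, where the detection may be processed further or registered as a new carrier roller ID.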
In summary, this scheme requires neither implanting a chip in each carrier roller nor attaching a special label to it, avoiding physical modification of the rollers and facilitating their sustainable use and recycling. Through the calibration video and the manually marked carrier roller IDs, the scheme overcomes, to a certain extent, the missed and false detections of the image detection algorithm, improving the accuracy of carrier roller ID identification. By comparing real-time data with the calibration video, position and angle deviations of the rollers can be corrected, effectively improving identification accuracy. The calibration is performed in real time: the inspection robot calibrates the carrier roller ID according to its current angle as it runs. Meanwhile, the flexibility of the image recognition algorithm allows the method to meet carrier roller ID identification needs under a variety of conditions. In short, through the image recognition algorithm and the calibration method, the scheme overcomes several limitations of carrier roller ID identification, improves the accuracy and stability of identification, realizes contact-free, highly accurate and widely applicable carrier roller ID identification, and provides an effective solution for carrier roller management, logistics transportation and related fields.
Referring to fig. 4, fig. 4 is a functional block diagram of an identification device of an identification code of a target object according to an embodiment of the present application, including: the system comprises an acquisition module 1, a target detection module 2, a query module 3 and a comparison module 4.
The acquisition module 1 is used for acquiring a current frame image containing one or more target objects and the camera pose information corresponding to the current frame image; the current frame image is obtained by the camera moving on a set track and shooting. The target detection module 2 is configured to perform target detection on the current frame image to obtain an initial detection frame of each target object in the current frame image. The query module 3 is used for obtaining the corresponding calibration image in the calibration video and the calibration pose information corresponding to the calibration image according to the actual distance the camera moves on the set track; the calibration image contains one or more target objects, together with the calibration detection frame and identification code of each target object. The comparison module 4 is used for determining the identification code of the target object in the current frame image according to the position comparison of the initial detection frame and the calibration detection frame when the camera pose information and the calibration pose information are consistent.
In some alternative embodiments, the comparison module 4 is further configured to: when the camera pose information and the calibration pose information are inconsistent, convert the initial detection frame into a calibrated detection frame according to the camera pose information and the calibration pose information; and determine the identification code of the target object in the current frame image according to the position comparison of the calibrated detection frame and the calibration detection frame.
In some alternative embodiments, the device further comprises: a calibration video acquisition module, used for acquiring a calibration video, the calibration pose information corresponding to each frame of image in the calibration video, and the calibration distance of the camera's movement on the set track; the calibration video is obtained by the camera moving on the set track and shooting.
In some alternative embodiments, the query module 3 is further configured to: acquire the calibration distance consistent with the value of the actual distance, and determine the corresponding calibration image according to the calibration distance; and obtain the corresponding calibration pose information according to the calibration image.
In some alternative embodiments, the comparison module 4 is further configured to: obtain the camera angle deviation according to the camera pose information and the calibration pose information; and adjust the coordinates of the initial detection frame according to the camera angle deviation to obtain the calibrated detection frame.
In some alternative embodiments, the camera angle deviation comprises: a first deviation in the vertical direction of the image, a second deviation in the lateral direction of the image, and a third deviation in the optical axis direction of the camera; the comparison module 4 is further configured to: obtain the image lateral translation pixels and the image vertical translation pixels according to the camera angle deviation; and move the initial detection frame according to the image lateral translation pixels and the image vertical translation pixels to obtain the calibrated detection frame.
In some alternative embodiments, the comparison module 4 is further configured to: calculate the intersection-over-union (IoU) ratio of the calibrated detection frame with each calibration detection frame, and match the calibrated detection frame with the calibration detection frame yielding the largest IoU value; the identification code of the target object corresponding to that calibration detection frame is the identification code of the target object corresponding to the calibrated detection frame.
In some alternative embodiments, the comparison module 4 is further configured to: determine the union area and the intersection area of the two detection frames according to the positions of the calibrated detection frame and the calibration detection frame; and obtain the intersection-over-union ratio of the two detection frames from the intersection area and the union area.
Fig. 5 shows a possible structure of the electronic device provided in the embodiment of the present application. Referring to fig. 5, the electronic device includes: processor 51, memory 52, and communication interface 53, which are interconnected and communicate with each other by a communication bus 54 and/or other forms of connection mechanisms (not shown).
The memory 52 includes one or more (only one is shown in the figure), which may be, but is not limited to, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), and the like. The processor 51 and possibly other components may access the memory 52 to read and/or write data therein.
The processor 51 includes one or more (only one shown), which may be an integrated circuit chip with signal processing capability. The processor 51 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a micro control unit (Micro Controller Unit, MCU), a network processor (Network Processor, NP) or another conventional processor; it may also be a special-purpose processor, including a neural network processor (Neural-network Processing Unit, NPU), a graphics processor (Graphics Processing Unit, GPU), a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component. Moreover, when there are multiple processors 51, some of them may be general-purpose processors and the others special-purpose processors.
The communication interface 53 includes one or more (only one shown) that may be used to communicate directly or indirectly with other devices for data interaction. Communication interface 53 may include an interface for wired and/or wireless communication.
One or more computer program instructions may be stored in memory 52 that may be read and executed by processor 51 to implement the methods provided by embodiments of the present application.
It will be appreciated that the configuration shown in fig. 5 is merely illustrative, and that the electronic device may also include more or fewer components than shown in fig. 5, or have a different configuration than shown in fig. 5. The components shown in fig. 5 may be implemented in hardware, software, or a combination thereof. The electronic device may be a physical device such as a PC, a notebook, a tablet, a cell phone, a server, an embedded device, etc., or may be a virtual device such as a virtual machine, a virtualized container, etc. The electronic device is not limited to a single device, and may be a combination of a plurality of devices or a cluster of a large number of devices.
The present embodiments also provide a computer readable storage medium having stored thereon computer program instructions that, when read and executed by a processor of a computer, perform the methods provided by the embodiments of the present application. For example, the computer readable storage medium may be implemented as memory 52 in the electronic device of FIG. 5.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
Further, the units described as separate units may or may not be physically separate, and units displayed as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, functional modules in various embodiments of the present application may be integrated together to form a single portion, or each module may exist alone, or two or more modules may be integrated to form a single portion.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application, and various modifications and variations may be suggested to one skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (12)

1. A method of identifying an object identification code, comprising:
acquiring a current frame image containing one or more targets and camera pose information corresponding to the current frame image; the current frame image is obtained by moving and shooting a camera on a set track;
performing target detection on the current frame image to obtain an initial detection frame of each target object in the current frame image;
obtaining a corresponding calibration image in a calibration video and calibration posture information corresponding to the calibration image according to the actual distance of the camera moving on the set track; the calibration image comprises one or more targets, and a calibration detection frame and an identification code of each target;
in the case that the camera pose information and the calibration pose information are consistent, determining the identification code of the target object in the current frame image according to the position comparison of the initial detection frame and the calibration detection frame.
2. The method of claim 1, wherein, after obtaining the calibration image corresponding to the calibration video and the calibration pose information corresponding to the calibration image according to the actual distance the camera has moved along the set track, the method further comprises:
in a case where the camera pose information is inconsistent with the calibration pose information, converting the initial detection frame into a calibrated detection frame according to the camera pose information and the calibration pose information; and determining the identification code of the target object in the current frame image by comparing the positions of the calibrated detection frame and the calibration detection frame.
3. The method of claim 1, further comprising, before the acquiring of the current frame image containing one or more target objects:
acquiring a calibration video, calibration pose information corresponding to each frame image in the calibration video, and calibration distances of the camera moving along the set track;
wherein the calibration video is captured by the camera moving along the set track.
4. The method of claim 3, wherein the obtaining of the calibration image corresponding to the calibration video and the calibration pose information corresponding to the calibration image according to the actual distance the camera has moved along the set track comprises:
finding the calibration distance whose value matches the actual distance, and determining the corresponding calibration image according to that calibration distance;
and obtaining the corresponding calibration pose information according to the calibration image.
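The lookup in claim 4 amounts to finding the recorded calibration distance that matches the actual distance travelled and taking the calibration frame at that index. A minimal sketch, assuming one recorded distance per calibration frame and a small matching tolerance (`tol` is an assumption, not from the claims):

```python
def lookup_calibration_frame(actual_distance, calibration_distances, tol=1e-6):
    """Return the index of the calibration frame whose recorded distance
    matches the actual travelled distance (claim 4); None if no match.
    calibration_distances: per-frame distances recorded per claim 3."""
    for i, d in enumerate(calibration_distances):
        if abs(d - actual_distance) <= tol:
            return i
    return None
```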
5. The method of claim 2, wherein the converting of the initial detection frame into a calibrated detection frame according to the camera pose information and the calibration pose information comprises:
obtaining a camera angle deviation from the camera pose information and the calibration pose information;
and adjusting the coordinates of the initial detection frame according to the camera angle deviation to obtain the calibrated detection frame.
6. The method of claim 5, wherein the camera angle deviation comprises: a first deviation about the image vertical direction, a second deviation about the image lateral direction, and a third deviation about the camera optical-axis direction;
and the adjusting of the coordinates of the initial detection frame according to the camera angle deviation to obtain the calibrated detection frame comprises:
obtaining an image horizontal translation pixel and an image vertical translation pixel from the camera angle deviation;
and shifting the initial detection frame by the image horizontal translation pixel and the image vertical translation pixel to obtain the calibrated detection frame.
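The final step of claim 6 translates the whole initial detection frame by the two pixel offsets. A sketch assuming frames are stored as (x1, y1, x2, y2) corner coordinates and the offsets are added directly (the sign convention is an assumption):

```python
def shift_box(box, dxc, dyc):
    """Shift a detection frame by the image horizontal (dxc) and
    vertical (dyc) translation pixels, per claim 6."""
    x1, y1, x2, y2 = box
    return (x1 + dxc, y1 + dyc, x2 + dxc, y2 + dyc)
```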
7. The method of claim 6, wherein the obtaining of the image horizontal translation pixel and the image vertical translation pixel from the camera angle deviation comprises:
obtaining a first component of the image horizontal translation pixel from the first deviation and the image focal length;
obtaining a first component of the image vertical translation pixel from the second deviation and the image focal length;
obtaining a second component of the image horizontal translation pixel and a second component of the image vertical translation pixel from the third deviation and the coordinates of the center point of the detection frame;
wherein the value of the image horizontal translation pixel is equal to the sum of its first and second components, and the value of the image vertical translation pixel is equal to the sum of its first and second components.
8. The method of claim 6, wherein the image horizontal translation pixel is:
dxc = F×tan(a) + x0 - x0×cos(c) + y0×sin(c);
and the image vertical translation pixel is:
dyc = F×tan(b) + y0 - x0×cos(c) - y0×sin(c);
wherein dxc is the image horizontal translation pixel, dyc is the image vertical translation pixel, x0 and y0 are the abscissa and ordinate of the center point of the detection frame, F is the image focal length, a is the first deviation about the image vertical direction y, b is the second deviation about the image lateral direction x, and c is the third deviation about the camera optical-axis direction z.
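A direct transcription of the claim-8 formulas in Python, with the angle deviations in radians and F and the center coordinates in pixels. Note that the dyc rotation terms are taken verbatim from the claim text; as printed, dyc does not vanish when c = 0 (a residual y0 - x0 remains), which may be a transcription artifact of the published translation:

```python
import math

def translation_pixels(F, a, b, c, x0, y0):
    """Image translation pixels for a detection-frame center (x0, y0),
    transcribed verbatim from the claim-8 formulas.
    F: image focal length (pixels); a, b, c: first/second/third
    camera angle deviations (radians)."""
    dxc = F * math.tan(a) + x0 - x0 * math.cos(c) + y0 * math.sin(c)
    dyc = F * math.tan(b) + y0 - x0 * math.cos(c) - y0 * math.sin(c)
    return dxc, dyc
```

With c = 0, dxc reduces to the pure pinhole shift F·tan(a), matching the first-component decomposition of claim 7.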
9. The method of claim 2, wherein the determining of the identification code of the target object in the current frame image by comparing the positions of the calibrated detection frame and the calibration detection frame comprises:
calculating the intersection-over-union (IoU) ratio of each calibrated detection frame with each calibration detection frame; the calibration detection frame for which the ratio is maximal is matched with the calibrated detection frame, and the identification code of the target object in that calibration detection frame is taken as the identification code of the target object in the matched calibrated detection frame.
10. The method of claim 9, wherein the calculating of the intersection-over-union ratio of the calibrated detection frame and the calibration detection frame comprises:
determining the union area and the intersection area of the two detection frames according to the positions of the calibrated detection frame and the calibration detection frame;
and obtaining the intersection-over-union ratio of the two detection frames from the intersection area and the union area.
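The intersection and union areas of claim 10, sketched for axis-aligned detection frames given as (x1, y1, x2, y2) corner coordinates (representation assumed):

```python
def iou(box_a, box_b):
    """Intersection-over-union ratio of two detection frames per claim 10:
    intersection area divided by union area."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

Per claim 9, a calibrated detection frame would then be matched to the calibration detection frame maximizing this ratio, e.g. `max(calibration_boxes, key=lambda b: iou(box, b))`.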
11. An apparatus for identifying a target identification code, comprising:
an acquisition module configured to acquire a current frame image containing one or more target objects and camera pose information corresponding to the current frame image, the current frame image being captured by a camera moving along a set track;
a target detection module configured to perform target detection on the current frame image to obtain an initial detection frame for each target object in the current frame image;
a query module configured to obtain a corresponding calibration image from a calibration video, and calibration pose information corresponding to the calibration image, according to the actual distance the camera has moved along the set track, the calibration image containing one or more target objects and a calibration detection frame and an identification code for each target object;
and a comparison module configured to determine, in a case where the camera pose information is consistent with the calibration pose information, the identification code of the target object in the current frame image by comparing the positions of the initial detection frame and the calibration detection frame.
12. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, performs the method according to any one of claims 1-10.
CN202311403576.8A 2023-10-27 2023-10-27 Target identification code identification method, device and computer readable storage medium Active CN117151140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311403576.8A CN117151140B (en) 2023-10-27 2023-10-27 Target identification code identification method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311403576.8A CN117151140B (en) 2023-10-27 2023-10-27 Target identification code identification method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN117151140A CN117151140A (en) 2023-12-01
CN117151140B true CN117151140B (en) 2024-02-06

Family

ID=88884614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311403576.8A Active CN117151140B (en) 2023-10-27 2023-10-27 Target identification code identification method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117151140B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106643699A (en) * 2016-12-26 2017-05-10 影动(北京)科技有限公司 Space positioning device and positioning method in VR (virtual reality) system
KR101793975B1 (en) * 2016-09-13 2017-11-07 서강대학교산학협력단 Method and apparatus of camera tracking for streaming images from depth camera
CN110415269A (en) * 2019-07-19 2019-11-05 浙江大学 A target tracking algorithm under dynamic and static backgrounds
WO2020103476A1 (en) * 2018-11-22 2020-05-28 北京哆咪大狮科技有限公司 Piano key action identification system
CN111461222A (en) * 2020-04-01 2020-07-28 北京爱笔科技有限公司 Method and device for acquiring target object track similarity and electronic equipment
JP2021081989A (en) * 2019-11-19 2021-05-27 アイシン精機株式会社 Camera calibration device
CN113256732A (en) * 2021-04-19 2021-08-13 安吉智能物联技术有限公司 Camera calibration and pose acquisition method
CN113269098A (en) * 2021-05-27 2021-08-17 中国人民解放军军事科学院国防科技创新研究院 Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle
WO2021217398A1 (en) * 2020-04-28 2021-11-04 深圳市大疆创新科技有限公司 Image processing method and apparatus, movable platform and control terminal therefor, and computer-readable storage medium
CN113657256A (en) * 2021-08-16 2021-11-16 大连海事大学 Unmanned ship-borne unmanned aerial vehicle sea-air cooperative visual tracking and autonomous recovery method
WO2022142417A1 (en) * 2020-12-31 2022-07-07 深圳云天励飞技术股份有限公司 Target tracking method and apparatus, electronic device, and storage medium
KR20220100765A (en) * 2021-01-08 2022-07-18 최은정 Method of recognizing motion of golf ball and club in fast camera image and apparatus of analyzing golf motion using the same
CN114897683A (en) * 2022-04-25 2022-08-12 深圳信路通智能技术有限公司 Method, device and system for acquiring vehicle-side image and computer equipment
CN116681965A (en) * 2023-05-19 2023-09-01 智道网联科技(北京)有限公司 Training method of target detection model and target detection method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345596A (en) * 2018-09-19 2019-02-15 百度在线网络技术(北京)有限公司 Multisensor scaling method, device, computer equipment, medium and vehicle
CN113989450B (en) * 2021-10-27 2023-09-26 北京百度网讯科技有限公司 Image processing method, device, electronic equipment and medium


Also Published As

Publication number Publication date
CN117151140A (en) 2023-12-01

Similar Documents

Publication Publication Date Title
US11003940B2 (en) System and methods for automatic solar panel recognition and defect detection using infrared imaging
US10878372B2 (en) Method, system and device for association of commodities and price tags
CN105512587B (en) System and method for tracking optical codes
CN111462200A (en) Cross-video pedestrian positioning and tracking method, system and equipment
CN109977770B (en) Automatic tracking shooting method, device, system and storage medium
CN111814752B (en) Indoor positioning realization method, server, intelligent mobile device and storage medium
CN111860352B (en) Multi-lens vehicle track full tracking system and method
CN103675588A (en) Printed circuit element polarity machine vision detection method and device
CN102656532A (en) Map generating and updating method for mobile robot position recognition
JPH09322155A (en) Method and device for monitoring video
CN103069796A (en) Method for counting objects and apparatus using a plurality of sensors
WO2022161186A1 (en) Method and apparatus for movable robot to adjust pose of goods rack
Liu et al. Research on deviation detection of belt conveyor based on inspection robot and deep learning
CN109509233B (en) PTZ camera target tracking method, system and device based on RFID label position information
CN111611871B (en) Image recognition method, apparatus, computer device, and computer-readable storage medium
Liang et al. A novel inertial-aided visible light positioning system using modulated LEDs and unmodulated lights as landmarks
CN117151140B (en) Target identification code identification method, device and computer readable storage medium
CN109977853B (en) Underground worker panoramic monitoring method based on multiple identification devices
Huang et al. Truck‐Lifting Prevention System Based on Vision Tracking for Container‐Lifting Operation
CN112906643A (en) License plate number identification method and device
CN113763466A (en) Loop detection method and device, electronic equipment and storage medium
CN117347996A (en) Target relay method, system and equipment for continuous radar area
CN113932793B (en) Three-dimensional coordinate positioning method, three-dimensional coordinate positioning device, electronic equipment and storage medium
CN113345017B (en) Method for assisting visual SLAM by using mark
CN112802112A (en) Visual positioning method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant