CN111354018B - Object identification method, device and system based on image

Object identification method, device and system based on image

Info

Publication number
CN111354018B
CN111354018B
Authority
CN
China
Prior art keywords
luminous
point
points
image
infrared characteristic
Prior art date
Legal status
Active
Application number
CN202010150926.4A
Other languages
Chinese (zh)
Other versions
CN111354018A (en)
Inventor
田地
丁彬彬
Current Assignee
Hefei Weier Huibo Technology Co ltd
Original Assignee
Hefei Weier Huibo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hefei Weier Huibo Technology Co ltd filed Critical Hefei Weier Huibo Technology Co ltd
Priority to CN202010150926.4A priority Critical patent/CN111354018B/en
Publication of CN111354018A publication Critical patent/CN111354018A/en
Application granted granted Critical
Publication of CN111354018B publication Critical patent/CN111354018B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T3/08
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image

Abstract

The invention provides an object identification method, device and system based on images. Luminous bodies characterized by infrared characteristic points are arranged on a plurality of target objects; first image information of the target objects is obtained, the first image information at least comprising the infrared characteristic point image information of the luminous bodies; and the information represented by the infrared characteristic point image information in the first image information is analyzed to identify each target object. The correspondence between the luminous points in the image and the luminous bodies in reality is thereby effectively determined, and the coordinates of each target object are solved from the coordinates of the head and tail points of the luminous body with which the correspondence is established, so that the target object can be conveniently tracked. The luminous bodies are extremely small and cheap to manufacture, so the method can be used on a large scale. In the prior art, when there are multiple objects to be positioned or multiple luminous bodies, the correspondence between the luminous points in the image acquired by the camera and the luminous bodies in reality cannot be determined, and identity recognition of the objects to be positioned cannot be realized.

Description

Object identification method, device and system based on image
Technical Field
The invention belongs to the technical field of VR visual positioning, and particularly relates to an object identification method, device and system based on images.
Background
Positioning technology is widely applied in fields such as virtual reality, augmented reality and motion capture, and is an important component of human-computer interaction. Visual positioning is a common positioning technology: a luminous body fixed on the object to be positioned is captured by one or more cameras, and the three-dimensional position of the luminous body is thereby determined.
However, when there are multiple positioning objects or multiple luminous bodies, determining the correspondence between the luminous points in the image acquired by the camera and the luminous bodies in reality is a difficult problem in visual positioning; if the correspondence cannot be determined, identity recognition of the luminous body, that is, of the object to be positioned, cannot be realized. The prior-art method is to make each luminous body emit light of a different color and then distinguish them by color through an RGB camera. However, the number of colors that can be easily resolved in an image is limited, and once there are too many luminous bodies some colors become difficult to distinguish; meanwhile, the RGB camera is easily affected by indoor light noise, which often leads to errors. The prior-art method therefore cannot effectively determine the correspondence of the luminous points in the image to the luminous bodies in reality, multiple luminous bodies cannot be tracked, and user experience is degraded; this is a problem urgently to be solved by those skilled in the art.
Chinese patent publication No. CN107101616A discloses an identity recognition method, device and system for positioning objects. A luminous body and an inertial measurement unit (IMU) are respectively arranged on a plurality of positioning objects; the method comprises: collecting current frame images of a plurality of luminous bodies through a binocular camera module; determining the three-dimensional position coordinates of each luminous body relative to the binocular camera module by the binocular imaging principle; determining the correspondence between the luminous points and the IMUs according to the historical track of each luminous point in the current frame image and the historical track of each IMU's orientation; and tracking each positioning object using the three-dimensional position coordinates of the luminous points and the quaternions acquired by the IMUs corresponding to them. However, disposing an IMU on each positioning object often occupies additional space on the object, making the approach unsuitable for some small positioning objects, and IMUs are often expensive, which wastes development cost.
Disclosure of Invention
1. Problems to be solved
Aiming at the prior-art problem that, when a plurality of luminous bodies on objects to be positioned emit light simultaneously, the correspondence between the luminous points in the image acquired by the camera and the luminous bodies in reality cannot be determined and identity recognition of the objects to be positioned cannot be realized, the invention provides an image-based object identification method. Luminous bodies characterized by infrared characteristic points are arranged on a plurality of target objects; first image information of the target objects is obtained, at least comprising the infrared characteristic point image information of the luminous bodies; and the information represented by the infrared characteristic point image information in the first image information is analyzed to identify the target objects, effectively determining the correspondence of the luminous bodies in the image to the luminous bodies in reality. The coordinates of each target object are solved from the coordinates of the head and tail points of the luminous body with which the correspondence is established, so that the target object can be conveniently tracked; the luminous bodies are extremely small and cheap to manufacture, and the method can be used on a large scale.
2. Technical proposal
In order to solve the problems, the invention adopts the following technical scheme.
The first aspect of the present invention provides an image-based object recognition method, in which a luminous body is provided on a target object, the luminous body being characterized by infrared characteristic points, the method comprising:
S100: acquiring first image information of the target object, wherein the first image information at least comprises an infrared characteristic point image of the luminous body;
S200: analyzing the information represented by the infrared characteristic point image in the first image information, and identifying the target object.
Preferably, in step S100, the infrared characteristic points are arranged on a straight line segment, specifically as follows:
the two end points of the straight line segment are infrared characteristic points, defined as the head point and the tail point, and a black point is arranged on the side of the head point close to the tail point, or on the side of the tail point close to the head point, for marking the direction of the luminous body;
the number of points on each straight line segment is fixed, and by controlling the number of points between the infrared characteristic points, each infrared characteristic point group has a unique code.
Preferably, the step S200 includes:
S202: preprocessing the first image information, and then carrying out subdivision searching to obtain a first target point area;
S204: traversing the first target point area, clustering the first target points, and performing straight-line judgment to obtain the straight line segment where the infrared characteristic point set is located;
S206: ordering the infrared characteristic points, determining the distance between adjacent infrared characteristic points, and locating the coordinates of the head and tail infrared characteristic points in the straight line segment;
S208: determining the coded ID represented by the infrared characteristic point group according to the head and tail infrared characteristic point coordinates, and identifying the target object.
Preferably, the preprocessing step includes: graying the first image information, binarizing the first image, applying erosion, and removing noise.
Preferably, the method further includes: performing perspective transformation on the head and tail point coordinates to obtain the target object position coordinates.
Preferably, the clustering step of the first target point includes:
traversing all target points, obtaining the minimum point distance of the target points, and clustering all the target points through the minimum point distance and a recursion algorithm.
Preferably, the first image information of the target object is obtained through a plurality of industrial cameras, a data synchronization mechanism established among the plurality of industrial cameras uniformly managing the positioning data of the plurality of cameras;
camera detection overlapping areas are set to prevent detection blind areas when a target crosses between camera capture areas, and the identified data in overlapping areas are merged to avoid duplicate positioning data.
A second aspect of the present invention provides an image-based object recognition apparatus, wherein a luminous body characterized by infrared characteristic points is provided on each of a plurality of target objects, the apparatus comprising:
the image acquisition unit is used for acquiring first image information of the target object, wherein the first image information at least comprises an infrared characteristic point image of a luminous body; a kind of electronic device with high-pressure air-conditioning system
And the target identification unit is used for analyzing the information represented by the infrared characteristic point image in the first image information and identifying the target object.
A third aspect of the present invention provides an image-based object recognition system comprising:
a plurality of binocular camera modules located at the target object; a kind of electronic device with high-pressure air-conditioning system
A light emitter disposed on a plurality of target objects and the image-based object recognition apparatus described above;
wherein the plurality of binocular camera modules are in communication connection with the image-based object recognition device.
3. Advantageous effects
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention provides an image-based object identification method. Luminous bodies characterized by infrared characteristic points are arranged on a plurality of target objects; first image information of the target objects is obtained, at least comprising the infrared characteristic point image information of the luminous bodies; and the information represented by the infrared characteristic point image information in the first image information is analyzed to identify the target objects, effectively determining the correspondence of the luminous bodies in the image to the luminous bodies in reality. The coordinates of each target object are solved from the coordinates of the head and tail points of the luminous body with which the correspondence is established, so that the target object can be conveniently tracked; the luminous bodies are extremely small and cheap to manufacture, and the method can be used on a large scale;
(2) The detection surface of the luminous body is sealed with an opaque material and perforated, and a 1 mm milky opaque organic glass plate attached to the perforated surface covers the front, shielding the hole openings. The infrared light emitted from the small holes is thereby softened, which prevents camera halation caused by direct light and interference between the light spots from adjacent small holes, so that clearer image information can be obtained, facilitating the processing of the image information and the accurate identification of the target object;
(3) The invention obtains the image information of the target object through a plurality of industrial cameras, and a data synchronization mechanism established among the plurality of industrial cameras uniformly manages the positioning data of the plurality of cameras; camera detection overlapping areas are set to prevent detection blind areas when a target crosses between camera capture areas, and the identified data in overlapping areas are merged to avoid duplicate positioning data, which facilitates later processing of the image information.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate the application and not constitute a limitation to the application. In the drawings, like reference numerals generally refer to like parts or steps. In the accompanying drawings:
FIG. 1 is a schematic flow chart of an object identification method based on an image according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for analyzing image information of an infrared feature point in first image information according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an object recognition device based on an image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an object recognition system based on an image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of four different illuminant ID identifiers in an embodiment of the present invention;
fig. 6 is a flowchart of an industrial camera data synchronization method according to an embodiment of the present invention.
Detailed Description
Embodiments of the technical scheme of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and thus are merely examples, and are not intended to limit the scope of the present invention. It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention pertains.
In this application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. In the description of the present invention, the meaning of "plurality" is two or more unless specifically defined otherwise.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
As used in this specification and the appended claims, the term "if" may be interpreted as "when", "once", "in response to a determination" or "in response to detection", depending on the context.
In particular implementations, the terminals described in embodiments of the invention include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having a touch-sensitive surface (e.g., a touch screen display and/or a touch pad). It should also be appreciated that in some embodiments, the device is not a portable communication device, but a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or a touch pad).
In the following discussion, a terminal including a display and a touch sensitive surface is described. However, it should be understood that the terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal supports various applications, such as one or more of the following: drawing applications, presentation applications, word processing applications, website creation applications, disk burning applications, spreadsheet applications, gaming applications, telephony applications, video conferencing applications, email applications, instant messaging applications, workout support applications, photo management applications, digital camera applications, digital video camera applications, web browsing applications, digital music player applications, and/or digital video player applications.
Various applications that may be executed on the terminal may use at least one common physical user interface device such as a touch sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within the corresponding applications. In this way, the common physical architecture (e.g., touch-sensitive surface) of the terminal may support various applications with user interfaces that are intuitive and transparent to the user.
Exemplary application
The interactive driving project in VR projection places high requirements on positioning the moving trolleys. However, the trolleys are relatively large, their surfaces are mostly painted, and the ground is smooth and reflective; with an ordinary camera plus a fill light, ground noise, reflections and other interference arise while the trolley moves, so the trolley cannot be identified and positioned stably and effectively. Therefore, luminous bodies characterized by infrared characteristic points are arranged on the plurality of moving trolleys, and first image information of the moving trolleys is obtained through industrial cameras, at least comprising the infrared characteristic point image information of the luminous bodies; the information represented by the infrared characteristic point image information in the first image information is analyzed to identify the ID of each moving trolley, effectively determining the correspondence of the luminous bodies in the image to the moving trolleys in reality, and the coordinates of each target object are obtained from the coordinates of the head and tail points of the luminous body with which the correspondence is established, which facilitates tracking the target objects.
Exemplary method
As shown in fig. 1, the present embodiment provides an image-based object recognition method, in which luminous bodies characterized by infrared characteristic points are arranged in advance on a plurality of target objects. The target object in this embodiment is a moving trolley of an interactive driving project in VR projection, and the luminous body is a luminous box; those skilled in the art will appreciate that the positioning object may also be a VR device, a handle, a VR headset, etc., which is not limited here. The method comprises the following steps:
S100: acquiring first image information of the target object, wherein the first image information at least comprises infrared characteristic point image information of the luminous body;
specifically, in step S100, the infrared feature points are arranged on a plurality of straight line segments, two end points of the straight line segments are infrared feature points, and are defined as head and tail points, and black points are arranged on one side of the head point, which is close to the tail point, or the tail point, which is close to the head point, so as to mark the direction of the illuminant; the number of each straight line segment point is fixed, and the infrared characteristic point group has unique codes by controlling the number among the infrared characteristic points.
Further, as shown in fig. 5, specific characteristic points of the light-emitting box in the present embodiment are as follows:
the detection surface of the luminous box is provided with 7 holes with equal intervals, each hole is provided with an infrared luminous point, the luminous and non-luminous points of each hole are controlled, the white color point is a luminous point (infrared characteristic point), the black color is a shading non-luminous point (black point) and is used for identifying different information marks, the open pore surface of the box is attached with a layer of thinner milky organic glass to hide the mark information, and invisible infrared light is used as a luminous source, so that the detection device can be seen as a common box; the infrared luminous points in the holes at the head and the tail always emit light and are used for detecting and calculating the distance between the bisecting holes.
The direction of movement of the luminous box is defined as the positive direction, and the second hole counted from right to left of the box is always a non-luminous hole, used for marking the calculation of the box's direction and angle.
The ID definition rules of the different luminous boxes are as follows: starting from the 1st hole (always lit) counted from right to left in the positive direction of the box, 2 lit holes on the left define ID1, 3 lit holes define ID2, 4 lit holes define ID3, and 5 lit holes define ID4.
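As an illustration, this ID rule can be captured in a small lookup, shown here as a Python sketch; the hole indexing (hole 0 as the always-lit 1st hole, hole 1 as the always-dark direction hole) and the helper names are assumptions made for illustration, not part of the patent.

```python
# Hypothetical encoding table for the 7-hole luminous box described above.
# Holes are indexed right to left; 1 = lit (infrared characteristic point),
# 0 = dark (shading point). Hole 0 and hole 6 (head/tail) are always lit,
# and hole 1 is the always-dark direction mark.
PATTERNS = {
    1: (1, 0, 0, 0, 0, 1, 1),  # ID1: 2 lit holes on the left
    2: (1, 0, 0, 0, 1, 1, 1),  # ID2: 3 lit holes on the left
    3: (1, 0, 0, 1, 1, 1, 1),  # ID3: 4 lit holes on the left
    4: (1, 0, 1, 1, 1, 1, 1),  # ID4: 5 lit holes on the left
}

def decode_id(pattern):
    """Return the box ID for a detected lit/dark pattern, or None."""
    for box_id, holes in PATTERNS.items():
        if tuple(pattern) == holes:
            return box_id
    return None

print(decode_id((1, 0, 0, 0, 1, 1, 1)))  # -> 2
```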
As a variation, the end of the luminous box having only one luminous point (infrared characteristic point) is defined as the head point, the other end has at least two consecutive luminous points, and the ID of the target object is identified from the number of non-luminous points. Those skilled in the art will understand that the ID rule represented by the infrared characteristic points may be changed for different application scenarios, which is not limited here.
The luminous box is arranged on the moving trolley or other equipment and emits infrared light through the small holes; with an infrared optical filter, the industrial camera can easily capture the luminous points. Compared with other fill-light reflective positioning methods, this has the advantages of high adaptability, long detection distance and strong anti-interference capability.
As a variation, the surface of the luminous body (luminous box) is sealed with an opaque material, the detection surface on the front is perforated, and a 1 mm milky opaque organic glass plate attached to the perforated surface covers the front, shielding the hole openings. The infrared light emitted from the small holes is thereby softened, which prevents camera halation caused by direct light and interference between the light spots from adjacent pinholes, so that clearer image information can be obtained, facilitating the processing of the image information and the accurate identification of the target object.
S200: analyzing the information represented by the infrared characteristic point image information in the first image information, and identifying the target object.
Specifically, the step S200 includes:
S202: preprocessing the first image information, and then carrying out subdivision searching to obtain a first target point area;
The preprocessing step comprises: processing the first image information in EmguCV, triggering the image-processing background thread's AutoResetEvent signal, graying the first image information, binarizing the first image, applying erosion, and removing noise, thereby excluding the interference factors introduced by the original environment of the image; the binarization is adaptive binarization of the image through adaptive thresholding (adaptiveThreshold), which simplifies the image and improves speed.
S204: traversing the first target point area, clustering the first target point, and judging straight lines to obtain straight line segments where the infrared characteristic point sets are located;
specifically, traversing all target points in a first target point area, acquiring the minimum point distance of the target points, and clustering all the target points by using a minimum point distance of 1.5 times and a recursion algorithm; and limiting target points in all the clustered sets in a linear range through a linear judgment algorithm to obtain a linear segment where the infrared characteristic point set is located.
S206: ordering the infrared characteristic points, determining the distance between adjacent infrared characteristic points, and positioning the coordinates of the head and tail infrared characteristic points in the straight line segment;
specifically, by sorting the distances from the origin to the target points, the origin in this example refers to the origin coordinates established in the upper left corner of the image in the image captured by the industrial camera; and (3) positioning the head and tail coordinates of the luminous box through calculating the distances between the head and tail adjacent points after sequencing and through the direction rule of the luminous box (the two adjacent infrared characteristic points in the forward direction and the infrared characteristic point in the reverse direction are not adjacent).
S208: determining the coded ID represented by the infrared characteristic point group according to the head and tail infrared characteristic point coordinates, and identifying the target object. That is, excluding the head and tail infrared characteristic points of the straight line segment, the number of always-lit points on the segment is counted; starting from the 1st hole (always lit) counted from right to left, 2 lit holes on the left define ID1, 3 lit holes define ID2, 4 lit holes define ID3, and 5 lit holes define ID4.
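Under this rule, the decoding of S208 reduces to counting the lit points strictly between the head and tail; the arithmetic reading below (ID equals the interior lit-point count) is an inference from the rule, not a formula stated in the patent.

```python
def decode_box_id(lit_points):
    """ID1 has 1 lit point between head and tail, ID2 has 2, and so on."""
    interior = len(lit_points) - 2   # drop the always-lit head and tail
    return interior if 1 <= interior <= 4 else None
```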
As a variation, perspective transformation is performed on the coordinates of the head and tail points to obtain the position coordinates of the target object.
The perspective transformation is

$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = A \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

where $A$ is the perspective transformation matrix, $(x, y, 1)$ is the point to be moved (the source target point) and $(X', Y', Z')$ is the fixed point to which it moves (the target point). This is a transformation from two-dimensional space to three-dimensional space; because the image lies in a two-dimensional plane, the result is divided by $Z'$ to obtain the point on the image:

$$X = \frac{X'}{Z'}, \qquad Y = \frac{Y'}{Z'}$$

Letting $a_{33} = 1$ and expanding the formula above gives, for the case of one point,

$$X = \frac{a_{11}x + a_{12}y + a_{13}}{a_{31}x + a_{32}y + 1}, \qquad Y = \frac{a_{21}x + a_{22}y + a_{23}}{a_{31}x + a_{32}y + 1}$$

so 4 points yield 8 equations, from which $A$ can be solved. Here $(X, Y)$ are the transformed coordinates, and $a_{11}, a_{12}, \ldots, a_{33}$ form the three-dimensional spatial transformation matrix; since a two-dimensional image is processed, the $z$ of the source coordinates $(x, y, z)$ is constant at 1. Computing the perspective transformation with this formula makes the position coordinates of the target object more accurate.
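In OpenCV this amounts to solving the 8 equations from four reference point pairs and applying the resulting matrix to the head and tail points; the calibration values below are hypothetical, not taken from the patent.

```python
import numpy as np
import cv2

# Four image points (pixels) and their known floor positions (mm); these
# reference pairs are made-up calibration values for illustration.
src = np.float32([[102, 83], [918, 79], [935, 701], [88, 707]])
dst = np.float32([[0, 0], [4000, 0], [4000, 3000], [0, 3000]])

A = cv2.getPerspectiveTransform(src, dst)  # solves the 8-equation system for A

def to_floor(point):
    """Apply (X', Y', Z')^T = A (x, y, 1)^T and divide by Z'."""
    p = np.float32([[point]])                    # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, A)[0, 0]  # (X, Y) on the floor plane

head_floor = to_floor((412, 356))  # e.g. a detected head point, in pixels
```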
As a variation, as shown in fig. 6, the first image information of the target object is obtained through a plurality of industrial cameras, and a data synchronization mechanism established among the plurality of industrial cameras uniformly manages the positioning data of the plurality of cameras;
camera detection overlapping areas are set to prevent detection blind areas when a target crosses between camera capture areas, and the identified data in overlapping areas are merged to avoid duplicate positioning data.
Specifically, the multi-camera positioning data are fused. The program is designed to be stackable: one program is set as the main entry program and the others as subprograms, each program is independently assigned an identification area, and adjacent areas are set to overlap for identification and data fusion, preventing cross-area positioning blind spots and data loss. This scheme can realize multi-camera positioning over combined areas of 2x2, 2x3, 3x3 and the like; all subprogram data flow uniformly to the main entry program, which establishes a positioning data buffer, periodically traverses the buffered positioning data with a timer, merges the data of overlapping parts, and uniformly outputs the positioning coordinate data.
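One plausible shape for this buffer-merge step is sketched below; the merge radius and the averaging policy are assumptions, since the patent only states that overlapping data are merged.

```python
import math

MERGE_RADIUS = 50.0  # mm; illustrative threshold, not from the patent

def merge_overlap(buffer):
    """Fuse duplicate sightings of the same box ID reported by two cameras
    in an overlap zone into a single averaged record."""
    merged = {}
    for box_id, x, y in buffer:
        if box_id in merged:
            mx, my = merged[box_id]
            if math.hypot(x - mx, y - my) <= MERGE_RADIUS:
                merged[box_id] = ((mx + x) / 2, (my + y) / 2)
            # Otherwise keep the first sighting; conflict handling
            # is not specified by the patent.
        else:
            merged[box_id] = (x, y)
    return [(i, x, y) for i, (x, y) in merged.items()]
```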
Finally, the data is sent to the image-based object recognition device through network UDP communication. The UDP protocol is used for greater universality: it is a connectionless transport protocol with low delay and high efficiency in data transmission, it is simple to use as long as the receiver can receive the data, and it is highly extensible, convenient to stack and highly independent, allowing other software developments to access it flexibly and use it quickly.
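A minimal sketch of this UDP hand-off follows; the destination address and the JSON message format are assumptions, as the patent only specifies that UDP is used.

```python
import json
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # connectionless UDP

def send_positions(records, addr=("192.168.1.50", 9000)):
    """Push merged (id, x, y) positioning records to the recognition device."""
    payload = json.dumps([{"id": i, "x": x, "y": y} for i, x, y in records])
    sock.sendto(payload.encode("utf-8"), addr)
```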
Exemplary apparatus
As shown in fig. 3, on the basis of the exemplary method, the present embodiment provides an image-based object recognition device. A luminous body characterized by infrared characteristic points is provided on each of a plurality of target objects; the infrared characteristic points are arranged on a plurality of straight line segments, the two end points of each straight line segment are infrared characteristic points, defined as the head point and the tail point, and a black point is arranged on the side of the head point close to the tail point, or on the side of the tail point close to the head point, for marking the direction of the luminous body; the number of points on each straight line segment is fixed, and by controlling the number of points between the infrared characteristic points, each infrared characteristic point group has a unique code.
The device comprises:
an image acquisition unit 10 for acquiring first image information of the target object, the first image information including at least infrared feature point image information of a light emitter;
and a target recognition unit 20, configured to analyze information characterized by the infrared feature point image information in the first image information, and recognize the target object.
Specifically, the target recognition unit includes:
an image preprocessing module, configured to preprocess the first image information and then perform subdivision searching to obtain the first target point area;
the first data processing module is used for traversing the first target point area, clustering the first target point and obtaining a straight line segment where the infrared characteristic point set is located through straight line judgment;
the second data processing module is used for sequencing the infrared characteristic points, determining the distance between adjacent infrared characteristic points and positioning the coordinates of the head and tail infrared characteristic points in the straight line segment;
and the third data processing module is used for determining the code ID of the infrared characteristic point group representation according to the head-tail infrared characteristic point coordinates and identifying the target object.
Exemplary System
As shown in fig. 4, on the basis of the exemplary device, the present embodiment provides an image-based object recognition system, comprising a plurality of industrial camera modules 30 located at the target objects, and luminous bodies disposed on a plurality of target objects (moving trolleys), together with the above image-based object recognition device 40, to which the plurality of industrial camera modules are communicatively connected.
Specifically, the industrial camera modules send data to the image-based object recognition device 40 through network UDP communication over the local area network. The UDP protocol is used for greater universality: it is a connectionless transport protocol with low delay and high efficiency in data transmission, it is simple to use as long as the receiver can receive the data, and it is highly extensible, convenient to stack and highly independent, allowing other software developments to access it flexibly and use it quickly.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention, and are intended to be included within the scope of the appended claims and description.

Claims (7)

1. An image-based object recognition method, characterized in that a luminous body is provided on a target object, the luminous body being characterized by infrared characteristic points, the method comprising:
S100: acquiring first image information of the target object, wherein the first image information at least comprises an infrared characteristic point image of the luminous body, the luminous body being a luminous box;
s200: analyzing information represented by the infrared characteristic point image in the first image information, and identifying the target object;
in step S100, the infrared feature points are arranged on a straight line segment, which specifically includes the following steps:
two end points of the straight line segment are infrared characteristic points, which are defined as head and tail points, and black points are arranged on one side of the head point, which is close to the tail point or the tail point, which is close to the head point, and are used for marking the direction of the luminous body; the number of each straight line segment point is fixed, and the infrared characteristic point group has unique codes by controlling the number among the infrared characteristic points;
holes with equal intervals are formed in the detection surface of the luminous box, infrared luminous points are arranged in each hole, and the luminous and non-luminous of each hole are controlled; defining the movement direction of the luminous box as a positive direction, and arranging a second hole of the luminous box from right to left as a non-luminous hole all the time for marking the operation of the box direction and angle; the infrared luminous points in the holes at the head and the tail always emit light, so as to detect and calculate the distance between the holes at equal intervals;
step S200 includes:
s202: preprocessing the first image information, and then carrying out subdivision searching to obtain a first target point area;
s204: traversing the first target point area, clustering the first target point, and judging straight lines to obtain straight line segments where the infrared characteristic point sets are located;
s206: ordering the infrared characteristic points, determining the distance between adjacent infrared characteristic points, and positioning the coordinates of the head and tail infrared characteristic points in the straight line segment;
the method comprises the steps that the distances from a target point to an origin point are ordered, wherein the origin point refers to an origin point coordinate established at the upper left corner of an image in the image shot by an industrial camera; by calculating the distances between the head adjacent points and the tail adjacent points after sequencing and by the direction rule of the luminous boxes, the head coordinates and the tail coordinates of the luminous boxes are positioned,
the direction rule of the luminous box is as follows: the forward direction is two adjacent infrared characteristic points, the reverse direction is a non-adjacent infrared characteristic point, and the head and tail coordinates of the luminous box are positioned;
s208: and determining the code ID of the characteristic of the infrared characteristic point group according to the head-tail infrared characteristic point coordinates, and identifying the target object.
2. The image-based object recognition method according to claim 1, wherein the preprocessing step includes: graying the first image information, binarizing the first image, applying erosion, and removing noise.
3. The image-based object recognition method according to claim 1, wherein the method further includes: performing perspective transformation on the head and tail point coordinates to obtain the target object position coordinates.
4. The image-based object recognition method of claim 1, wherein the clustering the first target point comprises:
traversing all target points, obtaining the minimum point distance of the target points, and clustering all the target points through the minimum point distance and a recursion algorithm.
5. The image-based object recognition method according to any one of claims 1 to 4, wherein the first image information of the target object is obtained through a plurality of industrial cameras, a data synchronization mechanism established among the plurality of industrial cameras uniformly managing the positioning data of the plurality of cameras;
camera detection overlapping areas are set to prevent detection blind areas when a target crosses between camera capture areas, and the identified data in overlapping areas are merged to avoid duplicate positioning data.
6. An image-based object recognition device, characterized in that a plurality of target objects are provided with luminous bodies, the luminous bodies being characterized by infrared characteristic points and being luminous boxes; the device comprises:
the image acquisition unit is used for acquiring first image information of the target object, wherein the first image information at least comprises an infrared characteristic point image of a luminous body; a kind of electronic device with high-pressure air-conditioning system
The target identification unit is used for analyzing the information represented by the infrared characteristic point image in the first image information and identifying the target object;
the infrared characteristic points are arranged on the straight line segment, and specifically are as follows:
two end points of the straight line segment are infrared characteristic points, which are defined as head and tail points, and black points are arranged on one side of the head point, which is close to the tail point or the tail point, which is close to the head point, and are used for marking the direction of the luminous body; the number of each straight line segment point is fixed, and the infrared characteristic point group has unique codes by controlling the number among the infrared characteristic points;
holes with equal intervals are formed in the detection surface of the luminous box, infrared luminous points are arranged in each hole, and the luminous and non-luminous of each hole are controlled; defining the movement direction of the luminous box as a positive direction, and arranging a second hole of the luminous box from right to left as a non-luminous hole all the time for marking the calculation of the direction and the angle of the luminous box; the infrared luminous points in the holes at the head and the tail always emit light, so as to detect and calculate the distance between the holes at equal intervals;
the step of analyzing the information represented by the infrared characteristic point image in the first image information and identifying the target object comprises the following steps:
preprocessing the first image information, and then carrying out subdivision searching to obtain a first target point area;
traversing the first target point area, clustering the first target point, and judging straight lines to obtain straight line segments where the infrared characteristic point sets are located;
ordering the infrared characteristic points, determining the distance between adjacent infrared characteristic points, and positioning the coordinates of the head and tail infrared characteristic points in the straight line segment;
the method comprises the steps that the distances from a target point to an origin point are ordered, wherein the origin point refers to an origin point coordinate established at the upper left corner of an image in the image shot by an industrial camera; by calculating the distances between the head adjacent points and the tail adjacent points after sequencing and by the direction rule of the luminous boxes, the head coordinates and the tail coordinates of the luminous boxes are positioned,
the direction rule of the luminous box is as follows: the forward direction is two adjacent infrared characteristic points, the reverse direction is a non-adjacent infrared characteristic point, and the head and tail coordinates of the luminous box are positioned;
and determining the code ID of the characteristic of the infrared characteristic point group according to the head-tail infrared characteristic point coordinates, and identifying the target object.
7. An image-based object recognition system, comprising:
a plurality of binocular camera modules located at the target object; a kind of electronic device with high-pressure air-conditioning system
A light emitter disposed on a plurality of target objects and the image-based object recognition apparatus of claim 6;
wherein the plurality of binocular camera modules are in communication connection with the image-based object recognition device.
CN202010150926.4A 2020-03-06 2020-03-06 Object identification method, device and system based on image Active CN111354018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010150926.4A CN111354018B (en) 2020-03-06 2020-03-06 Object identification method, device and system based on image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010150926.4A CN111354018B (en) 2020-03-06 2020-03-06 Object identification method, device and system based on image

Publications (2)

Publication Number Publication Date
CN111354018A CN111354018A (en) 2020-06-30
CN111354018B true CN111354018B (en) 2023-07-21

Family

ID=71197434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010150926.4A Active CN111354018B (en) 2020-03-06 2020-03-06 Object identification method, device and system based on image

Country Status (1)

Country Link
CN (1) CN111354018B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112451962B (en) 2020-11-09 2022-11-29 青岛小鸟看看科技有限公司 Handle control tracker
CN116342662B (en) * 2023-03-29 2023-12-05 北京诺亦腾科技有限公司 Tracking and positioning method, device, equipment and medium based on multi-camera

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004208229A (en) * 2002-12-26 2004-07-22 Advanced Telecommunication Research Institute International Object identification system, light emitting device, and detection apparatus
JP2007043579A (en) * 2005-08-04 2007-02-15 Advanced Telecommunication Research Institute International Object identification system and detection apparatus
CN102799850A (en) * 2012-06-30 2012-11-28 北京百度网讯科技有限公司 Bar code recognition method and device
CN103768792A (en) * 2014-02-25 2014-05-07 中山市金马科技娱乐设备有限公司 Partitioning and recognition method applied to wide-range video space object positioning system
CN105126338A (en) * 2015-08-18 2015-12-09 中山市金马科技娱乐设备股份有限公司 Video positioning system applicable to light gun shooting games
CN105160690A (en) * 2015-08-18 2015-12-16 武汉大学 Reference point identifying method applied to positioning of video projection target
CN105678733A (en) * 2014-11-21 2016-06-15 中国科学院沈阳自动化研究所 Infrared and visible-light different-source image matching method based on context of line segments
CN106295655A (en) * 2016-08-03 2017-01-04 国网山东省电力公司电力科学研究院 A kind of transmission line part extraction method patrolling and examining image for unmanned plane
CN106372702A (en) * 2016-09-06 2017-02-01 深圳市欢创科技有限公司 Positioning identification and positioning method thereof
CN106506076A (en) * 2016-10-14 2017-03-15 乐视控股(北京)有限公司 A kind of method of virtual reality system and its information transfer, device
CN106648147A (en) * 2016-12-16 2017-05-10 深圳市虚拟现实技术有限公司 Space positioning method and system for virtual reality characteristic points
CN107050855A (en) * 2017-05-10 2017-08-18 中山市金马科技娱乐设备股份有限公司 A kind of trackless Ferris Wheel of band VR glasses
CN107101616A (en) * 2017-05-23 2017-08-29 北京小鸟看看科技有限公司 A kind of personal identification method for positioning object, device and system
CN206541271U (en) * 2017-03-03 2017-10-03 北京国承万通信息科技有限公司 A kind of optical positioning system and virtual reality system
CN107289931A (en) * 2017-05-23 2017-10-24 北京小鸟看看科技有限公司 A kind of methods, devices and systems for positioning rigid body
CN107300378A (en) * 2017-05-23 2017-10-27 北京小鸟看看科技有限公司 A kind of personal identification method for positioning object, device and system
CN206757401U (en) * 2017-05-10 2017-12-15 中山市金马科技娱乐设备股份有限公司 The space positioning system of trackless Ferris Wheel
CN107633528A (en) * 2017-08-22 2018-01-26 北京致臻智造科技有限公司 A kind of rigid body recognition methods and system
CN109785381A (en) * 2018-12-06 2019-05-21 苏州炫感信息科技有限公司 A kind of optical inertial fusion space-location method, positioning device and positioning system
CN110120100A (en) * 2018-02-06 2019-08-13 广东虚拟现实科技有限公司 Image processing method, device and recognition and tracking system
CN110174092A (en) * 2019-04-26 2019-08-27 北京航空航天大学 A kind of intensive cluster relative positioning method based on infrared coding target
CN209591427U (en) * 2019-03-26 2019-11-05 广东虚拟现实科技有限公司 Marker and interactive device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NO323926B1 (en) * 2004-11-12 2007-07-23 New Index As Visual system and control object and apparatus for use in the system.


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Study on tracking method based on infrared random point; Yuqian Li et al.; Proceedings 2013 International Conference on Mechatronic Sciences, Electric Engineering and Computer (MEC); 1872-1875 *
The perceptive workbench: Computer-vision-based gesture tracking, object tracking, and 3D reconstruction for augmented desks; Starner, T. et al.; Machine Vision and Applications 14; 59-71 *
Helmet spatial position measurement based on image processing (基于图像处理的头盔空间位置测量); Li Zhaoxin; China Master's Theses Full-text Database, Information Science and Technology, Vol. 2012, No. 7; I138-2106 *

Also Published As

Publication number Publication date
CN111354018A (en) 2020-06-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant