CN111213368B - Rigid body identification method, device and system and terminal equipment

Info

Publication number
CN111213368B
Authority
CN
China
Prior art keywords: information, rigid body, image data, light, coded
Prior art date
Legal status: Active
Application number
CN201980004924.XA
Other languages
Chinese (zh)
Other versions
CN111213368A (en)
Inventor
王越
许秋子
Current Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Realis Multimedia Technology Co Ltd filed Critical Shenzhen Realis Multimedia Technology Co Ltd
Priority to CN202010672046.3A priority Critical patent/CN111757010B/en
Publication of CN111213368A publication Critical patent/CN111213368A/en
Application granted granted Critical
Publication of CN111213368B publication Critical patent/CN111213368B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/74: Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a rigid body identification method, apparatus, terminal device and system. The rigid body identification method comprises: judging, according to multiple frames of image data from a camera, whether the image data belonging to the same light spot is complete within one identification period, where one identification period comprises a specified number of consecutive frames of image data; if not, judging the light spot to be a stray point; if so, judging the light spot to be a light-emitting source on a rigid body, and identifying the coded information of the light-emitting source according to the image data corresponding to the light-emitting source; determining the light-emitting sources belonging to the same rigid body according to the multiple frames of image data from the camera; and combining the coded information of the light-emitting sources belonging to the same rigid body to obtain the coded information of the rigid body, and matching the coded information of the rigid body with preset rigid-body coded information to identify the rigid body, where the preset coded information of each rigid body is unique.

Description

Rigid body identification method, device and system and terminal equipment
Technical Field
The present application relates to the technical field of motion capture, and in particular to a rigid body identification method, apparatus and system and a terminal device.
Background
Motion capture is a technology for measuring and recording the motion trajectory or posture of an object in real three-dimensional space, and for reconstructing the state of the moving object in a virtual three-dimensional space.
Existing optical motion capture systems can be classified into active and passive types. In passive systems, the reflective optical marker points on a rigid body are easily lost; constrained by heat dissipation, power supply and the reflective optical path, the light reaching the camera is not bright, which weakens the camera's ability to filter out extraneous information and shortens its working distance. In addition, passive rigid bodies must be arranged in different three-dimensional forms so that they can be told apart, which makes mass production and large-scale deployment difficult.
Some active products on the market merely replace the reflective optical marker points of a passive rigid body with light-emitting sources such as light-emitting diodes, dispensing with the light source carried by the camera; this reduces the loss of marker points and the production cost of the camera, and increases the working distance of the motion capture camera to some extent. However, because of the power supply problem, an active rigid body is much harder to manufacture than a passive one, and active products still have to configure rigid bodies into different three-dimensional forms, which further complicates mass production, large-scale configuration and rigid body identification.
Disclosure of Invention
In view of the above, the present application provides a rigid body identification method, apparatus, system and terminal device, so as to solve the prior-art problem that rigid body identification in an active optical motion capture system is too slow because the rigid bodies used in such a system must be configured into different three-dimensional forms.
A first aspect of the present application provides a rigid body identification method, including:
judging, according to multiple frames of image data from a camera, whether the image data belonging to the same light spot is complete within one identification period, where one identification period comprises a specified number of consecutive frames of image data;
if not, judging the light spot to be a stray point; if so, judging the light spot to be a light-emitting source on a rigid body, and identifying the coded information of the light-emitting source according to the image data corresponding to the light-emitting source;
determining the light-emitting sources belonging to the same rigid body according to the multiple frames of image data from the camera;
combining the coded information of the light-emitting sources belonging to the same rigid body to obtain the coded information of the rigid body, and matching the coded information of the rigid body with preset rigid-body coded information to identify the rigid body, where the preset coded information of each rigid body is unique.
A second aspect of the present application provides a rigid body identification apparatus comprising:
a processing unit, configured to judge, according to multiple frames of image data from a camera, whether the image data belonging to the same light spot is complete within one identification period, where one identification period comprises a specified number of consecutive frames of image data; if not, to judge the light spot to be a stray point; and if so, to judge the light spot to be a light-emitting source on a rigid body and identify the coded information of the light-emitting source according to the image data corresponding to the light-emitting source;
a determining unit, configured to determine, according to the multiple frames of image data from the camera, the light-emitting sources belonging to the same rigid body among the light-emitting sources obtained by the processing unit;
an identification unit, configured to combine the coded information, obtained by the processing unit, of the light-emitting sources that the determining unit has assigned to the same rigid body, so as to obtain the coded information of the rigid body, and to match the coded information of the rigid body with preset rigid-body coded information to identify the rigid body, where the preset coded information of each rigid body is unique.
A third aspect of the present application provides an active optical motion capture system, including a server, a base station, a camera and a rigid body. The base station is configured to generate a synchronous trigger signal and send it to the rigid body and the camera. The rigid body comprises a plurality of light-emitting sources and is configured, after receiving the synchronous trigger signal, to fetch coded data from its own stored coded information and distribute the data to each light-emitting source, so that each light-emitting source controls its own brightness according to the coded data. The camera is configured to expose and shoot the rigid body after receiving the synchronous trigger signal and to send the captured image data to the server. The server is configured to identify the rigid body using the method of the first aspect.
A fourth aspect of the present application provides a terminal device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the rigid body identification method mentioned in the first aspect or any possible implementation manner of the first aspect when executing the computer program.
In the embodiments of the present application, whether the image data belonging to the same light spot is complete within one identification period is judged according to multiple frames of image data from a camera. If not, the light spot is judged to be a stray point and its data are deleted; if so, the light spot is judged to be a light-emitting source on a rigid body, and the coded information of the light-emitting source is identified from the image data corresponding to it. Meanwhile, the light-emitting sources belonging to the same rigid body are determined. Finally, the coded information of the light-emitting sources belonging to the same rigid body is combined to obtain the coded information of the rigid body, which is matched against preset rigid-body coded information, thereby identifying the rigid body. Throughout this process, rigid body identification is performed according to the coded information of the light-emitting sources and is independent of the three-dimensional form of the rigid body, so the rigid bodies in an active optical motion capture system do not need to be configured into different three-dimensional forms, which greatly improves the rigid body identification capability of the system.
Drawings
In order to illustrate more clearly the embodiments of the present application or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a block diagram of an active optical motion capture system according to the present application;
FIG. 2 is a schematic diagram of encoded information provided herein;
FIG. 3 is a signal timing diagram provided in the present application;
FIG. 4 is a schematic flow chart of a rigid body identification method provided in the present application;
FIG. 5 is a schematic flow chart of the embodiment of step 401 in FIG. 4;
FIG. 6 is a schematic flow chart of the embodiment of step 403 in FIG. 4;
FIG. 7 is a schematic flow chart illustrating another embodiment of step 403 in FIG. 4;
FIG. 8 is a schematic diagram of a frame of a rigid body identification apparatus according to the present application;
fig. 9 is a schematic structural diagram of an embodiment of a terminal device provided in the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
Before the rigid body identification method of the embodiments of the present application is described in detail, the conventional optical motion capture process and the active optical motion capture system to which the rigid body identification method of the present application applies are first described.
A traditional optical motion capture system emits infrared light from an ultra-high-power near-infrared light source inside the motion capture camera to illuminate passive marker points. The passive marker points, coated with a highly reflective material, reflect the infrared light, which, together with ambient light carrying background information, passes through the camera's low-distortion lens to reach its infrared narrow band-pass filter unit. The pass band of this filter unit matches the wavelength band of the infrared light source, so the ambient light carrying redundant background information is filtered out and only the infrared light carrying the marker-point information passes through to be recorded by the camera's photosensitive element. The photosensitive element converts the optical signal into an image signal and outputs it to the control circuit; an image processing unit in the control circuit pre-processes the image signal in hardware using an FPGA and finally outputs the 2D coordinate information of the marker points to the tracking software. Using the principle of computer multi-view vision, the tracking and positioning software computes the coordinates and directions of the point clouds in the three-dimensional capture space from the matching relationships between the two-dimensional image point clouds and the relative positions and orientations of the cameras. Based on the three-dimensional coordinates of the point cloud, the tracking and positioning software then computes the position and orientation of each rigid body in the capture space by recognizing the different rigid body structures.
The passive optical motion capture system has the following drawbacks. First, the motion capture camera must contain a relatively complex image processing device, so the camera cost is high. Second, the marker points must be coated with a highly reflective material, which wears easily during use and impairs the normal operation of the system. Third, tracking and positioning depend on the structure of the rigid body: the rigid body design limits how many rigid bodies there can be, and identifying and tracking a rigid body requires the cameras to capture all the marker points on it, so the requirements on the use environment are very demanding.
To address these problems of traditional optical motion capture, the present application proposes a new active optical motion capture system. The active optical motion capture system consists of active optical rigid bodies, active optical cameras, a switch, a base station and a server. An active optical rigid body is a rigid body that actively emits light and is formed from at least 4 LEDs. The active optical camera, unlike a traditional optical motion capture camera, does not need to emit infrared light; it only receives the active light signals emitted by the active optical rigid bodies, pre-processes them, and outputs the 2D coordinate information, gray value information and area information of the LED lamps through the switch to the tracking software running on the server. The switch supplies power to the active optical cameras and the base station and relays their information; the base station communicates with the active optical rigid bodies and configures them according to that information.
Hereinafter, the active optical motion capture system proposed in the present application is described in detail.
Example one
The embodiments of the present application provide an active optical motion capture system, as shown in fig. 1; for convenience of description, only the portions related to the embodiments of the present application are shown:
the active optical dynamic capture system comprises: a base station 11, an active light rigid body 12 (hereinafter, rigid body), an active light camera 13 (hereinafter, camera), a server 14, and a switch 15. Each component of the active optical dynamic capture system will be described in detail below.
First, the server 14. Its main functions are twofold. First: generating unique coded information for each rigid body 12 and issuing it to each rigid body 12 through the base station 11. Second: receiving image data from the active optical camera 13, obtaining the coded information of the corresponding rigid body from the image data within one identification period, and identifying the rigid body 12 according to that coded information and the preset rigid-body coded information.
In the system configuration stage, when generating unique coded information for each rigid body 12, the server 14 may do so according to the preset code length of the coded information of a light-emitting source and the number of light-emitting sources on the rigid body. The adopted coding rule may be any of the following: setting a frame header, a parity-check-code coding rule, or a Hamming-code coding rule; the coded information may specifically be binary. The coded information of a rigid body comprises the coded information of all the light-emitting sources on that rigid body: if a rigid body includes N light-emitting sources, its coded information includes the coded information of those N sources. The coded information of one light-emitting source stores that source's coded data for one identification period; within each identification period the source controls its brightness in sequence as indicated by the coded information, i.e., the brightness of a light-emitting source cycles once per identification period. When generating unique coded information for each rigid body in advance, the server 14 determines the coded information of the different light-emitting sources on the principle that different sources carry different coded information, which guarantees the uniqueness of the rigid body's coded information.
The coding length of the coding information of the rigid body is as follows: the product of the code length of the coded information of the luminous sources and the number of the luminous sources. Assuming that the code length of the code information of one light emitting source is 16 and the number of light emitting sources is 8, the code length of the code information corresponding to the rigid body is 128, and the code information may be as shown in fig. 2, where each row of code data of the code information constitutes code information corresponding to one light emitting source. It will be appreciated that the encoded information shown in fig. 2 is merely one presentation of encoded information.
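For illustration only, the following Python sketch assembles coded information of the shape just described (N = 8 sources, per-source code length F = 16, rigid-body code length 128); the bit strings, names and uniqueness check are assumptions made for the example, not part of the claimed method.

F = 16  # code length of one light-emitting source's coded information
N = 8   # number of light-emitting sources on the rigid body

def build_rigid_body_code(source_codes):
    """Concatenate per-source codes into one rigid-body code of length N * F."""
    assert len(source_codes) == N and all(len(c) == F for c in source_codes)
    # different light-emitting sources must carry different coded information,
    # which keeps the combined rigid-body code unique
    assert len(set(source_codes)) == N
    return "".join(source_codes)

example_codes = ["0111111011100001", "0111111011010010", "0111111010110100",
                 "0111111001101001", "0111111000011110", "0111111000101101",
                 "0111111001001011", "0111111010010111"]
print(len(build_rigid_body_code(example_codes)))  # 128, as in the FIG. 2 example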
In the system operation stage, the server 14 is further configured to receive image data from the camera 13, obtain the coded information of the corresponding rigid body from the image data within one identification period, and identify the rigid body 12 according to that coded information and the preset rigid-body coded information. When identifying the rigid body 12, the server 14 may use the rigid body identification method described in detail in the subsequent embodiments of the present application.
Next, the base station 11. Its functions are to generate the synchronous trigger signal and to relay information among the system components. For example, the base station 11 generates a synchronous trigger signal at a predetermined interval and sends it to the rigid body 12 and the camera 13 simultaneously, so that the rigid body 12 can control the brightness of its light-emitting sources according to the synchronous trigger signal and the camera 13 can capture image data of those light-emitting sources according to the same signal. In a specific implementation, the base station 11 may send the synchronous trigger signal to the rigid body 12 and the camera 13 simultaneously using a wireless transmission technology such as wireless fidelity (Wi-Fi) or ZigBee.
As for the relaying of information among components: as described above, the base station 11 assigns the coded information received from the server 14 to the rigid bodies 12 at random, and each rigid body 12, after receiving its coded information, registers it in its own register.
Next, the rigid body 12. The rigid body 12 comprises a plurality of light-emitting sources, and its functions are twofold. First: receiving the coded information of the rigid body 12 issued by the base station 11 and registering it in a register. Second: after receiving a synchronous trigger signal from the base station 11, periodically fetching coded data from the stored coded information and distributing it to each light-emitting source, so that each source controls its own brightness according to the coded data.
The number of rigid bodies 12 in the active optical motion capture system may be one or more and is not limited to the three illustrated in fig. 1. The light-emitting source may be a Light Emitting Diode (LED), and a unit of coded data may be 0 or 1; that is, the embodiments of the present application signal that the corresponding coded data is 1 or 0 through different brightness levels of the light-emitting source. Each time the rigid body 12 receives a synchronous trigger signal it performs one round of coded-data distribution: one bit of coded data is selected in sequence from the stored coded information of each of the N light-emitting sources and sent to the corresponding source, so that after one trigger each light-emitting source receives exactly 1 bit of coded data. When the rigid body 12 receives the next synchronous trigger signal, the next bit is selected from the coded information of the N sources in the register and sent to the corresponding sources, and so on, completing the light emission of one identification period. After one identification period is complete, i.e., after the coded information of the light-emitting sources has been distributed through one full round, the rigid body 12, upon receiving the synchronous trigger signal again, recycles the coded information and starts the distribution for the next identification period.
It should be noted that the coded information of the rigid body 12 includes the coded information of all the light-emitting sources on the rigid body, and the coded information of one light-emitting source stores that source's coded data for one identification period. Meanwhile, one identification period comprises a specified number of consecutive frames of image data, and that specified number of frames (i.e., the number of camera exposures) equals the code length F of the coded information of one light-emitting source, where F is a positive integer of two or more. The smaller the value of F, the smaller the encodable range and, accordingly, the fewer rigid bodies that can be identified; the larger the value of F, the more exposures per identification period and the more time consumed, which slows down rigid body identification. In particular, F may be chosen to be 16.
Assume N is 8 and F is 16, i.e., the rigid body 12 includes 8 light-emitting sources and its coded information includes 8 coded subsets. After receiving the first synchronous trigger signal, the rigid body 12 selects the first bit of coded data from the coded information of each light-emitting source and sends the selected bits to the corresponding sources. After receiving the second synchronous trigger signal, it selects the second bit from each source's coded information and sends those bits to the corresponding sources, and so on, until the 16 synchronous-trigger receptions and 16 rounds of coded-data distribution of one identification period are complete.
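As a sketch under the same assumptions (one F-bit code string per LED), the per-trigger distribution might look as follows in Python; set_led_brightness is a hypothetical driver hook, not an interface defined by this application.

def set_led_brightness(led_index, bright):
    """Hypothetical driver hook; a real rigid body would drive the LED here."""
    print("LED %d -> %s" % (led_index, "bright" if bright else "dim"))

class RigidBody:
    def __init__(self, codes):
        self.codes = codes   # one F-bit string per light-emitting source
        self.frame = 0       # current position within the identification period

    def on_sync_trigger(self):
        # one synchronous trigger = one bit of coded data per light-emitting source
        for led_index, code in enumerate(self.codes):
            set_led_brightness(led_index, code[self.frame] == "1")
        # after F triggers, the coded information is recycled for the next period
        self.frame = (self.frame + 1) % len(self.codes[0])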
Finally, the camera 13, which exposes and shoots the rigid body 12 after receiving a synchronous trigger signal from the base station 11 and sends the captured image data to the server 14; one camera 13 can capture images of multiple rigid bodies 12. It should be noted that the brightness changes of the light-emitting sources on the rigid body 12 and the exposures of the camera 13 proceed synchronously: the rigid body 12 performs one round of coded-data distribution each time it receives a synchronous trigger signal, and the camera 13 performs one exposure each time it receives a synchronous trigger signal. Since the synchronous trigger signal is sent to the rigid body 12 and the camera 13 simultaneously, the camera 13 is guaranteed to capture the brightness changes of the light-emitting sources. For example, as shown in fig. 3, the rigid body performs coded-data distribution on each rising edge of the synchronous trigger signal, and the camera 13 performs an exposure on each rising edge, so whenever the brightness of a light-emitting source such as an LED lamp changes, the camera captures it synchronously.
It is understood that the image data sent by the camera 13 to the server 14 may include the 2D coordinate value of each light-emitting source, the gray value of the source, and the associated-domain (connected-region) area of the source. Therefore, before sending the image data to the server 14, the camera 13 needs to determine the 2D coordinate value, gray value and associated-domain area of each light-emitting source in each frame of image data captured by exposure, and send these to the server 14.
Specifically, the camera 13 determines the 2D coordinate value, the gray value, and the associated domain area of each light emitting source in each frame of image data of exposure shooting immediately after each exposure shooting, and sends the currently determined 2D coordinate value, gray value, and associated domain area to the server 14 immediately. Assuming that F frames of image data are included in one recognition period, the camera 13 performs F exposure shots based on the received F synchronization trigger signals, and sends the F image data to the server 14.
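To make the data flow concrete, an illustrative per-spot record is sketched below in Python; the field names and types are assumptions, not the actual transmission format of this application.

from dataclasses import dataclass

@dataclass
class SpotRecord:
    x: float    # 2D coordinate of the spot in the image
    y: float
    gray: int   # gray value of the spot
    area: int   # associated-domain (connected-region) area of the spot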
The switch 15 serves to exchange data between the server 14 and the base station 11, and between the base station 11 and the camera 13. The unique coded information generated at the server 14 may be transmitted to the base station 11 via the switch 15; the switch 15 may likewise receive the synchronous trigger signal sent by the base station 11 and relay it to the camera 13.
Next, the operation of the active optical motion capture system is described in detail.
In operation, the rigid body identification system of this embodiment can generally be divided into two stages, a configuration stage and an operation (identification) stage, described in detail below.
In the configuration stage, the server 14 generates unique coded information for each rigid body 12 and then transmits the preset coded information of the rigid bodies to each rigid body 12 through the switch 15. After receiving its coded information, each rigid body 12 registers it in its own register.
In the operation stage, the base station 11 broadcasts the synchronous trigger signal to all rigid bodies 12 using a wireless transmission technology (such as Wi-Fi or ZigBee). Each time a synchronous trigger signal is received, the rigid body 12 fetches 1 bit of coded data in sequence from the coded information of each light-emitting source in its register and distributes it to the corresponding source (with N light-emitting sources, N bits are fetched in total and each source receives 1 bit). After a light-emitting source receives its coded data, it controls its brightness as the data indicates; that is, the source shows a bright or dark state according to whether the corresponding code is 1 or 0. At the same time, the same synchronous trigger signal is sent from the base station 11 to the switch 15 and relayed to the camera 13, which performs one exposure upon receiving it. In short, the rigid body performs one round of coded-data distribution, and the camera one exposure, for every synchronous trigger signal received.
After performing exposure shooting, the camera 13 needs to transmit image data obtained by shooting to the server 14. It should be noted that, after each exposure shot by the camera, the image data (including the 2D coordinate values, the grayscale values, and the associated domain areas of the light emitting sources) obtained by the current exposure shot needs to be immediately transmitted to the server 14.
The server 14, after receiving the plurality of frames of image data from the camera, identifies the plurality of frames of image data according to the identification period to identify the rigid body 12.
With the above active optical motion capture system, the active optical camera no longer relies on an ultra-high-power near-infrared light source to emit infrared light; it only needs to receive the infrared signals emitted by the active optical rigid bodies. The camera therefore needs no complex device structure, which lowers development cost and user investment and overcomes the high manufacturing cost of traditional optical cameras. Meanwhile, the active optical rigid body no longer relies on highly reflective materials; it is built from at least 4 LED lamps, safely protected inside the rigid body shell, making it safer and more stable in use and overcoming the wear that traditional optical rigid bodies suffer during use. The system thus both simplifies the complex device structure of the traditional optical motion capture camera, reducing camera cost, and makes the active optical rigid body resistant to wear and damage, greatly improving its service life. Most importantly, tracking and positioning in the active optical motion capture system are based on the coding state of the active optical rigid body rather than on the rigid body structure, so the rigid body structure can be unified and its appearance greatly optimized, while the diversity of coding states multiplies the number of identifiable rigid bodies.
Example two
referring to fig. 4, the rigid body identification method provided in the embodiment of the present application is described below, and the rigid body identification method in the embodiment of the present application may be applied to the rigid body identification of the active optical dynamic capturing system, which specifically includes:
step 401, judging whether the image data belonging to the same light spot is complete in an identification period according to the multi-frame image data from the camera; one recognition cycle includes successive specified frame image data;
the execution subject of the rigid body identification method of the present application may be a server. As can be seen from the description of the first embodiment, after the camera performs exposure shooting, the camera sends the image data obtained by exposure shooting to the server, so that the server can perform rigid body identification according to the multiple frames of image data from the camera. The image data transmitted by the camera comprises: 2D coordinate values of the light spot, gray scale values of the light spot, and associated domain areas of the light spot. One identification cycle is the number of frames of continuous image data required for the server to perform rigid body identification once. The number of the continuous appointed frames is related to the code information of one luminous source, and the number of the continuous appointed frames is equal to the code length of the code information of one luminous source. That is, the server determines the number of designated frames of image data within one recognition period when generating the coded information of the light emission sources.
When the camera performs an exposure, it may capture not only the light-emitting sources on the rigid bodies but also other reflective points in the capture field. The image data about a light spot sent by the camera may therefore correspond either to a light-emitting source or to some other reflective point in the field, and the server needs to determine whether the spot's data come from a stray reflective point or from a light-emitting source (LED) on a rigid body.
Specifically, as shown in fig. 5, the server's implementation of step 401 may include the following steps:
step 501, generating mark information belonging to the same light spot according to the 2D coordinate value of the light spot included in the multi-frame image data from the camera.
When the server receives image data from the camera for the first time, i.e., receives the first frame of image data, it may assign mark information to each light spot according to the 2D coordinate information of each spot included in that frame. When image data are received subsequently, the server may match the 2D coordinate information of all light spots in the newly received image data against the 2D coordinate information of all light spots in the previously stored image data. If two light spots match, they are determined to belong to the same light spot and are given the same mark information; if not, they are given different mark information.
For example, if the image data the server receives first include light spots T1 and T2, the server assigns them different marks K1 and K2 and stores each spot's 2D coordinate value, gray value and associated-domain area under its mark. When a new frame of image data is received, the server matches the 2D coordinate information of all light spots in the new frame (say T3 and T4) against the 2D coordinates of the stored spots (T1 and T2) according to their distance relationships. If the distance between T1 and T3 satisfies the preset matching condition, the two spots are considered to be the same light spot, and the new spot T3 inherits the old mark K1 of its matching spot T1, i.e., matching spots receive the same mark information. If the distance between T2 and T4 does not satisfy the preset matching condition, the two spots are considered not to match (T4 is not the same spot as T2); T4 is given a new mark K3, and its 2D coordinate, gray value and associated-domain area are stored under that mark. This cycle repeats, generating mark information for spots that belong together and storing the image data of the same light spot at different times under its mark.
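A minimal Python sketch of this marking step follows; the fixed distance threshold and the greedy nearest-neighbour matching are illustrative stand-ins for the "preset matching condition" mentioned above.

import math

MATCH_THRESHOLD = 5.0   # max 2D distance (pixels) to count as the same spot (assumed)

def assign_marks(new_spots, tracks, next_mark):
    """new_spots: list of (x, y); tracks: dict mark -> last (x, y).
    Returns this frame's dict mark -> (x, y) and the updated mark counter."""
    assigned = {}
    for (x, y) in new_spots:
        best_mark, best_dist = None, MATCH_THRESHOLD
        for mark, (px, py) in tracks.items():
            d = math.hypot(x - px, y - py)
            if d < best_dist and mark not in assigned:
                best_mark, best_dist = mark, d
        if best_mark is None:        # no stored spot matches: assign a new mark
            best_mark, next_mark = next_mark, next_mark + 1
        assigned[best_mark] = (x, y)
    return assigned, next_mark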
Step 502, judging whether the frame number of the image data corresponding to the mark information belonging to the same light spot in an identification period reaches a specified frame number.
Step 503, if yes, determining that the image data belonging to the same light spot in the identification period is complete.
Step 504, if not, it is determined that the image data belonging to the same light spot in the recognition period is incomplete.
When the number of frames of image data received by the server reaches the specified number for one identification period, the server further judges whether the number of frames of image data corresponding to the mark information of the same light spot within that period reaches the specified number. For example, if the specified number of frames per identification period is 16, then once the server has received 16 frames it may consolidate the 2D coordinates, gray values and associated-domain areas stored under each mark and judge whether the number of frames corresponding to the mark information generated in step 501 reaches 16. If it reaches 16, the image data belonging to that light spot within the identification period are considered complete and the flow enters step 403; if it is less than 16, they are considered incomplete and the flow enters step 402.
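Continuing the sketch, the completeness judgment of steps 502 to 504 reduces to a per-mark frame count; the dictionary layout is an assumption.

F = 16  # specified number of frames in one identification period

def is_complete(frame_count_by_mark, mark):
    # a mark observed in all F frames is judged a light-emitting source;
    # anything less is judged a stray point and its image data are deleted
    return frame_count_by_mark.get(mark, 0) == F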
Step 402, judging the light spot to be a stray point, and deleting the image data corresponding to the light spot.
Step 403, judging the light spot to be a light-emitting source on the rigid body, and identifying the coded information of the light-emitting source according to the image data corresponding to the light-emitting source.
For step 403, i.e., identifying the coded information of a light-emitting source from the image data corresponding to it, there are three methods, described separately below.
as shown in fig. 6, when identifying the coded information of the light source according to the image data corresponding to the light source, the first method is to identify the coded information of the light source according to the gray-level value of the light source, and includes the following steps:
step 601, calculating an average value of the gray-level values of the light-emitting sources in the identification period, and using the average value as a threshold value of the gray-level values of the light-emitting sources in the identification period.
Step 602, comparing the gray level value of each frame of the light source in the identification period with the gray level threshold value, and assigning different encoding data according to the comparison result.
In steps 601-602, in a specific implementation, for the 16 frames of image data in one identification period, the average of the gray values of each light-emitting source included in the image data is first computed and taken as that source's gray-value threshold for the identification period. The gray value of the source in each frame is then compared with this average: when it is greater than the average, the LED lamp is considered to be in the bright state at that moment and the frame is recorded as 1; when it is less than or equal to the average, the lamp is considered to be in the dark state and the frame is recorded as 0. In this way the coded data corresponding to the source's 16 frames within the identification period are obtained. It should be noted that in the dark state the LED lamp is not completely off; its brightness is merely significantly lower than in the bright state.
It should be noted that, since the average value of the gray-scale values of different light-emitting sources in different identification periods or in the same identification period varies, the gray-scale value thresholds of different light-emitting sources also dynamically vary in the same identification period or in different identification periods.
Step 603, identifying the coded information of the light source according to the coded data of the light source in the identification period.
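A compact Python sketch of steps 601 to 603 follows; the gray values are invented for illustration, and the dynamic threshold is the per-period mean described above.

def decode_bits(samples):
    """samples: the per-frame values of one light-emitting source over a period."""
    threshold = sum(samples) / len(samples)   # dynamic per-period threshold
    return "".join("1" if s > threshold else "0" for s in samples)

grays = [200, 40, 210, 215, 190, 35, 30, 205, 220, 45, 38, 200, 195, 42, 36, 210]
print(decode_bits(grays))   # "1011100110011001" for these illustrative values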
As shown in fig. 7, when identifying the coded information of the light-emitting source according to the image data corresponding to the light-emitting source, the second method is to identify the coded information of the light-emitting source according to the area of the association domain of the light-emitting source, and includes the following steps:
step 701, calculating an average value of the area of the association domain of the light source in the identification period, and using the average value as the threshold of the area of the association domain of the light source in the identification period.
Step 702, comparing the area of the associated domain of each frame of the light source in the identification period with the threshold of the area of the associated domain, and assigning different coded data according to the comparison result.
In steps 701-702, in a specific implementation, for the 16 frames of image data in one identification period, the average of the associated-domain areas of each light-emitting source included in the image data is first computed and taken as that source's associated-domain-area threshold for the identification period. The associated-domain area of the source in each frame is then compared with this average: when it is greater than the average, the LED lamp is considered to be in the bright state at that moment and the frame is recorded as 1; when it is less than or equal to the average, the lamp is considered to be in the dark state and the frame is recorded as 0. In this way the coded data corresponding to the source's 16 frames within the identification period are obtained. It should be noted that in the dark state the LED lamp is not completely off; its brightness is merely significantly lower than in the bright state.
It should be noted that, since the average value of the area of the associated domain of different light sources in different identification periods or in the same identification period varies, the threshold value of the area of the associated domain of different light sources also dynamically varies in the same or different identification periods.
Step 703, identifying the coded information of the light source according to the coded data of the light source in the identification period.
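Since method two differs from method one only in the feature being thresholded, the decode_bits sketch above can be reused unchanged, fed with per-frame associated-domain areas (again illustrative values) in place of gray values:

areas = [48, 9, 51, 50, 47, 8, 7, 49, 52, 10, 9, 50, 46, 8, 7, 51]
print(decode_bits(areas))   # recovers the same bit pattern from the area channel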
In addition, the first method is preferred when identifying the coded information of a light-emitting source from its image data. If the coded information cannot be identified from the gray values of the source, it may instead be identified from the associated-domain areas of the source; using the two in combination in this way constitutes the third method of the present application.
After the coded data of a light-emitting source within one identification period have been obtained by any of the three methods above, the coded information of the source can be identified from the computed coded data. When identifying the bright/dark coded information, the following situation can arise: the 16 frames of image data currently being identified do not necessarily start at the start frame of the active optical rigid body's code, and occasional erroneous data within the 16 frames cannot be completely ruled out. To identify the coded information of the light-emitting source, therefore, the start frame of the 16 frames of data must be found first.
To this end, when the server generates coded information for each light-emitting source, the generated coded information includes header information and footer information different from the header information. Specifically, if the code length of the coded information of one light-emitting source is 16 (i.e., one identification period contains 16 frames of image data), the bright/dark state of the light-emitting source (LED) cycles every 16 frames. The bright/dark states of the 16 frames are recorded as the coded information of the light-emitting source: the first 8 frames are called the header and the last 8 frames the footer. The 8-frame footer of each LED lamp is different and unique, i.e., those 8 frames determine the unique coded information of each LED lamp, while the 8-frame header only serves to help the server locate the footer and so may be the same for all sources or different. For example, in one embodiment, the headers are all specified to be the same, 01111110, i.e., every LED lamp on every rigid body goes dark, then bright for six frames, then dark; the 8-frame bright/dark state of the footer is designed according to a Hamming code. If the coded information of an LED lamp's footer is, say, 11100001, i.e., bright for three frames, dark for four, then bright, then the 16-frame coded information of that LED is 0111111011100001, and the pattern then repeats cyclically. One benefit of this design is that the header can never coincide with the footer, so the server can cleanly separate them; another is that configuring the footer as a Hamming code makes error correction convenient, which improves rigid body identifiability to some extent.
Based on this coded information of the light-emitting sources, when performing coded identification of a source, the coded data of the source within one identification period are first combined into a single sequence. The combined coded data are then expanded, and the preset header information is searched for in the expanded data; the coding scheme of the light-emitting sources guarantees that the header information in the expanded coded data is unique. Then, within the expanded coded data, the header and footer information are combined according to the header position to obtain the coded information of the source within the identification period. The expansion is preferably a doubling of the combined coded data, which allows the header and footer information to be found quickly.
In a specific implementation, for example, the 16 frames of coded data computed within one identification period of a light-emitting source can be doubled into 32 frames; within these 32 frames the position of the header can be found quickly, and once the header is found the start frame is known, and the 8 frames following the header are exactly the footer data we need. The advantage of this method is that the header and footer are found very quickly, and if erroneous data are present in the 16 frames they are rejected just as quickly, since no header will be found. Once the header and footer information have been located, they are combined to obtain the coded information of the light-emitting source within the identification period.
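The doubling trick can be sketched in Python as follows; HEADER uses the example pattern 01111110 given above, and returning None implements the rejection of periods in which no header is found.

HEADER = "01111110"   # the shared example header; the coding ensures it is unique

def extract_footer(bits16):
    """Return the 8-bit footer of a 16-frame period, or None if no header is found."""
    doubled = bits16 + bits16                  # 16 frames doubled into 32
    start = doubled.find(HEADER)
    if start == -1 or start >= len(bits16):   # no header -> erroneous data, reject
        return None
    return doubled[start + 8 : start + 16]    # the 8 frames after the header

print(extract_footer("1110000101111110"))     # "11100001", the example footer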
After the coded information of the light-emitting source within the identification period has been identified, the method proceeds to step 404 and step 405.
Step 404, determining light emitting sources belonging to the same rigid body according to the multi-frame image data from the camera;
step 405, combining the coded information of the light emitting sources belonging to the same rigid body to obtain the coded information of the rigid body, and matching the coded information of the rigid body with the coded information of a preset rigid body to identify the rigid body.
In step 404, in a specific implementation, the relative distances between the light-emitting sources may be computed from their 2D coordinate values, and, given the number of light-emitting sources mounted on one rigid body, the sources with the smallest relative distances are determined to belong to the same rigid body.
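One simple (assumed) realization of this grouping is a greedy proximity clustering over the 2D coordinates; the max_dist bound and the greedy construction are illustrative simplifications, not the claimed procedure.

import math

def group_sources(points, n_per_body, max_dist=100.0):
    """points: dict mark -> (x, y). Returns groups of marks, one per rigid body."""
    remaining = dict(points)
    groups = []
    while remaining:
        seed_mark = next(iter(remaining))
        sx, sy = remaining[seed_mark]
        # take the n_per_body marks nearest to the seed, within the proximity bound
        by_dist = sorted(remaining, key=lambda m: math.hypot(remaining[m][0] - sx,
                                                             remaining[m][1] - sy))
        group = [m for m in by_dist[:n_per_body]
                 if math.hypot(remaining[m][0] - sx,
                               remaining[m][1] - sy) <= max_dist]
        for m in group:
            del remaining[m]
        groups.append(group)
    return groups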
In step 405, when matching the obtained coded information of a rigid body against the preset rigid-body coded information, all 8 LED lamps on a rigid body can be identified in the ideal case; but because of unavoidable factors such as occlusion of the rigid body during use, such a perfect ideal state may be hard to reach. In fact, identifying only 4 LED lamps is enough to identify an active optical rigid body, and those 4 lamps allow the pose information of the rigid body to be computed. The method of solving for the rigid body's pose is as follows: since the mark of each LED lamp on the active optical rigid body is known, the correspondence between the rigid body's three-dimensional coordinates and the two-dimensional coordinates of the LED marker points is obtained directly, and the pose information of the rigid body can then be computed by gradient descent.
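A sketch of the matching step is given below; treating the rigid body's code as the set of its per-LED footers and accepting a match from any 4 identified lamps follows the text above, while the table contents and names are invented for illustration.

PRESET_RIGID_BODIES = {
    "rigid_body_1": {"11100001", "11010010", "10110100", "01101001",
                     "00011110", "00101101", "01001011", "10010111"},
}

def match_rigid_body(observed_footers, min_hits=4):
    # 4 identified LED lamps suffice to identify an active optical rigid body
    for name, footers in PRESET_RIGID_BODIES.items():
        if len(observed_footers & footers) >= min_hits:
            return name
    return None   # no preset rigid body matches the observation

print(match_rigid_body({"11100001", "11010010", "10110100", "01101001"}))  # rigid_body_1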
The above rigid body identification method is applied to the active optical motion capture system. Because the active optical rigid body carries coded information, rigid body identification does not depend on the rigid body structure: the correspondence between 2D and 3D coordinates is obtained directly from the coded information, making the pose computation of the rigid body faster and more accurate.
Example three
Fig. 8 is a schematic diagram of a rigid body identification device according to an embodiment of the present application. As shown in fig. 8, the rigid body recognition device 8 includes:
the processing unit 81 is configured to determine whether image data belonging to the same light spot is complete in one identification period according to multi-frame image data from the camera; one recognition cycle includes successive specified frame image data; if the judgment result is negative, judging the light spot to be a miscellaneous point; if the judgment result is yes, judging that the light spot is a luminous source on the rigid body, and identifying the coding information of the luminous source according to the image data corresponding to the luminous source;
a determining unit 82, configured to determine, according to multiple frames of image data from the camera, light sources belonging to the same rigid body from among the light sources processed by the processing unit;
the identification unit 83 is configured to combine the coded information of the light emitting sources belonging to the same rigid body, which is obtained by the processing unit and determined by the determination unit, to obtain coded information of the rigid body, and match the coded information of the rigid body with coded information of a preset rigid body to identify the rigid body; wherein, the preset rigid body coding information is unique.
When identifying a rigid body, the rigid body identification device 8 adopts the rigid body identification method disclosed in the second embodiment, so the details are not repeated here. The rigid body identification device 8 of this embodiment may be the server of an active optical dynamic capturing system. Because the active optical rigid body carries coded information, rigid body identification does not rely on the rigid body structure: the matching relationship between 2D and 3D coordinates is obtained directly from the coded information, making the pose calculation of the rigid body faster and more accurate. A sketch of the spot tracking and completeness check performed by the processing unit follows.
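For illustration, here is one way the processing unit's spot tracking and completeness check could be sketched in Python. The nearest-neighbour matching rule and the `match_radius` threshold are assumptions of this sketch; the patent requires only that spots be matched across frames by their 2D coordinates and that a spot seen in every frame of the identification period be treated as a light emitting source.

```python
import numpy as np

def track_spots(frames, match_radius=5.0):
    """Assign mark information to spots across frames by 2D proximity.

    `frames` is a list of (N_i, 2) arrays of spot coordinates, one per
    frame. A spot in a new frame inherits the mark of the nearest
    unclaimed spot in the previous frame if it lies within
    `match_radius` pixels; otherwise it receives a fresh mark.
    """
    tracks = {}      # mark -> list of (frame_index, x, y)
    prev = []        # (mark, coordinates) pairs from the previous frame
    next_mark = 0
    for f, coords in enumerate(frames):
        current, used = [], set()
        for xy in np.asarray(coords, dtype=float):
            best, best_d = None, match_radius
            for mark, pxy in prev:
                d = float(np.linalg.norm(xy - pxy))
                if mark not in used and d < best_d:
                    best, best_d = mark, d
            if best is None:
                best, next_mark = next_mark, next_mark + 1
            used.add(best)
            tracks.setdefault(best, []).append((f, float(xy[0]), float(xy[1])))
            current.append((best, xy))
        prev = current
    return tracks

def is_light_source(track, frames_per_period):
    # Complete data over the identification period -> light emitting
    # source; anything shorter is discarded as a miscellaneous point.
    return len(track) == frames_per_period
```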
Example four
Fig. 9 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in fig. 9, the terminal device 9 of this embodiment includes: a processor 90, a memory 91, and a computer program 92, such as a rigid body identification program, stored in the memory 91 and executable on the processor 90. When executing the computer program 92, the processor 90 implements the steps in the above rigid body identification method embodiments, such as steps 401 to 405 shown in fig. 4. Alternatively, when executing the computer program 92, the processor 90 implements the functions of the modules/units in the above device embodiments, such as the functions of the units 81 to 83 shown in fig. 8.
Illustratively, the computer program 92 may be partitioned into one or more modules/units that are stored in the memory 91 and executed by the processor 90 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 92 in the terminal device 9. For example, the computer program 92 may be divided into a processing unit, a determining unit, and an identification unit, whose specific functions are as follows: the processing unit is configured to judge, according to multi-frame image data from the camera, whether the image data belonging to the same light spot is complete within one identification period, where one identification period consists of a specified number of consecutive frames of image data; if the judgment result is negative, the light spot is judged to be a miscellaneous point; if the judgment result is positive, the light spot is judged to be a light emitting source on the rigid body, and the coded information of the light emitting source is identified according to the image data corresponding to the light emitting source. The determining unit is configured to determine, according to the multi-frame image data from the camera, which of the light emitting sources obtained by the processing unit belong to the same rigid body. The identification unit is configured to combine the coded information, obtained by the processing unit, of the light emitting sources that the determining unit has determined to belong to the same rigid body, so as to obtain the coded information of the rigid body, and to match it against the preset coded information of rigid bodies to identify the rigid body, wherein the preset coded information of each rigid body is unique.
The terminal device 9 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 90 and the memory 91. Those skilled in the art will appreciate that fig. 9 is merely an example of the terminal device 9 and does not constitute a limitation of the terminal device 9, which may include more or fewer components than those shown, combine some components, or have different components; for example, the terminal device may also include input/output devices, network access devices, buses, and the like.
The processor 90 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or memory of the terminal device 9. The memory 91 may also be an external storage device of the terminal device 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal device 9. Further, the memory 91 may include both an internal storage unit and an external storage device of the terminal device 9. The memory 91 is used to store the computer program as well as other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the above embodiments can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (13)

1. A rigid body identification method, comprising:
judging, according to multi-frame image data from a camera, whether the image data belonging to the same light spot is complete within one identification period, wherein one identification period consists of a specified number of consecutive frames of image data;
if the judgment result is negative, judging the light spot to be a miscellaneous point; if the judgment result is positive, judging the light spot to be a light emitting source on the rigid body, and identifying the coded information of the light emitting source according to the image data corresponding to the light emitting source; wherein the coded information comprises header information and tail information, different from the header information, that appear cyclically in each identification period; the header information is a starting frame, and the tail information is arranged behind the header information; each piece of tail information is unique and is used for determining the coded information of each light emitting source, while each piece of header information is the same and is used for indicating to the server side the position of the tail information, so that the server side can clearly separate the header information from the tail information; based on the coded information of the light emitting sources, when performing code identification of the light emitting sources, the coded data of each light emitting source within one identification period can be combined to obtain combined coded data; the combined coded data is expanded, and the preset header information is searched for in the expanded coded data, wherein the header information in the expanded coded data is unique; the tail information is then searched for in the expanded coded data according to the header information; and the header information and the tail information are combined to obtain the coded information of the light emitting source within the identification period;
determining the light emitting sources belonging to the same rigid body according to the multi-frame image data from the camera;
combining the coded information of the light emitting sources belonging to the same rigid body to obtain the coded information of the rigid body, and matching the coded information of the rigid body with preset coded information of rigid bodies to identify the rigid body; wherein the preset coded information of each rigid body is unique.
2. The rigid body identification method according to claim 1, wherein the image data comprises 2D coordinate values of the light spots, and the judging whether the image data belonging to the same light spot is complete within one identification period specifically comprises:
generating mark information belonging to the same light spot according to the 2D coordinate values of the light spots included in the multi-frame image data;
judging whether the number of frames of image data corresponding to the mark information belonging to the same light spot reaches the specified number of frames within one identification period;
if so, judging that the image data belonging to the same light spot within the identification period is complete; if not, judging that it is incomplete.
3. The rigid body identification method according to claim 2, wherein the generating of the mark information belonging to the same light spot according to the 2D coordinate values of the light spots included in the multi-frame image data specifically comprises:
when the first frame of image data is received, assigning mark information to each light spot according to the 2D coordinate information of each light spot included in the first frame of image data;
when image data is subsequently received, matching the 2D coordinate information of all light spots in the newly received image data with the 2D coordinate information of all light spots in the previously received image data; if two light spots match, determining that they belong to the same light spot and assigning them the same mark information; if not, assigning the two light spots different mark information.
4. The rigid body identification method according to claim 1, wherein the image data further comprises gray values of the light emitting sources, and the identifying the coded information of the light emitting source according to the image data corresponding to the light emitting source specifically comprises:
calculating the average of the gray values of the light emitting source within the identification period, and taking the average as the gray value threshold of the light emitting source within the identification period;
comparing the gray value of the light emitting source in each frame within the identification period with the gray value threshold, and assigning different coded data according to the comparison result;
identifying the coded information of the light emitting source according to the coded data of the light emitting source within the identification period.
5. The rigid body identification method according to claim 1, wherein the image data further comprises the associated-domain area of the light emitting source, and the identifying the coded information of the light emitting source according to the image data corresponding to the light emitting source specifically comprises:
calculating the average of the associated-domain areas of the light emitting source within the identification period, and taking the average as the associated-domain area threshold of the light emitting source within the identification period;
comparing the associated-domain area of the light emitting source in each frame within the identification period with the associated-domain area threshold, and assigning different coded data according to the comparison result;
identifying the coded information of the light emitting source according to the coded data of the light emitting source within the identification period.
6. The rigid body identification method according to claim 4, wherein the image data further comprises the associated-domain area of the light emitting source; and when the coded information of the light emitting source cannot be identified according to the gray values of the light emitting source, the identifying the coded information of the light emitting source according to the image data corresponding to the light emitting source further comprises:
calculating the average of the associated-domain areas of the light emitting source within the identification period, and taking the average as the associated-domain area threshold of the light emitting source within the identification period;
comparing the associated-domain area of the light emitting source in each frame within the identification period with the associated-domain area threshold, and assigning different coded data according to the comparison result;
identifying the coded information of the light emitting source according to the coded data of the light emitting source within the identification period.
7. The rigid body identification method according to claim 1, wherein the code length of the coded information is 16, the code length of the header information is 8, and the code length of the tail information is 8.
8. The rigid body identification method according to claim 1, wherein the combined coded data is expanded by doubling so that the header information and the tail information can be found quickly.
9. The rigid body identification method according to claim 1, wherein the code length of the coded information of one light emitting source is equal to the specified number of frames in one identification period.
10. The rigid body identification method according to claim 1, wherein the determining of the light emitting sources belonging to the same rigid body, according to the 2D coordinate values of the light emitting sources included in the image data, specifically comprises:
determining a relative distance relationship between the light emitting sources according to the 2D coordinate values of the light emitting sources;
determining, based on the number of light emitting sources mounted on one rigid body, the light emitting sources whose relative distances are smallest as light emitting sources belonging to the same rigid body.
11. A rigid body identification device, comprising:
a processing unit, configured to judge, according to multi-frame image data from the camera, whether the image data belonging to the same light spot is complete within one identification period, wherein one identification period consists of a specified number of consecutive frames of image data; if the judgment result is negative, the light spot is judged to be a miscellaneous point; if the judgment result is positive, the light spot is judged to be a light emitting source on the rigid body, and the coded information of the light emitting source is identified according to the image data corresponding to the light emitting source; wherein the coded information comprises header information and tail information, different from the header information, that appear cyclically in each identification period; the header information is a starting frame, and the tail information is arranged behind the header information; each piece of tail information is unique and is used for determining the coded information of each light emitting source, while each piece of header information is the same and is used for indicating to the server side the position of the tail information, so that the server side can clearly separate the header information from the tail information; based on the coded information of the light emitting sources, when performing code identification of the light emitting sources, the coded data of each light emitting source within one identification period can be combined to obtain combined coded data; the combined coded data is expanded, and the preset header information is searched for in the expanded coded data, wherein the header information in the expanded coded data is unique; the tail information is then searched for in the expanded coded data according to the header information; and the header information and the tail information are combined to obtain the coded information of the light emitting source within the identification period;
a determining unit, configured to determine, according to the multi-frame image data from the camera, which of the light emitting sources obtained by the processing unit belong to the same rigid body;
an identification unit, configured to combine the coded information, obtained by the processing unit, of the light emitting sources that the determining unit has determined to belong to the same rigid body, so as to obtain the coded information of the rigid body, and to match the coded information of the rigid body with preset coded information of rigid bodies to identify the rigid body, wherein the preset coded information of each rigid body is unique.
12. An active optical dynamic capturing system, comprising a server, a base station, a camera, and a rigid body, wherein the base station is configured to generate a synchronization trigger signal and send it synchronously to the rigid body and the camera; the rigid body comprises a plurality of light emitting sources and a register, and is configured to receive the coded information of the rigid body transmitted by the base station, register the coded information in the register, and, after receiving the synchronization trigger signal from the base station, retrieve coded data from the coded information stored in the rigid body and distribute it to each light emitting source, so that each light emitting source controls its brightness according to the coded data; the camera is configured to expose and photograph the rigid body after receiving the synchronization trigger signal and to send the captured image data to the server, wherein the brightness changes of the light emitting sources on the rigid body are synchronized with the exposure of the camera; and the server is configured to identify the rigid body using the method of any one of claims 1 to 10.
13. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 10 when executing the computer program.
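To make claim 4 concrete, the following sketch decodes one light emitting source's per-frame coded data, using the period mean as the threshold exactly as the claim describes; the binary 0/1 labels assigned to the two comparison outcomes are an assumption of the sketch. The same scheme covers claims 5 and 6 if the associated-domain areas are substituted for the gray values.

```python
import numpy as np

def decode_bits(samples):
    """Decode per-frame coded data for one light emitting source.

    `samples` holds the source's gray value (or associated-domain
    area) for each frame of one identification period. The mean over
    the period is the threshold; frames above the threshold are read
    as 1, the rest as 0 (the 0/1 labels are illustrative).
    """
    s = np.asarray(samples, dtype=float)
    threshold = s.mean()
    return [1 if v > threshold else 0 for v in s]
```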
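Claims 1, 7, and 8 describe recovering a 16-bit code (8-bit header, 8-bit tail) from a capture window that may start anywhere in the cycle: the combined coded data is expanded by doubling, and the unique header pattern is searched for in the expanded data. A sketch of that search follows; the concrete header pattern shown is a hypothetical example, not the one used by the system.

```python
def extract_code(bits, header=(1, 0, 1, 0, 1, 0, 1, 0)):
    """Find the header in doubled coded data and read the tail after it.

    `bits` is the combined coded data of one light emitting source over
    one identification period (16 bits: an 8-bit header plus an 8-bit
    tail, per claim 7). Doubling the data end-to-end handles the case
    where the capture window cuts the code mid-cycle (claim 8).
    """
    header = list(header)
    n = len(bits)
    expanded = list(bits) + list(bits)       # claim 8: expand by doubling
    for i in range(n):                       # header starts within one cycle
        if expanded[i:i + len(header)] == header:
            tail = expanded[i + len(header):i + len(header) + 8]
            return header + tail             # coded information of the source
    return None                              # no header: not a valid source
```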
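Finally, the last step of claim 1 — combining the per-source codes of one rigid body and matching them against the preset, unique rigid-body codes — could look like the sketch below. Representing each rigid body's coded information as the set of its sources' codes is an assumption made here purely for illustration.

```python
def identify_rigid_body(source_codes, preset_codes):
    """Match one body's combined coded information against presets.

    `source_codes` is an iterable of decoded codes (bit sequences) for
    the light emitting sources grouped onto one rigid body;
    `preset_codes` maps each registered rigid body identifier to its
    unique set of source codes. Returns the matching identifier, or
    None if the body is unknown.
    """
    combined = frozenset(tuple(c) for c in source_codes)
    for body_id, codes in preset_codes.items():
        if combined == frozenset(tuple(c) for c in codes):
            return body_id
    return None
```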
CN201980004924.XA 2019-05-23 2019-05-23 Rigid body identification method, device and system and terminal equipment Active CN111213368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010672046.3A CN111757010B (en) 2019-05-23 2019-05-23 Active optical rigid body configuration method, system and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/088159 WO2020232703A1 (en) 2019-05-23 2019-05-23 Rigid body recognition method and apparatus, and system and terminal device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202010672046.3A Division CN111757010B (en) 2019-05-23 2019-05-23 Active optical rigid body configuration method, system and terminal equipment

Publications (2)

Publication Number Publication Date
CN111213368A CN111213368A (en) 2020-05-29
CN111213368B true CN111213368B (en) 2021-07-13

Family

ID=70790122

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010672046.3A Active CN111757010B (en) 2019-05-23 2019-05-23 Active optical rigid body configuration method, system and terminal equipment
CN201980004924.XA Active CN111213368B (en) 2019-05-23 2019-05-23 Rigid body identification method, device and system and terminal equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010672046.3A Active CN111757010B (en) 2019-05-23 2019-05-23 Active optical rigid body configuration method, system and terminal equipment

Country Status (2)

Country Link
CN (2) CN111757010B (en)
WO (1) WO2020232703A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914716B (en) * 2020-07-24 2023-10-20 深圳市瑞立视多媒体科技有限公司 Active light rigid body identification method, device, equipment and storage medium
CN112508992B (en) * 2020-12-11 2022-04-19 深圳市瑞立视多媒体科技有限公司 Method, device and equipment for tracking rigid body position information
CN112781589B (en) * 2021-01-05 2021-12-28 北京诺亦腾科技有限公司 Position tracking equipment and method based on optical data and inertial data

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101558427A (en) * 2007-03-06 2009-10-14 松下电器产业株式会社 Image processing apparatus and method, image processing program and image processor
CN109691232A (en) * 2016-07-21 2019-04-26 飞利浦照明控股有限公司 Lamp with encoded light function
CN109766882A (en) * 2018-12-18 2019-05-17 北京诺亦腾科技有限公司 Label identification method, the device of human body luminous point

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6112226A (en) * 1995-07-14 2000-08-29 Oracle Corporation Method and apparatus for concurrently encoding and tagging digital information for allowing non-sequential access during playback
CN102169366B (en) * 2011-03-18 2012-11-07 汤牧天 Multi-target tracking method in three-dimensional space
US8752761B2 (en) * 2012-09-21 2014-06-17 Symbol Technologies, Inc. Locationing using mobile device, camera, and a light source
US20160098609A1 (en) * 2013-05-07 2016-04-07 Koninklijke Philips N.V. A video analysis device and a method of operating a video analysis device
WO2015191605A1 (en) * 2014-06-09 2015-12-17 The Johns Hopkins University Virtual rigid body optical tracking system and method
CN104216637B (en) * 2014-09-23 2018-01-23 北京尚易德科技有限公司 A kind of method and system by identifying spot tracks control splicing large screen
US10486061B2 (en) * 2016-03-25 2019-11-26 Zero Latency Pty Ltd. Interference damping for continuous game play
CN106204744B (en) * 2016-07-01 2019-01-25 西安电子科技大学 It is the augmented reality three-dimensional registration method of marker using encoded light source
CN106254458B (en) * 2016-08-04 2019-11-15 山东大学 A kind of image processing method based on cloud robot vision, platform and system
CN108460824B (en) * 2017-02-20 2024-04-02 北京三星通信技术研究有限公司 Method, device and system for determining stereoscopic multimedia information
WO2019014861A1 (en) * 2017-07-18 2019-01-24 Hangzhou Taruo Information Technology Co., Ltd. Intelligent object tracking
CN107633528A (en) * 2017-08-22 2018-01-26 北京致臻智造科技有限公司 A kind of rigid body recognition methods and system
CN108151738B (en) * 2017-12-22 2019-07-16 北京轻威科技有限责任公司 Codified active light marked ball with attitude algorithm
CN109067403A (en) * 2018-08-02 2018-12-21 北京轻威科技有限责任公司 A kind of active light marked ball decoding method and system
CN109697422B (en) * 2018-12-19 2020-12-04 深圳市瑞立视多媒体科技有限公司 Optical motion capture method and optical motion capture camera
CN109714588A (en) * 2019-02-16 2019-05-03 深圳市未来感知科技有限公司 Multi-viewpoint stereo image positions output method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111213368A (en) 2020-05-29
CN111757010A (en) 2020-10-09
CN111757010B (en) 2021-10-22
WO2020232703A1 (en) 2020-11-26

Similar Documents

Publication Publication Date Title
CN111213368B (en) Rigid body identification method, device and system and terminal equipment
JP3779308B2 (en) Camera calibration system and three-dimensional measurement system
CN103154666B (en) Distance measurement device and environment map generation apparatus
EP2824923B1 (en) Apparatus, system and method for projecting images onto predefined portions of objects
CN103562676B (en) Method and 3D scanner of using structured lighting
CN107370951B (en) Image processing system and method
WO2019020200A1 (en) Method and apparatus for accurate real-time visible light positioning
US20190012789A1 (en) Generating a disparity map based on stereo images of a scene
CN102103696A (en) Face identification system, method and identification device with system
CN110022443A (en) Acquisition parameters adjusting method and camera terminal
CN103218596A (en) Bar-code scanner with dynamic multi-angle illuminating system and bar-code scanning method thereof
CN111213366B (en) Rigid body identification method, device and system and terminal equipment
CN107332625B (en) Positioning wireless synchronization system and positioning system
CN113037434B (en) Method and related equipment for solving synchronous communication packet loss of coding type active optical capturing system
CN111914716B (en) Active light rigid body identification method, device, equipment and storage medium
Benveniste et al. A color invariant based binary coded structured light range scanner for shiny objects
CN111931614B (en) Active light rigid body identification method, device, equipment and storage medium
CN115457154A (en) Calibration method and device of three-dimensional scanner, computer equipment and storage medium
CN116309796A (en) Optical motion capturing method, device, electronic equipment and storage medium
CN114596511A (en) Active optical rigid body identification method, device, equipment and storage medium
CN114463394A (en) Rigid body identification method, device, equipment and storage medium
CN111752386A (en) Space positioning method and system and head-mounted equipment
CN109391331B (en) Optical communication system, method and receiving device thereof
CN107305692B (en) Method and device for determining motion information of object to be detected
CN106297230B (en) Exchange method and communication equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant