WO2020232703A1 - Rigid body recognition method and apparatus, and system and terminal device - Google Patents


Info

Publication number
WO2020232703A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
rigid body
information
emitting source
image data
Prior art date
Application number
PCT/CN2019/088159
Other languages
English (en)
Chinese (zh)
Inventor
王越
许秋子
Original Assignee
深圳市瑞立视多媒体科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市瑞立视多媒体科技有限公司
Priority to CN202010672046.3A priority Critical patent/CN111757010B/zh
Priority to PCT/CN2019/088159 priority patent/WO2020232703A1/fr
Priority to CN201980004924.XA priority patent/CN111213368B/zh
Publication of WO2020232703A1 publication Critical patent/WO2020232703A1/fr


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/74Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Definitions

  • This application relates to the field of motion capture technology, and in particular to a rigid body recognition method, device, system and terminal equipment.
  • Motion capture is a technology that measures and records the motion trajectory or posture of an object in real three-dimensional space and reconstructs the state of the moving object in a virtual three-dimensional space.
  • Existing optical motion capture systems can be divided into active and passive types. The reflective optical marker points of passive rigid bodies wear easily and are constrained by heat dissipation, power supply, and the reflective optical path.
  • Because the brightness of the light reaching the camera is low, the camera's ability to filter out redundant external information is reduced, and so is its working distance.
  • In addition, passive rigid bodies must be built into different three-dimensional shapes to distinguish them from one another, which makes mass production and batch configuration difficult.
  • Some active products on the market merely replace the reflective optical marker points of the original passive rigid body with self-illuminating sources such as light-emitting diodes and eliminate the camera's own light source. Although this reduces wear on the marker points, lowers camera production cost, and extends the working distance of the motion capture camera to some extent, active rigid bodies are often much harder to manufacture than passive ones because of power-supply issues.
  • Moreover, these active products still require rigid bodies to be configured into different three-dimensional forms, which further increases the difficulty of mass production, batch configuration, and rigid body identification.
  • To address this, the present application provides a rigid body recognition method, device, system, and terminal device, solving the prior-art problem that rigid bodies used in active optical motion capture systems must be configured into different three-dimensional shapes, which makes rigid body recognition in such systems too slow.
  • The first aspect of the present application provides a rigid body recognition method, including: judging, according to multiple frames of image data from the camera, whether the image data belonging to the same light spot within one recognition period is complete, where a recognition period consists of a specified number of consecutive frames of image data; if the judgment is no, judging the light spot to be noise; if the judgment is yes, judging the light spot to be a light-emitting source on a rigid body and identifying the coded information of that source from its corresponding image data; determining, from the multi-frame image data, the light-emitting sources belonging to the same rigid body; and combining the coded information of those sources to obtain the coded information of the rigid body, which is matched against the preset, unique rigid body coded information to identify the rigid body.
  • The second aspect of the present application provides a rigid body recognition device, including:
  • a processing unit, configured to judge, according to multiple frames of image data from the camera, whether the image data belonging to the same light spot within one recognition period is complete, where a recognition period consists of a specified number of consecutive frames of image data; if the judgment is no, to judge the light spot to be noise; if the judgment is yes, to judge the light spot to be a light-emitting source on a rigid body and identify the coded information of that source from its corresponding image data;
  • a determining unit, configured to determine, according to the multi-frame image data from the camera, which of the light-emitting sources processed by the processing unit belong to the same rigid body; and
  • an identification unit, configured to combine the coded information of the light-emitting sources belonging to the same rigid body to obtain the coded information of the rigid body, and to match it against the preset rigid body coded information to identify the rigid body, where the preset coded information of each rigid body is unique.
  • The third aspect of the present application provides an active optical motion capture system, including a server, a base station, a camera, and a rigid body. The base station is configured to generate a synchronization trigger signal and send it to the rigid body and the camera.
  • The rigid body includes multiple light-emitting sources and is configured, after receiving the synchronization trigger signal, to call coded data from the coded information stored in itself and assign it to each light-emitting source, so that each light-emitting source controls its brightness according to the coded data.
  • The camera is configured to perform an exposure shot of the rigid body after receiving the synchronization trigger signal and send the resulting image data to the server.
  • The server is configured to identify the rigid body using the method of the first aspect.
  • The fourth aspect of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, it implements the rigid body recognition method of the first aspect or any of its possible implementations.
  • In summary: according to the multi-frame image data from the camera, it is judged whether the image data belonging to the same light spot within one recognition period is complete. If the judgment is no, the light spot is judged to be noise and its data is deleted; if the judgment is yes, the light spot is judged to be a light-emitting source on a rigid body, and the coded information of that source is identified from its corresponding image data. At the same time, the light-emitting sources belonging to the same rigid body are determined. Finally, the coded information of the sources belonging to one rigid body is combined into the coded information of that rigid body and matched against the preset rigid body coded information, thereby identifying the rigid body.
  • Because the entire recognition process is based on the coded information of the light-emitting sources, it is independent of the rigid body's three-dimensional form. There is therefore no need to configure the rigid bodies of an active optical motion capture system into different three-dimensional shapes, which greatly improves the system's ability to recognize rigid bodies.
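  • The recognition flow summarized above (completeness check, per-source decoding, grouping, and matching against unique preset codes) can be sketched as follows. This is an illustrative Python sketch; the function and variable names, and the small code length F = 4, are assumptions for demonstration, not the patent's implementation.

```python
# Illustrative sketch of the recognition flow (names and F are assumptions).
F = 4  # code length per light source; the patent's example uses 16

def recognize_rigid_bodies(spot_bits, source_groups, known_codes):
    """spot_bits: {mark: [bit per frame]} data accumulated per light spot.
    source_groups: {candidate: [marks]} spots grouped as one rigid body.
    known_codes: {name: code string} preset, unique rigid-body codes."""
    # Completeness check: a spot missing frames within the
    # recognition period is treated as noise and discarded.
    sources = {m: bits for m, bits in spot_bits.items() if len(bits) == F}
    results = {}
    for cand, marks in source_groups.items():
        if not all(m in sources for m in marks):
            continue  # group contains a noise spot; skip it
        # Combine the sources' codes into the rigid body's code ...
        code = "".join("".join(map(str, sources[m])) for m in marks)
        # ... and match it against the preset unique codes.
        for name, known in known_codes.items():
            if code == known:
                results[cand] = name
    return results
```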
  • FIG. 1 is a schematic diagram of a framework of an active optical motion capture system provided by this application;
  • FIG. 2 is a schematic diagram of coding information provided by this application.
  • FIG. 3 is a schematic diagram of a signal timing diagram provided by this application.
  • FIG. 4 is a schematic flowchart of a rigid body recognition method provided by this application.
  • FIG. 5 is a schematic flowchart of an embodiment of step 401 in FIG. 4;
  • FIG. 6 is a schematic flowchart of an embodiment of step 403 in FIG. 4;
  • FIG. 7 is a schematic flowchart of another embodiment of step 403 in FIG. 4;
  • FIG. 8 is a schematic diagram of a framework of a rigid body recognition device provided by this application.
  • FIG. 9 is a schematic structural diagram of an embodiment of a terminal device provided by this application.
  • Depending on the context, the term “if” can be interpreted as “when”, “once”, “in response to determining”, or “in response to detecting”.
  • Likewise, the phrase “if it is determined” or “if [the described condition or event] is detected” can be interpreted, depending on the context, as “once it is determined”, “in response to determining”, “once [the described condition or event] is detected”, or “in response to detecting [the described condition or event]”.
  • The traditional optical motion capture system uses an ultra-high-power near-infrared light source in the motion capture camera to emit infrared light that illuminates the passive marker points. The marker points, coated with highly reflective material, reflect the infrared light; this reflected light, together with ambient light carrying background information, passes through the camera's low-distortion lens and reaches its infrared narrow band-pass filter unit. Because the pass band of the filter matches the wave band of the infrared light source, the ambient light carrying redundant background information is filtered out, leaving only the infrared light carrying marker-point information to pass through and be recorded by the camera's photosensitive element.
  • the photosensitive element then converts the light signal into an image signal and outputs it to the control circuit.
  • The image processing unit in the control circuit uses an FPGA to preprocess the image signal in hardware and finally streams the 2D coordinate information of the markers to the tracking software.
  • The tracking and positioning software applies the principles of multi-view computer vision, calculating the coordinates and orientation of the point cloud in the three-dimensional capture space from the matching relationships between the two-dimensional image point clouds and the relative poses of the cameras. Based on the three-dimensional coordinates of the point cloud, the software resolves the position and orientation of each rigid body in the capture space by identifying their different structures.
  • The passive optical motion capture system described above has the following shortcomings. First, the motion capture camera requires relatively complex image processing hardware, so camera cost is high. Second, the marker points must be coated with highly reflective material, which wears easily during use and affects normal operation of the system. Third, tracking and positioning depend on the structure of the rigid body: the structural design severely limits the number of distinguishable rigid bodies, identifying and tracking a rigid body requires the camera to capture all of its marker points, and the usable environment is therefore very restrictive.
  • An active optical motion capture system consists of active light rigid bodies, active light cameras, a switch, a base station, and a server.
  • An active light rigid body is a rigid body composed of at least four LEDs that emit light themselves. The active light camera does not need to emit infrared light like a traditional optical motion capture camera; it only needs to receive the active light signals from the rigid bodies and preprocess them.
  • The switch supplies power to the active light cameras and the base station and relays camera and base station information.
  • The base station communicates with the active light rigid bodies and configures them according to that information.
  • An embodiment of the present application provides an active optical motion capture system. As shown in FIG. 1, for ease of description, only the parts related to the embodiment of the present application are shown:
  • The active optical motion capture system includes: a base station 11, an active optical rigid body 12 (hereinafter referred to as a rigid body), an active optical camera 13 (hereinafter referred to as a camera), a server 14, and a switch 15.
  • The main functions of the server 14 are as follows. First, it generates unique coded information for each rigid body 12 and sends it to each rigid body 12 through the base station 11. Second, it receives image data from the active light camera 13, obtains the coded information of the corresponding rigid body from the image data within one recognition period, and recognizes the rigid body 12 according to that coded information and the preset rigid body coded information.
  • When the server 14 generates unique coded information for each rigid body 12, it can do so according to the preset code length of a light-emitting source's coded information and the number of light-emitting sources on the rigid body.
  • The coding rules for the coded information may include any of the following: setting a frame header, a parity check code rule, or a Hamming code rule.
  • The coded information may specifically be binary coded information.
  • The coded information of a rigid body comprises the coded information of all light-emitting sources on that rigid body. For example, if a rigid body includes N light-emitting sources, its coded information includes the coded information of those N sources.
  • The coded information of one light-emitting source stores that source's coded data for one recognition period.
  • The light-emitting source controls its brightness in sequence according to the coded information, and the pattern repeats once per recognition period.
  • The principle the server 14 must observe when determining the coded information of different light-emitting sources is that the coded information must differ between sources, thereby guaranteeing the uniqueness of each rigid body's coded information.
  • The code length of a rigid body's coded information is the product of the code length of a light-emitting source's coded information and the number of light-emitting sources. For example, if the code length per source is 16 and there are 8 sources, the rigid body's code length is 128.
  • The coded information can be as shown in FIG. 2, where each row of coded data forms the coded information of one light-emitting source. It should be understood that FIG. 2 shows only one possible presentation of the coded information.
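  • The uniqueness principle and the code-length product above can be sketched as follows. The generator below is an illustrative assumption: it simply hands out distinct source codes so that no source code repeats anywhere, which is one sufficient way to guarantee unique rigid body codes; the frame-header, parity, or Hamming coding rules mentioned above are not modeled.

```python
import itertools

def generate_rigid_body_codes(num_bodies, num_sources, code_len):
    """Assign each rigid body `num_sources` unique source codes of length
    `code_len`. Because no source code is reused anywhere, every rigid
    body's concatenated code (length code_len * num_sources) is unique."""
    # Enumerate binary strings of the given length in order.
    all_codes = ("".join(bits) for bits in
                 itertools.product("01", repeat=code_len))
    bodies = []
    for _ in range(num_bodies):
        bodies.append([next(all_codes) for _ in range(num_sources)])
    return bodies
```

With the patent's example values (code length 16, 8 sources), each rigid body's concatenated code has length 16 × 8 = 128.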
  • The server 14 is also configured to receive the image data from the camera 13, obtain the coded information of the corresponding rigid body from the image data within one recognition period, and recognize the rigid body 12 according to that coded information and the preset rigid body coded information.
  • When the server 14 recognizes the rigid body 12, it can use the rigid body recognition method described in the subsequent embodiments of the present application; for that method, please refer to the detailed description below.
  • Next, the base station 11 is introduced. Its functions include generating synchronization trigger signals and transmitting information between system components. For example, the base station 11 generates a synchronization trigger signal at a predetermined interval and sends it simultaneously to the rigid body 12 and the camera 13, so that the rigid body 12 can control the brightness of its light-emitting sources according to the signal while the camera 13 captures image data of those sources according to the same signal. In a specific implementation, the base station 11 may send the synchronization trigger signal to the rigid body 12 and the camera 13 simultaneously using a wireless transmission technology such as wireless fidelity (Wi-Fi) or ZigBee.
  • When transmitting information between components, as described above, the base station 11 randomly allocates the multiple pieces of coded information received from the server 14 to the multiple rigid bodies 12; after receiving its coded information, each rigid body 12 registers it in its own register.
  • The rigid body 12 includes multiple light-emitting sources. Its functions are as follows. First, it receives its coded information from the base station 11 and registers it in its register. Second, after receiving the synchronization trigger signal from the base station 11, it periodically calls coded data from the stored coded information and assigns it to each light-emitting source, so that each source controls its brightness according to the coded data.
  • the number of rigid bodies 12 in the active optical motion capture system may be one or more, and is not limited to the three shown in FIG. 1.
  • The light-emitting source may be a light emitting diode (LED), and the coded data may be 0 or 1. That is, the embodiment of the present application uses the different brightness levels of the source to represent the corresponding coded data as 1 or 0.
  • Each time the rigid body 12 receives a synchronization trigger signal, it performs one coded data distribution: it selects, in sequence, one bit of coded data from the stored coded information of each of the N light-emitting sources and sends it to the corresponding source.
  • In each distribution, each light-emitting source receives exactly 1 bit of coded data.
  • When the rigid body 12 receives the next synchronization trigger signal, it selects the next bit from the coded information of the N sources in the register and sends it to the corresponding sources, and so on, until a full recognition cycle of light emission is completed.
  • After a recognition cycle is completed, that is, after one full round of distributing the sources' coded information, the rigid body 12 reuses that coded information and starts the coded data distribution of the next recognition cycle when the synchronization trigger signal is received again.
  • the coded information of the rigid body 12 includes the coded information of all light-emitting sources on the rigid body, and the coded information of one light-emitting source stores the coded data of one light-emitting source in one identification period.
  • A recognition period consists of a specified number of consecutive frames of image data, and that number (equivalently, the number of camera exposure shots) equals the code length F of a light-emitting source's coded information.
  • The code length of the source's coded information may be denoted F, where F is a positive integer greater than or equal to 2.
  • The smaller F is, the smaller the coding range and the fewer rigid bodies that can be identified.
  • For example, F may be chosen as 16.
  • Suppose the rigid body 12 includes 8 light-emitting sources, so its coded information comprises 8 coded subsets.
  • On the first synchronization trigger, the first bit of coded data is selected from each source's coded information, and each selected bit is sent to the corresponding source.
  • On the next trigger, the second bit is selected from each source's coded information and sent to the corresponding source, and so on, until 16 synchronization trigger receptions and 16 coded data distributions have been completed within one recognition period.
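  • The per-trigger distribution described above can be sketched as a small state machine; the class and method names below are illustrative assumptions, not the patent's implementation.

```python
class RigidBody:
    """Sketch of the coded data distribution (names assumed).
    `codes` holds one code string per light-emitting source, as registered
    from the base station; each synchronization trigger hands out the next
    bit of every source's code."""
    def __init__(self, codes):
        self.codes = codes            # e.g. 8 strings of length F = 16
        self.frame = 0                # position within the recognition period
        self.code_len = len(codes[0])

    def on_sync_trigger(self):
        # Select the next bit of every source's code; wrap at the end of a
        # recognition period so the pattern repeats in the next cycle.
        bits = [int(c[self.frame]) for c in self.codes]
        self.frame = (self.frame + 1) % self.code_len
        return bits  # bit i drives source i bright (1) or dark (0)
```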
  • Next, the camera 13 is introduced. After receiving the synchronization trigger signal from the base station 11, it performs an exposure shot of the rigid body 12 and sends the captured image data to the server 14; one camera 13 can photograph multiple rigid bodies 12. Note that the brightness changes of the light-emitting sources on the rigid body 12 are synchronized with the camera's exposures: the rigid body 12 distributes coded data each time it receives a synchronization trigger signal, and the camera 13 performs an exposure shot each time it receives one. Because the synchronization trigger signal is sent to the rigid body 12 and the camera 13 simultaneously, the camera 13 is guaranteed to capture the brightness changes of the sources.
  • For example, as shown in FIG. 3, the rigid body performs one coded data distribution on each rising edge of the synchronization trigger signal, and the camera 13 performs one exposure shot on each rising edge. This ensures that the camera captures the brightness changes of light sources such as LEDs.
  • The image data the camera 13 sends to the server 14 may include: the 2D coordinate value, gray value, and associated domain area of each light-emitting source. Before sending the image data, therefore, the camera 13 determines these three quantities for each light-emitting source in each frame of exposure-shot image data and sends them to the server 14.
  • Specifically, immediately after each exposure shot, the camera 13 determines the 2D coordinate value, gray value, and associated domain area of each light-emitting source in that frame and sends them to the server 14 at once.
  • In one recognition period, the camera 13 performs F exposure shots in response to the F synchronization trigger signals it receives and sends F frames of image data to the server 14.
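  • The three per-spot quantities streamed by the camera can be modeled as a small record. The record below and the fixed gray threshold used to decode brightness into a coded bit are illustrative assumptions; the patent only specifies that different brightness levels represent 1 and 0.

```python
from dataclasses import dataclass

@dataclass
class SpotObservation:
    """One light spot in one exposure shot, as streamed to the server.
    Field names are illustrative; the patent specifies the three fields."""
    x: float    # 2D image coordinate
    y: float
    gray: int   # gray value of the spot
    area: int   # associated domain area, in pixels

GRAY_THRESHOLD = 128  # assumed threshold separating bright (1) from dark (0)

def spot_to_bit(obs: SpotObservation) -> int:
    """Decode a spot's brightness into one bit of coded data (assumption:
    a simple global threshold; a real system may calibrate per camera)."""
    return 1 if obs.gray >= GRAY_THRESHOLD else 0
```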
  • The switch 15 exchanges data between the server 14 and the base station 11, and between the base station 11 and the camera 13.
  • For example, the unique coded information generated by the server 14 can be sent to the base station 11 through the switch 15.
  • The switch 15 can also receive the synchronization trigger signal sent by the base station 11 and forward it to the camera 13.
  • In operation, the rigid body recognition system of this embodiment can be roughly divided into two stages, namely the configuration stage and the operation-and-recognition stage, described in detail below.
  • In the configuration stage, the server 14 generates unique coded information for each rigid body 12 and then sends the preset coded information of the multiple rigid bodies to each rigid body 12 through the switch 15. After receiving its coded information, each rigid body 12 registers it in its own register.
  • In the operation-and-recognition stage, the base station 11 broadcasts a synchronization trigger signal to all rigid bodies 12 via wireless transmission technology (such as Wi-Fi or ZigBee).
  • After receiving the signal, the rigid body 12 calls one bit of coded data, in sequence, from the coded information of each light-emitting source in its register and assigns it to the corresponding source (if there are N sources, N bits are called in total, each source receiving 1 bit).
  • Each light-emitting source then controls its own luminous intensity according to the coded data; that is, the embodiment displays the corresponding code, 1 or 0, through the brightness or darkness of the source.
  • At the same time, the same synchronization trigger signal is sent from the base station 11 to the switch 15 and relayed to the camera 13.
  • Each time the camera 13 receives a synchronization trigger signal, it performs one exposure shot. In other words, the rigid body distributes coded data once per trigger, and the camera exposes once per trigger.
  • After each exposure shot, the camera 13 must immediately send the resulting image data (including the 2D coordinate value, gray value, and associated domain area of each light-emitting source) to the server 14.
  • After receiving the multi-frame image data from the camera, the server 14 processes it per recognition cycle to identify the rigid body 12.
  • Because the active light camera no longer depends on an ultra-high-power near-infrared light source and only needs to receive the infrared light signals emitted by the active light rigid body, its device structure is much simpler. The lower development cost reduces user investment and solves the high-cost problem of traditional optical cameras.
  • The active light rigid body no longer relies on highly reflective materials; it is composed of at least four LEDs safely protected inside the rigid body shell, which makes the active light rigid body safer and more stable in use and solves the wear problem of traditional optical rigid bodies.
  • The active optical motion capture system therefore not only simplifies the complex device structure of the traditional optical motion capture camera and lowers camera cost, but also yields rigid bodies that are hard to wear out or destroy, greatly improving their sustained usability.
  • Most importantly, tracking and positioning in the active optical motion capture system is based on the coding state of the rigid body rather than its structure. This allows a uniform rigid body structure, which also greatly improves aesthetics, while the diversity of coding states multiplies the number of identifiable rigid bodies.
  • The rigid body recognition method of the embodiment of the present application is applicable to rigid body recognition in the active optical motion capture system described above, and may specifically include:
  • Step 401: Judge, according to multiple frames of image data from the camera, whether the image data belonging to the same light spot within one recognition period is complete; a recognition period consists of a specified number of consecutive frames of image data.
  • The execution subject of the rigid body recognition method of the present application may be a server.
  • After the camera performs an exposure shot, it sends the resulting image data to the server, so that the server can perform rigid body recognition based on the multiple frames of image data from the camera.
  • the image data transmitted by the camera includes: the 2D coordinate value of the light spot, the gray value of the light spot, and the associated domain area of the light spot.
  • A recognition cycle refers to the number of consecutive frames of image data the server requires to perform one rigid body recognition.
  • The specified number of consecutive frames is tied to the coded information of a light-emitting source: it equals the code length of that coded information. In other words, when the server generates the coded information of the light-emitting sources, it thereby determines the specified number of frames of image data per recognition period.
  • When the camera performs an exposure shot, it may capture the light-emitting sources on a rigid body, but it may also capture other reflective points in the capture field. The light-spot image data from the camera may therefore belong either to a light-emitting source or to some other reflective point, so the server must determine whether each light spot's data comes from a stray reflective point in the field or from a light-emitting source (an LED) on a rigid body.
  • When the server implements step 401, as shown in FIG. 5, the process may include:
  • Step 501: Generate marking information for light points belonging to the same light spot, according to the 2D coordinate values of the light points contained in the multiple frames of image data from the camera.
  • When the server receives image data from the camera for the first time, i.e., the first frame, it can assign marking information to each light point according to the 2D coordinate information contained in that frame. When image data is received again, the server matches the 2D coordinate information of all light points in the newly received frame against that of the light points in the stored frames. If two light points match, they are judged to belong to the same light spot and are given the same marking information; if not, they are given different marking information.
  • For example, suppose the server receives image data for the first time containing light spots T1 and T2; it assigns T1 and T2 different marking information K1 and K2, and stores each spot's 2D coordinate value, gray value, and associated-domain area under its marking information. When the server subsequently receives new image data, it matches the 2D coordinate information of all light spots in the new data (such as T3, T4) against the 2D coordinates of the stored light spots (such as T1, T2) according to their distance relationship. If the distance between T3 and T1 satisfies the preset matching condition, the two light spots (T3, T1) are considered to belong to the same light spot, and the new spot T3 is given the same marking information as the matched spot T1, namely its old mark K1. If the distance relationship between T2 and T4 does not satisfy the preset matching condition, the two spots are considered not to match (T4 and T2 are not the same spot), so T4 is given a new mark K3, and its 2D coordinates, gray value, and associated-domain area are stored under that mark; the process then repeats.
  • In this way, marking information belonging to the same light spot is generated, and the image data corresponding to the same light spot at different times is stored according to that marking information.
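  • The marking procedure in step 501 can be sketched as a simple nearest-neighbour tracker. The distance threshold, data layout, and function name below are illustrative assumptions; the patent does not specify the exact matching condition.

```python
import itertools
import math

MATCH_DIST = 5.0  # illustrative pixel threshold for the "preset matching condition"
_next_mark = itertools.count(1)

def update_marks(tracks, frame_spots):
    """tracks: {mark: [(x, y, gray, area), ...]} history stored per mark.
    frame_spots: [(x, y, gray, area), ...] detected in the new frame.
    Each new spot inherits the mark of the nearest stored spot within
    MATCH_DIST; otherwise it receives a fresh mark (K1, K2, ...)."""
    for spot in frame_spots:
        x, y = spot[0], spot[1]
        best_mark, best_d = None, MATCH_DIST
        for mark, history in tracks.items():
            lx, ly = history[-1][0], history[-1][1]
            d = math.hypot(x - lx, y - ly)
            if d < best_d:
                best_mark, best_d = mark, d
        if best_mark is None:               # no match: treat as a new light spot
            best_mark = "K%d" % next(_next_mark)
            tracks[best_mark] = []
        tracks[best_mark].append(spot)      # store 2D coords, gray value, area
    return tracks
```

With two spots in the first frame and one nearby plus one distant spot in the second, this reproduces the K1/K2/K3 behaviour of the example above.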
  • Step 502 Determine whether the number of frames of image data corresponding to the marking information belonging to the same light spot in a recognition period reaches a specified number of frames.
  • Step 503: if it is reached, it is determined that the image data belonging to the same light spot in the recognition period is complete.
  • Step 504: if it is not reached, it is determined that the image data belonging to the same light spot in the recognition period is incomplete.
  • When the number of image data frames received by the server reaches the specified number of frames in a recognition period, the server also determines, for each piece of marking information, whether the number of frames of image data stored under it reaches the specified number. For example, if the specified number of frames is 16, then once the server has received 16 frames, the 2D coordinates, gray values, and associated-domain areas stored under the same mark are aggregated to check whether the number of frames corresponding to the marking information generated in step 501 reaches 16. If it reaches 16 frames, the image data belonging to that light spot in the recognition period is considered complete; if not, the image data is considered incomplete, and step 402 is entered.
  • Step 402: Determine that the light spot is noise, and delete the image data corresponding to the light spot.
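  • The completeness check of steps 502–504 and the noise deletion of step 402 amount to counting the frames stored under each mark. A minimal sketch, assuming per-mark frame histories kept in a dictionary (the function name is illustrative):

```python
def classify_spots(tracks, specified_frames=16):
    """Marks whose stored frame count reaches the specified number of
    frames are kept as light-emitting sources (step 503); the rest are
    treated as noise and their image data is deleted (steps 504, 402)."""
    sources = {m: h for m, h in tracks.items() if len(h) >= specified_frames}
    noise = [m for m in tracks if len(tracks[m]) < specified_frames]
    for m in noise:
        del tracks[m]                  # delete the noise spot's data
    return sources
```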
  • Step 403 Determine that the light spot is a light emitting source on a rigid body, and identify the code information of the light emitting source according to the image data corresponding to the light emitting source.
  • When performing step 403, that is, when identifying the code information of the light-emitting source according to the image data corresponding to it, three methods are available; they are described separately below:
  • the first method is to identify the code information of the light emitting source according to the gray value of the light emitting source, including the following steps:
  • Step 601 Calculate the average value of the gray value of the light-emitting source in the recognition period, and use the average value as the gray value threshold of the light-emitting source in the recognition period.
  • Step 602 Compare the gray value of each frame of the light-emitting source in the identification period with the gray value threshold, and assign different coded data according to the comparison result.
  • In a specific implementation, take for example 16 frames of image data in a recognition period: first calculate the average gray value of each light-emitting source across those frames, and use that average as the source's gray value threshold for the recognition period. Then compare the gray value of each frame of the light-emitting source with the average: a value greater than the average is recorded as 1 and the LED is considered to be on; a value less than or equal to the average is recorded as 0 and the LED is considered to be off. This yields the coded data corresponding to the 16 frames of the light source in the identification period. It is worth noting that the LED's "off" state is not completely dark; its brightness is merely significantly lower than in the "on" state.
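  • Steps 601–602 reduce to thresholding each source's per-frame gray values at their own mean. A minimal sketch (the function name is illustrative):

```python
def decode_bits(gray_values):
    """Turn one source's per-frame gray values over a recognition period
    into coded data: frames brighter than the period mean are 1 (LED on),
    frames at or below the mean are 0 (LED dimmed, not fully off)."""
    threshold = sum(gray_values) / len(gray_values)          # step 601
    return [1 if g > threshold else 0 for g in gray_values]  # step 602
```

The same function applies unchanged to the second method, with associated-domain areas substituted for gray values.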
  • Step 603 Identify the coded information of the light-emitting source according to the coded data of the light-emitting source in the recognition period.
  • the second method is to identify the code information of the light emitting source according to the area of the associated domain of the light emitting source, including the following steps:
  • Step 701 Calculate the average value of the associated domain area of the light-emitting source in the recognition period, and use the average value as the associated domain area threshold value of the light-emitting source in the recognition period.
  • Step 702 Compare the associated domain area of each frame of the light-emitting source in the identification period with the associated domain area threshold, and assign different coded data according to the comparison result.
  • In a specific implementation, take for example 16 frames of image data in a recognition period: first calculate the average associated-domain area of each light-emitting source across those frames, and use that average as the source's associated-domain area threshold for the recognition period. Then compare the associated-domain area of each frame of the light-emitting source with the average: a value greater than the average is recorded as 1 and the LED is considered to be on; a value less than or equal to the average is recorded as 0 and the LED is considered to be off. This yields the coded data corresponding to the 16 frames of the light source in the identification period. As above, the LED's "off" state is not completely dark, only significantly dimmer than the "on" state.
  • Step 703 Identify the encoded information of the light emitting source according to the encoded data of the light emitting source in the identifying period.
  • The first method is preferred when identifying the code information of the light-emitting source from its image data. If the coded information cannot be identified from the gray value of the source, it can instead be identified from the area of the source's associated domain; combining the two in this way is the third method of the present application.
  • the coded information of the light-emitting source can be identified according to the calculated coded data.
  • In practice, the currently collected 16 frames of image data do not always start at the starting frame of the active light rigid body's encoding, and it cannot be completely ruled out that the 16 frames contain occasional error data. Therefore, to identify the encoding information of the light source, the starting frame of the 16-frame data must first be found.
  • the generated code information includes: header information and footer information different from the header information.
  • the code length of the coded information of a light-emitting source is 16 (that is, the specified number of frames of image data in a recognition period is 16), that is, the on-off state of the light-emitting source (LED) cycles once every 16 frames.
  • The on/off states over 16 frames are recorded as the encoding information of a light source: the first 8 frames of the 16 are called the header, and the last 8 frames are called the tail. The on/off pattern of the tail is different for, and unique to, each LED, so these 8 frames determine each LED light's unique encoding information. The first 8 frames of the header merely help the server locate the position of the tail state information, so the header's on/off pattern may be the same across sources or different.
  • In this embodiment, the headers are all the same, namely 01111110; that is, the header of every LED light on every rigid body follows the same off/on pattern, while the 8-frame on/off pattern of the tail is designed according to a Hamming code. For example, if the tail code of an LED light is 11100001, that is, on, on, on, off, off, off, off, on, then the 16-frame encoding information of this LED is 0111111011100001, i.e. its on/off state is off, on, on, on, on, on, on, off, on, on, on, off, off, off, off, on, looping continuously.
  • The benefit of this design is twofold: the header never coincides with the tail, so the server can cleanly separate header and tail; and configuring the tail as a Hamming code facilitates error correction, which improves the identifiability of rigid bodies to a certain extent.
  • Specifically, the coded data of each light-emitting source in a recognition period is first concatenated to obtain combined coded data. The combined coded data is then expanded, and the preset header information is searched for in the expanded data; the encoding scheme described above guarantees that the header information is unique within the expanded coded data. Finally, within the expanded coded data, the header information and the footer information following it are combined to obtain the encoding information of the light-emitting source in the identification period.
  • The combined coded data is preferably expanded by doubling, which allows the header and footer information to be found quickly.
  • For example, the 16 frames of coded data of a given luminous source in one recognition period are expanded into 32 frames; within these 32 frames, the header position can be found quickly by checking each 8-frame window from the beginning. Once the header is found, the starting frame is known, and the 8 frames after the header are the tail data needed. One advantage of this method is that finding the header and tail is very fast; another is that if the 16 data frames contain error data, they can be quickly discarded when no header is found.
  • Once the header information and footer information are found, they are combined to obtain the code information of the luminous source in the identification period.
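  • The doubled-data header search described above can be sketched as follows; the 8-bit header 01111110 is taken from the text, while the function name is illustrative:

```python
HEADER = [0, 1, 1, 1, 1, 1, 1, 0]  # fixed 8-frame header from the text

def extract_code(bits16):
    """Given 16 decoded bits whose starting frame is unknown, duplicate
    them into 32 frames, locate the header, and return the 16-bit
    header+tail code starting from the header. Returns None when no
    header is found (the 16 frames then contain error data and are
    discarded)."""
    doubled = bits16 + bits16                  # expand 16 frames to 32
    for start in range(16):                    # every possible start offset
        if doubled[start:start + 8] == HEADER:
            return doubled[start:start + 16]   # header + 8-frame tail
    return None
```

Any cyclic rotation of a valid code is recovered unchanged, and all-noise input yields None.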
  • After step 403, step 404 is entered.
  • Step 404 Determine the light source belonging to the same rigid body according to the multiple frames of image data from the camera;
  • Step 405 Combine the code information of the light emitting sources belonging to the same rigid body to obtain the code information of the rigid body, and match the code information of the rigid body with the preset code information of the rigid body to identify the rigid body.
  • Specifically, the relative distance relationship between light-emitting sources can be calculated from their 2D coordinate values; then, according to the number of light-emitting sources set on a rigid body, the sources with the smallest relative distances to one another are determined to belong to the same rigid body.
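  • One simple way to realize this grouping is a greedy nearest-neighbour pass; the strategy and names below are illustrative assumptions, since the text only states that the sources with the smallest relative distances belong together:

```python
import math

def group_rigid_bodies(points, leds_per_body):
    """Greedy grouping sketch: repeatedly take an unassigned source and
    the (leds_per_body - 1) unassigned sources nearest to it, treating
    them as one rigid body. Assumes rigid bodies sit farther apart than
    the LED spacing within a single body."""
    remaining = list(range(len(points)))
    bodies = []
    while len(remaining) >= leds_per_body:
        ax, ay = points[remaining[0]]
        remaining.sort(key=lambda i: math.hypot(points[i][0] - ax,
                                                points[i][1] - ay))
        body, remaining = remaining[:leds_per_body], remaining[leds_per_body:]
        bodies.append(sorted(body))
    return bodies
```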
  • In step 405, when matching the obtained encoding information of a rigid body against the preset encoding information, all 8 LED lights on a rigid body can be identified in the ideal case. In practice, however, the rigid body is inevitably occluded at times or affected by other uncontrollable factors during use, so this ideal state can be difficult to achieve. Even so, identifying 4 LED lights is enough both to recognize an active light rigid body and to calculate its posture information.
  • The method for solving the rigid body's posture information is as follows: since the mark of each LED light on the active light rigid body is known, the matching relationship between the rigid body's three-dimensional coordinates and the two-dimensional coordinates of the LED marker points is obtained directly, and the posture is then calculated by gradient descent.
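  • The gradient-descent solve can be sketched on a simplified model. The sketch below optimises only the translation under a unit-focal-length pinhole camera with identity rotation; a real solver would optimise the full 6-DOF pose, and all names and parameters here are illustrative assumptions:

```python
def refine_translation(pts3d, pts2d, t0, steps=5000, lr=1.0):
    """Given known 2D-3D matches (from the LED marks), minimise the
    pinhole reprojection error over the translation t by gradient
    descent with central-difference gradients."""
    t = list(t0)

    def residual(t):
        err = 0.0
        for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
            x, y, z = X + t[0], Y + t[1], Z + t[2]   # translate into camera frame
            err += (x / z - u) ** 2 + (y / z - v) ** 2  # squared reprojection error
        return err

    for _ in range(steps):
        for k in range(3):                           # numerical gradient per axis
            tp = list(t); tp[k] += 1e-6
            tm = list(t); tm[k] -= 1e-6
            g = (residual(tp) - residual(tm)) / 2e-6
            t[k] -= lr * g
    return t
```

Starting near the true pose, the residual is driven to zero and the translation is recovered.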
  • The rigid body recognition method of this embodiment is applied to an active optical motion capture system. Because the active optical rigid body carries coded information, rigid body recognition no longer depends on the rigid body structure: the matching relationship between 2D and 3D coordinates is obtained directly from the coded information, making the posture calculation of the rigid body faster and more accurate.
  • FIG. 8 is a schematic diagram of a rigid body identification device provided by an embodiment of the present application. As shown in Fig. 8, the rigid body identification device 8 includes:
  • The processing unit 81 is used for judging, according to the multi-frame image data from the camera, whether the image data belonging to the same light spot in a recognition period is complete, where a recognition period includes continuous specified frames of image data; if the judgment result is no, the light spot is judged to be noise; if the judgment result is yes, the light spot is judged to be a light-emitting source on a rigid body, and the code information of the light-emitting source is identified according to the image data corresponding to it;
  • the determining unit 82 is configured to determine the light emitting sources belonging to the same rigid body among the light emitting sources processed by the processing unit according to the multi-frame image data from the camera;
  • the identification unit 83 is configured to combine the code information of the light emitting source belonging to the same rigid body obtained by the processing unit and determined by the determining unit to obtain the code information of the rigid body, and combine the code information of the rigid body with The coding information of the preset rigid body is matched to identify the rigid body; wherein the coding information of the preset rigid body is unique.
  • the rigid body recognition device 8 When the rigid body recognition device 8 performs rigid body recognition, it specifically adopts the rigid body recognition method disclosed in the second embodiment above, which will not be repeated here.
  • the rigid body recognition device 8 of this embodiment can be a server of an active optical motion capture system. Since the active optical rigid body has coded information, when performing rigid body recognition, it is no longer dependent on the rigid body structure, but can be directly based on the coded information. The matching relationship between 2D coordinates and 3D coordinates is obtained, and the posture calculation of the rigid body is faster and more accurate.
  • FIG. 9 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 9 of this embodiment includes a processor 90, a memory 91, and a computer program 92 stored in the memory 91 and running on the processor 90, such as a rigid body recognition program.
  • the processor 90 executes the computer program 92, the steps in the above embodiments of the rigid body recognition method are implemented, for example, steps 401 to 405 shown in FIG. 4.
  • Alternatively, when the processor 90 executes the computer program 92, the functions of the modules/units in the foregoing device embodiments, such as the functions of units 81 to 83 shown in FIG. 8, are realized.
  • the computer program 92 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 91 and executed by the processor 90 to complete This application.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 92 in the terminal device 9.
  • For example, the computer program 92 can be divided into a processing unit, a determining unit, and an identification unit, whose specific functions are as follows:
  • The processing unit 81 is used to determine, based on multiple frames of image data from the camera, whether the image data belonging to the same light spot in a recognition period is complete; one recognition period includes continuous designated frames of image data; if the judgment result is no, the light spot is judged to be noise; if the judgment result is yes, the light spot is judged to be a light-emitting source on a rigid body, and the code information of the light-emitting source is identified according to the image data corresponding to it. The determining unit 82 is configured to determine, according to the multi-frame image data from the camera, which of the light-emitting sources processed by the processing unit belong to the same rigid body. The identification unit 83 is used to combine the encoding information of the light-emitting sources belonging to the same rigid body, obtained by the processing unit and determined by the determining unit, to obtain the encoding information of the rigid body, and to match the encoding information of the rigid body with the encoding information of the preset rigid body to identify the rigid body; wherein the encoding information of the preset rigid body is unique.
  • the terminal device 9 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 90 and a memory 91.
  • FIG. 9 is only an example of the terminal device 9 and does not constitute a limitation on the terminal device 9; it may include more or fewer components than shown in the figure, combine certain components, or use different components.
  • the terminal device may also include input and output devices, network access devices, buses, etc.
  • the so-called processor 90 may be a central processing unit (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or memory of the terminal device 9.
  • The memory 91 may also be an external storage device of the terminal device 9, such as a plug-in hard disk equipped on the terminal device 9, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), etc.
  • Further, the memory 91 may also include both an internal storage unit of the terminal device 9 and an external storage device.
  • the memory 91 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 91 can also be used to temporarily store data that has been output or will be output.
  • the disclosed apparatus/terminal device and method may be implemented in other ways.
  • the device/terminal device embodiments described above are only illustrative.
  • The division of the modules or units is only a logical function division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • This application implements all or part of the processes in the above embodiment methods, which can also be completed by a computer program instructing the relevant hardware.
  • the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, U disk, mobile hard disk, magnetic disk, optical disk, computer memory, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), electrical carrier signals, telecommunications signals, and software distribution media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to a rigid body recognition method and apparatus, and a terminal device and system. The rigid body recognition method comprises: determining, according to multiple frames of image data from a camera, whether the image data belonging to the same light spot within a recognition period is complete, the recognition period comprising continuous designated frames of image data; if the determination result is negative, determining the light spot to be noise; if the determination result is positive, determining the light spot to be a light-emitting source on a rigid body, and recognizing coding information of the light-emitting source according to the image data corresponding to the light-emitting source; determining, according to the multiple frames of image data from the camera, light-emitting sources belonging to the same rigid body; and combining the coding information of the light-emitting sources belonging to the same rigid body to obtain coding information of the rigid body, and matching the coding information of the rigid body with coding information of a preset rigid body so as to recognize the rigid body, the coding information of the preset rigid body being unique.
PCT/CN2019/088159 2019-05-23 2019-05-23 Procédé et appareil de reconnaissance de corps rigide, et système et dispositif terminal WO2020232703A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010672046.3A CN111757010B (zh) 2019-05-23 2019-05-23 主动光刚体配置方法、系统及终端设备
PCT/CN2019/088159 WO2020232703A1 (fr) 2019-05-23 2019-05-23 Procédé et appareil de reconnaissance de corps rigide, et système et dispositif terminal
CN201980004924.XA CN111213368B (zh) 2019-05-23 2019-05-23 刚体识别方法、装置、系统及终端设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/088159 WO2020232703A1 (fr) 2019-05-23 2019-05-23 Procédé et appareil de reconnaissance de corps rigide, et système et dispositif terminal

Publications (1)

Publication Number Publication Date
WO2020232703A1 true WO2020232703A1 (fr) 2020-11-26

Family

ID=70790122

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088159 WO2020232703A1 (fr) 2019-05-23 2019-05-23 Procédé et appareil de reconnaissance de corps rigide, et système et dispositif terminal

Country Status (2)

Country Link
CN (2) CN111213368B (fr)
WO (1) WO2020232703A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508992A (zh) * 2020-12-11 2021-03-16 深圳市瑞立视多媒体科技有限公司 基于1€滤波追踪刚体位置信息的方法及其装置、设备

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914716B (zh) * 2020-07-24 2023-10-20 深圳市瑞立视多媒体科技有限公司 主动光刚体识别方法、装置、设备及存储介质
CN112781589B (zh) * 2021-01-05 2021-12-28 北京诺亦腾科技有限公司 一种基于光学数据和惯性数据的位置追踪设备及方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014180695A1 (fr) * 2013-05-07 2014-11-13 Koninklijke Philips N.V. Dispositif d'analyse vidéo et procédé d'exploitation de ce dispositif
CN104216637A (zh) * 2014-09-23 2014-12-17 北京尚易德科技有限公司 一种通过识别光斑轨迹控制拼接大屏幕的方法和系统
CN108460824A (zh) * 2017-02-20 2018-08-28 北京三星通信技术研究有限公司 立体多媒体信息的确定方法、装置及系统
CN109691232A (zh) * 2016-07-21 2019-04-26 飞利浦照明控股有限公司 具有编码光功能的灯
CN109766882A (zh) * 2018-12-18 2019-05-17 北京诺亦腾科技有限公司 人体光点的标签识别方法、装置

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6112226A (en) * 1995-07-14 2000-08-29 Oracle Corporation Method and apparatus for concurrently encoding and tagging digital information for allowing non-sequential access during playback
CN101558427B (zh) * 2007-03-06 2012-03-21 松下电器产业株式会社 图像处理装置以及方法
CN102169366B (zh) * 2011-03-18 2012-11-07 汤牧天 三维立体空间中的多目标跟踪方法
US8752761B2 (en) * 2012-09-21 2014-06-17 Symbol Technologies, Inc. Locationing using mobile device, camera, and a light source
WO2015191605A1 (fr) * 2014-06-09 2015-12-17 The Johns Hopkins University Système et procédé de suivi optique de corps rigide virtuel
US10486061B2 (en) * 2016-03-25 2019-11-26 Zero Latency Pty Ltd. Interference damping for continuous game play
CN106204744B (zh) * 2016-07-01 2019-01-25 西安电子科技大学 利用编码光源为标志物的增强现实三维注册方法
CN106254458B (zh) * 2016-08-04 2019-11-15 山东大学 一种基于云机器人视觉的图像处理方法、平台及系统
JP6856914B2 (ja) * 2017-07-18 2021-04-14 ハンジョウ タロ ポジショニング テクノロジー カンパニー リミテッドHangzhou Taro Positioning Technology Co.,Ltd. インテリジェントな物体追跡
CN107633528A (zh) * 2017-08-22 2018-01-26 北京致臻智造科技有限公司 一种刚体识别方法及系统
CN108151738B (zh) * 2017-12-22 2019-07-16 北京轻威科技有限责任公司 带姿态解算的可编码主动光标识球
CN109067403A (zh) * 2018-08-02 2018-12-21 北京轻威科技有限责任公司 一种主动光标识球编解码方法及系统
CN109697422B (zh) * 2018-12-19 2020-12-04 深圳市瑞立视多媒体科技有限公司 光学动作捕捉方法及光学动作捕捉相机
CN109714588A (zh) * 2019-02-16 2019-05-03 深圳市未来感知科技有限公司 多视点立体图像定位输出方法、装置、设备以及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014180695A1 (fr) * 2013-05-07 2014-11-13 Koninklijke Philips N.V. Dispositif d'analyse vidéo et procédé d'exploitation de ce dispositif
CN104216637A (zh) * 2014-09-23 2014-12-17 北京尚易德科技有限公司 一种通过识别光斑轨迹控制拼接大屏幕的方法和系统
CN109691232A (zh) * 2016-07-21 2019-04-26 飞利浦照明控股有限公司 具有编码光功能的灯
CN108460824A (zh) * 2017-02-20 2018-08-28 北京三星通信技术研究有限公司 立体多媒体信息的确定方法、装置及系统
CN109766882A (zh) * 2018-12-18 2019-05-17 北京诺亦腾科技有限公司 人体光点的标签识别方法、装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508992A (zh) * 2020-12-11 2021-03-16 深圳市瑞立视多媒体科技有限公司 基于1€滤波追踪刚体位置信息的方法及其装置、设备
CN112508992B (zh) * 2020-12-11 2022-04-19 深圳市瑞立视多媒体科技有限公司 一种追踪刚体位置信息的方法及其装置、设备

Also Published As

Publication number Publication date
CN111213368B (zh) 2021-07-13
CN111757010B (zh) 2021-10-22
CN111757010A (zh) 2020-10-09
CN111213368A (zh) 2020-05-29

Similar Documents

Publication Publication Date Title
WO2020232703A1 (fr) Procédé et appareil de reconnaissance de corps rigide, et système et dispositif terminal
US9582888B2 (en) Structured light three-dimensional (3D) depth map based on content filtering
CN106127172B (zh) 一种非接触3d指纹采集的装置及方法
JP3779308B2 (ja) カメラ校正システム及び三次元計測システム
US7505607B2 (en) Identifying objects tracked in images using active device
KR20080012270A (ko) 촬상 장치들을 위치추적하는 시스템 및 방법
CN205448962U (zh) 一种具有三维扫描功能的移动终端
WO2019037105A1 (fr) Procédé de commande de puissance, module de télémétrie et dispositif électronique
WO2020232704A1 (fr) Procédé et appareil d'identification de corps rigide, système, et dispositif terminal
CN113037434B (zh) 解决编码式主动光动捕系统同步通讯丢包方法及相关设备
CN115604575A (zh) 图像采集设备及图像采集方法
CN111914716B (zh) 主动光刚体识别方法、装置、设备及存储介质
CN107222260A (zh) 一种基于变数据区长度的可见光通信编码扩码方法
CN111931614B (zh) 主动光刚体识别方法、装置、设备及存储介质
CN115457154A (zh) 三维扫描仪的标定方法、装置、计算机设备和存储介质
US9305200B2 (en) Information acquisition apparatus, information acquisition method, and non-transitory recording medium
CN108966342B (zh) 一种vr定位的方法、装置及系统
CN109391331B (zh) 光通讯系统及其方法与接收装置
WO2021258294A1 (fr) Dispositif électronique, procédé et appareil de déverrouillage, et support d'enregistrement
Chang et al. Greendicator: enabling optical pulse-encoded data output from WSN for display on smartphones
CN114596511A (zh) 主动光刚体识别方法、装置、设备及存储介质
CN108449138A (zh) 一种用于可见光通信的m序列视觉检测方法及其系统
US10475194B2 (en) Method, device, and non-transitory computer readable storage medium for object tracking
CN117058766B (zh) 一种基于主动光频闪的动作捕捉系统和方法
CN207926593U (zh) 一种用于可见光通信的m序列视觉检测系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19929653

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19929653

Country of ref document: EP

Kind code of ref document: A1