CN116580444A - Method and equipment for testing long-distance running timing based on multi-antenna radio frequency identification technology - Google Patents


Info

Publication number
CN116580444A
Authority
CN
China
Prior art keywords
radio frequency, face, attention, feature map, antenna radio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310861100.2A
Other languages
Chinese (zh)
Inventor
周茂林
李杨杨
徐志麟
王浩文
陈秋华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Leti Technology Co ltd
Original Assignee
Guangzhou Silinger Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Silinger Technology Co ltd filed Critical Guangzhou Silinger Technology Co ltd
Priority application: CN202310861100.2A
Publication: CN116580444A
Legal status: Pending

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V10/00 Arrangements for image or video recognition or understanding
                    • G06V10/40 Extraction of image or video features
                        • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
                    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
                        • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
                        • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
                            • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
                        • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
                • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
                            • G06V40/161 Detection; Localisation; Normalisation
                                • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
                            • G06V40/168 Feature extraction; Face representation
                            • G06V40/172 Classification, e.g. identification
            • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
                • G06K17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
                    • G06K17/0022 Arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
                        • G06K17/0029 The arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/04 Architecture, e.g. interconnection topology
                            • G06N3/0464 Convolutional networks [CNN, ConvNet]
                            • G06N3/048 Activation functions
                        • G06N3/08 Learning methods
                            • G06N3/084 Backpropagation, e.g. using gradient descent
    • A HUMAN NECESSITIES
        • A63 SPORTS; GAMES; AMUSEMENTS
            • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
                • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
                    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
                        • A63B71/0686 Timers, rhythm indicators or pacing apparatus using electric or electronic means
                • A63B2220/00 Measuring of physical parameters relating to sporting activity
                    • A63B2220/80 Special sensors, transducers or devices therefor
                        • A63B2220/806 Video cameras
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
                • Y02D30/00 Reducing energy consumption in communication networks
                    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Image Analysis (AREA)

Abstract

A method and apparatus for long-distance running timing based on multi-antenna radio frequency identification (RFID) technology is disclosed. First, a first camera is set up at the finish to shoot the finish-line position, and an edge computing device is set up at the finish; the edge computing device includes a multi-antenna radio frequency module for detecting an RFID bracelet worn by each athlete. A second camera is then set up at the start to acquire athletes' face images and transmit them to the edge computing device, which performs face recognition based on the face images. The event is started with a click on the edge computing device, which simultaneously starts the multi-antenna radio frequency module. Each time an athlete passes the finish, detection of the bracelet by the multi-antenna radio frequency module is recorded as one line crossing; finally, the athlete's score is saved. In this way, the participants' results can be timed accurately and in a timely manner.

Description

Method and equipment for testing long-distance running timing based on multi-antenna radio frequency identification technology
Technical Field
The present disclosure relates to the field of long-distance running timing, and more particularly, to a method and apparatus for testing long-distance running timing based on multi-antenna radio frequency identification technology.
Background
Long-distance running is a common sport that requires accurate and timely timing of the participants' performance. The traditional timing method is mainly manual, and suffers from large errors and susceptibility to interference when there are many participants.
Thus, an optimized long-distance running timing test scheme is desired.
Disclosure of Invention
In view of this, the present disclosure provides a method and apparatus for testing long-distance running timing based on multi-antenna radio frequency identification technology, which can time the participants' performance accurately and in a timely manner.
According to an aspect of the present disclosure, there is provided a method for testing long-distance running timing based on multi-antenna radio frequency identification technology, including: setting up a first camera at the finish, the first camera being used to shoot the finish-line position; setting up an edge computing device at the finish, the edge computing device including a multi-antenna radio frequency module used to detect an RFID bracelet worn by an athlete; setting up a second camera at the start, the second camera being used to acquire a face image of the athlete and transmit it to the edge computing device, the edge computing device performing face recognition based on the face image; clicking Start on the edge computing device, which also starts the multi-antenna radio frequency module; each time an athlete passes the finish, recording one line crossing in response to the RFID bracelet being detected by the multi-antenna radio frequency module; and saving the athlete's score.
According to another aspect of the present disclosure, there is provided a long-distance running timing test apparatus based on multi-antenna radio frequency identification technology that operates according to the foregoing method.
According to the embodiment of the disclosure, a first camera for shooting the finish-line position is first set up at the finish, and an edge computing device is set up at the finish; the edge computing device includes a multi-antenna radio frequency module for detecting an RFID bracelet worn by an athlete. A second camera for acquiring the athlete's face image is then set up at the start, the image is transmitted to the edge computing device, and the edge computing device performs face recognition based on the face image. The event is started with a click on the edge computing device, which simultaneously starts the multi-antenna radio frequency module. Each time an athlete passes the finish, one line crossing is recorded in response to the RFID bracelet being detected by the multi-antenna radio frequency module; finally, the athlete's score is saved. In this way, the participants' results can be timed accurately and in a timely manner.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a flowchart of a method of testing long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure.
Fig. 2 shows a flowchart of substep S130 of the method of testing long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure.
Fig. 3 shows an architectural diagram of substep S130 of the method of testing long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure.
Fig. 4 shows a flowchart of substep S132 of the method of testing long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure.
Fig. 5 shows a flowchart of substep S1324 of the method of testing long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure.
Fig. 6 illustrates a block diagram of a testing system for long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure.
Fig. 7 shows an application scenario diagram of substep S130 of the method of testing long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure.
Detailed Description
The following description of the embodiments of the present disclosure will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the disclosure. All other embodiments, which can be made by one of ordinary skill in the art without undue burden based on the embodiments of the present disclosure, are also within the scope of the present disclosure.
As used in this disclosure and in the claims, the singular forms "a," "an," and "the" may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the listed steps and elements are explicitly identified; they do not constitute an exclusive list, and a method or apparatus may include other steps or elements.
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In long-distance running events, the runners must be timed. Timing is conventionally performed in the following ways. In manual timing, a person stands at the finish holding a stopwatch, starts it when the smoke of the starting gun rises at the start, presses it when an athlete crosses the finish line, then records the athletes' bib numbers in order and matches each time on the stopwatch to a runner one by one. Single-antenna RFID detection, on the other hand, has a small detection range and is prone to dead zones, causing missed detections. Accordingly, the present disclosure provides a method of testing long-distance running timing based on multi-antenna radio frequency identification technology.
Fig. 1 illustrates a flowchart of a method of testing long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure. As shown in fig. 1, the method includes the steps of: S110, setting up a first camera at the finish, the first camera being used to shoot the finish-line position; S120, setting up an edge computing device at the finish, the edge computing device including a multi-antenna radio frequency module used to detect an RFID bracelet worn by an athlete; S130, setting up a second camera at the start, the second camera being used to acquire face images of athletes and transmit them to the edge computing device, which performs face recognition based on the face images; S140, clicking Start on the edge computing device, which also starts the multi-antenna radio frequency module; S150, each time an athlete passes the finish, recording one line crossing in response to the RFID bracelet being detected by the multi-antenna radio frequency module; and S160, saving the athlete's score.
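The bookkeeping implied by steps S140 through S160 can be sketched as a small timer object. This is an illustrative assumption only (the patent does not specify an implementation): it records the start time, counts line crossings reported for each bracelet, and stores a final time once the required number of crossings is reached.

```python
from dataclasses import dataclass, field

@dataclass
class RaceTimer:
    """Hypothetical timer sketch: counts finish-line crossings per athlete
    (keyed by RFID bracelet ID) and records a score after the required
    number of crossings, mirroring steps S140-S160."""
    required_laps: int
    start_time: float = 0.0
    crossings: dict = field(default_factory=dict)   # bracelet_id -> count
    scores: dict = field(default_factory=dict)      # bracelet_id -> seconds

    def start(self, t: float) -> None:
        # Corresponds to clicking Start on the edge device (S140).
        self.start_time = t

    def on_crossing(self, bracelet_id: str, t: float) -> None:
        # Each detection of the bracelet at the finish counts as one
        # line crossing (S150); the score is saved at the last lap (S160).
        n = self.crossings.get(bracelet_id, 0) + 1
        self.crossings[bracelet_id] = n
        if n == self.required_laps and bracelet_id not in self.scores:
            self.scores[bracelet_id] = t - self.start_time
```

For example, a two-lap event started at t=100 s with crossings at 160 s and 220 s yields a final time of 120 s for that bracelet.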
Specifically, in step S110, the first camera set up at the finish may be placed under a gantry several meters behind the finish line, angled downward to shoot the finish-line position, with the shooting direction parallel to the runway. A high-resolution camera should be chosen so that the finish-line position is captured clearly, and the shooting direction can be kept parallel to the runway by adjusting the angle or using a bracket. A high-resolution camera captures more detail, including the athlete's motion and the moment of the finish-line crossing, which is important for timing and deciding placings; it provides a more accurate image, which is critical for determining whether an athlete has crossed the finish line; and it provides more data for subsequent analysis. Using a high-resolution camera set at the correct angle improves the accuracy and reliability of timing and provides more useful information for subsequent data analysis.
Specifically, in step S120, the multi-antenna radio frequency module is a device containing a plurality of antennas used to receive and transmit radio frequency signals; in this timing test method it is used to detect the radio frequency signal sent by the RFID bracelet worn by an athlete. An RFID bracelet is a device worn on the athlete's wrist containing an RFID chip. RFID stands for Radio Frequency Identification, a technology for identifying and tracking objects by radio frequency signals; in this test method the RFID bracelet sends radio frequency signals to the multi-antenna radio frequency module so that the number of times the athlete passes the finish can be detected.
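With several antennas covering the finish line, one crossing of a single bracelet may be reported by more than one antenna. One plausible way to merge such reports, sketched here as an assumption (the patent does not describe its merging logic, and the event format and `window` parameter are illustrative), is to treat detections of the same tag closer together than a short time window as a single crossing:

```python
def merge_detections(events, window=5.0):
    """Merge raw (timestamp, antenna_id, tag_id) detections from several
    antennas: detections of the same tag within `window` seconds of each
    other are treated as one finish-line crossing.
    Returns a list of (timestamp, tag_id) crossings in time order.
    `window` is an illustrative parameter, not from the patent."""
    last_seen = {}   # tag_id -> timestamp of last accepted crossing
    crossings = []
    for t, _antenna, tag in sorted(events):
        if tag not in last_seen or t - last_seen[tag] >= window:
            crossings.append((t, tag))
            last_seen[tag] = t
    return crossings
```

With this scheme, three antennas all seeing bracelet "A1" within a fraction of a second count as one crossing, while a detection a lap later counts as the next one.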
Specifically, in step S130, face recognition is performed using the second camera. Identity information may be added on the local machine or synchronized from the information management system and stored on a storage medium of the intelligent analysis computer, with each identity record bound to the athlete's face information and a unique bracelet number. The edge computer acquires the picture from the second camera and delimits a face recognition area; the athlete performs identity recognition in that area while wearing the bracelet whose number is bound in the system.
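The binding of identity, face information, and unique bracelet number described above can be modelled as a simple registry; the class and field names below are hypothetical illustrations, not from the patent.

```python
class AthleteRegistry:
    """Illustrative store binding an athlete's identity record to face
    information and a unique bracelet number, as described for the
    intelligent analysis computer."""

    def __init__(self):
        self._by_id = {}        # athlete_id -> record
        self._by_bracelet = {}  # bracelet_id -> athlete_id

    def enroll(self, athlete_id, name, face_template, bracelet_id):
        # Each bracelet number must be unique within the system.
        if bracelet_id in self._by_bracelet:
            raise ValueError("bracelet already bound to another athlete")
        self._by_id[athlete_id] = {"name": name,
                                   "face": face_template,
                                   "bracelet": bracelet_id}
        self._by_bracelet[bracelet_id] = athlete_id

    def athlete_for_bracelet(self, bracelet_id):
        # Used at the finish: a detected bracelet resolves to an identity.
        return self._by_id[self._by_bracelet[bracelet_id]]
```

A lookup at the finish then maps a detected bracelet straight back to the enrolled identity, so each crossing can be attributed to one athlete.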
Specifically, in step S140, clicking Start on the edge computer begins the event and simultaneously starts detection by the multi-antenna radio frequency module.
Specifically, in step S150, the athlete runs the event; each time the athlete passes the finish, the bracelet is detected, one line crossing is recorded, and the athlete's result is announced by voice. After the specified number of crossings, the athlete is considered to have completed the event. It should be appreciated that, besides voice announcement, the following other reporting modes may be considered: 1. Visual display: an LED screen or large display near the finish line shows athletes' results in real time, so that spectators, coaches, and others can see results directly without relying on voice announcements. 2. Real-time data transmission: results are transmitted over the network or wirelessly to devices such as mobile phones and tablets, where athletes and coaches can check them in real time through a mobile app or dedicated software. 3. Vibration alert: a smart bracelet or watch on the athlete's wrist vibrates when a line crossing is detected, notifying the athlete of a successful crossing and displaying the result. 4. Text prompt: a display board or banner with large characters near the finish line shows athletes' results in text form, making it easy for spectators and coaches to obtain the information quickly. These reporting modes can be selected and combined according to actual requirements and site conditions to provide more diverse ways of announcing results to different audiences.
Specifically, in step S160, the athlete's score and the assessment video at the finish are saved to a storage medium of the intelligent analysis computer, and the athletes' scores and videos are subsequently uploaded to the information management system.
The method uses a camera for face recognition at the start; the connection between start and finish may be a wired or wireless network; RFID bracelet detection at the finish uses multiple antennas; and a camera at the finish performs video capture and recording, communicating with the intelligent computer via network cable or USB cable. Results may also be announced by means including but not limited to sound and optical signals. The method achieves automatic timing via RFID, reducing errors caused by manual stopwatch operation, and the multiple antennas detect simultaneously, avoiding missed detections.
In particular, face recognition is performed in step S130 in order to confirm the identity of each athlete passing the finish and match it with the corresponding result, ensuring that each athlete's result is accurate. However, face recognition during long-distance running presents significant challenges. For example, an athlete's face may be blurred or distorted by motion, and the distance and viewing angle between the camera and the athlete may vary; both degrade the quality of the face image and thereby the accuracy of face recognition. Thus, a solution is desired. In this regard, the technical concept of the present application is to improve the face recognition scheme using deep learning and computer vision techniques, addressing challenges such as the degradation of face image quality during long-distance running timing.
Fig. 2 shows a flowchart of substep S130 of the method of testing long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure, and fig. 3 shows its architectural diagram. As shown in fig. 2 and 3, the edge computing device performs face recognition based on the face image by: S131, acquiring the face image transmitted by the second camera; S132, extracting an optimized attention-enhanced face feature map from the face image; and S133, determining an identity category label based on the optimized attention-enhanced face feature map.
More specifically, in step S131, the face image transmitted by the second camera is acquired. To acquire clear face images of athletes, a high-resolution camera, such as a high-definition (HD) or Full HD camera, may be used; such cameras have higher pixel density and better image quality and can capture more detailed, clearer face images. A camera with a higher frame rate may also be considered to ensure that faces in motion are captured.
More specifically, in step S132, an optimized attention-enhanced face feature map is extracted from the face image. Accordingly, in one possible implementation, as shown in fig. 4, this includes: S1321, preprocessing the face image to obtain a preprocessed face image; S1322, extracting image features from the preprocessed face image with a deep convolutional neural network model to obtain a face feature map; S1323, enhancing the information of the face feature map with an attention mechanism to obtain an attention-enhanced face feature map; and S1324, optimizing the feature distribution of the attention-enhanced face feature map to obtain the optimized attention-enhanced face feature map.
In the technical scheme of the application, the face image transmitted by the second camera is first acquired and preprocessed to obtain a preprocessed face image. Image preprocessing removes noise from the face image and enhances important information, making the image better suited as input to a machine learning model. In an embodiment of the present application, preprocessing may include image denoising, image enhancement, and face alignment. Specifically, image denoising eliminates noise by applying a filter or denoising algorithm, making face edges clearer; image enhancement improves the quality and recognizability of the image by increasing its contrast, brightness, or color saturation; and face alignment separates the face from the image and aligns it so that face position and angle are more consistent across images, which helps reduce recognition errors caused by differences in face pose and angle. In this way, the preprocessed image is clearer than the original, improving the accuracy and robustness of the recognition algorithm.
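Two of the preprocessing operations above, denoising and enhancement, can be illustrated with minimal NumPy stand-ins. These are deliberately simple sketches under stated assumptions (a 3x3 mean filter for denoising and a linear contrast stretch for enhancement); the patent does not commit to any particular filter or enhancement algorithm.

```python
import numpy as np

def denoise_mean3(img):
    """3x3 mean filter: a minimal stand-in for the denoising step.
    Edge padding keeps the output the same size as the input."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
    return out / 9.0

def stretch_contrast(img, lo=0.0, hi=255.0):
    """Linear contrast stretch to [lo, hi]: a minimal stand-in for the
    enhancement step (increasing contrast)."""
    img = img.astype(float)
    mn, mx = img.min(), img.max()
    if mx == mn:                       # flat image: nothing to stretch
        return np.full_like(img, lo)
    return lo + (img - mn) * (hi - lo) / (mx - mn)
```

After `stretch_contrast`, pixel intensities span the full [0, 255] range, and the mean filter leaves a constant (noise-free) region unchanged, as one would expect of a denoiser.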
Accordingly, in one possible implementation, extracting image features from the preprocessed face image with a deep convolutional neural network model to obtain a face feature map includes: passing the preprocessed face image through a face feature extractor based on a convolutional neural network model to obtain the face feature map. That is, a face feature extractor is built from a convolutional neural network (CNN) to extract more representative and discriminative features from the preprocessed face image. A convolutional neural network automatically learns local implicit association features in an image through a series of convolutional and pooling layers. More specifically, in face recognition, a CNN-based face feature extractor extracts important features from the face image through a series of convolution operations and feature mappings. These features may be contours of the face, eyes, or nose, or higher-level abstract features such as facial texture and color distribution. Through training, the CNN model learns the differences and commonalities among faces, generating a discriminative face feature map.
In one specific example of the present application, the convolutional neural network model includes an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, and an output layer. The first convolutional layer has a 3x3 convolution kernel size, 32 convolution kernels, a stride of 1, and uses a ReLU activation function; the second convolutional layer has a 3x3 convolution kernel size, 64 convolution kernels, a stride of 1, and uses a ReLU activation function; the first and second pooling layers both use an average pooling operation with a 2x2 pooling kernel and a stride of 2. In the above network structure, the convolutional layers extract features using 3x3 convolution kernels, and in practical applications the number of convolution kernels can be adjusted according to actual requirements. The pooling layers reduce the resolution of the feature map according to the specified kernel size and stride.
The input layer is the first layer of the neural network and receives the input data; in a convolutional neural network, the input layer typically receives an original image or feature map as input. The convolutional layer is one of the core layers of a convolutional neural network and extracts features from the input data by means of convolution operations; in the described model, the first convolutional layer uses 3x3 convolution kernels (32 kernels, stride 1) with a ReLU activation function, and the second convolutional layer is similar, using 3x3 convolution kernels (64 kernels, stride 1), also with a ReLU activation function. The pooling layer is used to reduce the size of the feature map while retaining the most salient features; in the described model, the first and second pooling layers both use 2x2 pooling kernels with a stride of 2 and adopt the average pooling operation. The output layer is the last layer of the neural network and produces the final output result; it is typically a fully connected layer whose number of neurons equals the number of classes, and its activation function depends on the specific task, e.g., in a multi-class classification task a softmax activation function may be used. In other words, the input layer receives the input data, the convolutional layers extract features, the pooling layers reduce the resolution of the feature map, and the output layer generates the final result; the combination of these layers forms the basic structure of a convolutional neural network.
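The layer-by-layer tensor shapes implied by this structure can be traced with a small helper. The application does not state the padding scheme or the input resolution; "same" padding (pad = 1 for a 3x3 kernel) and a 64x64 single-channel input are assumptions here:

```python
def conv2d_shape(h, w, k=3, stride=1, pad=1):
    # output size of a convolution: floor((dim + 2*pad - k) / stride) + 1
    return ((h + 2 * pad - k) // stride + 1, (w + 2 * pad - k) // stride + 1)

def pool2d_shape(h, w, k=2, stride=2):
    # output size of a pooling layer (no padding)
    return ((h - k) // stride + 1, (w - k) // stride + 1)

def network_shapes(h, w):
    """Trace (height, width, channels) through the described CNN:
    conv1 (32 x 3x3, stride 1, ReLU) -> avg pool 2x2, stride 2 ->
    conv2 (64 x 3x3, stride 1, ReLU) -> avg pool 2x2, stride 2."""
    shapes = [("input", (h, w, 1))]
    h, w = conv2d_shape(h, w)
    shapes.append(("conv1", (h, w, 32)))
    h, w = pool2d_shape(h, w)
    shapes.append(("pool1", (h, w, 32)))
    h, w = conv2d_shape(h, w)
    shapes.append(("conv2", (h, w, 64)))
    h, w = pool2d_shape(h, w)
    shapes.append(("pool2", (h, w, 64)))
    return shapes
```

With these assumptions a 64x64 input yields a 16x16x64 feature map after the second pooling layer; each 2x2 average pooling halves the spatial resolution while the channel count is set by the number of kernels in the preceding convolutional layer.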
It should be appreciated that the ReLU (Rectified Linear Unit) activation function is a commonly used nonlinear activation function, widely applied in neural networks and deep learning models. The ReLU function is defined as f(x) = max(0, x), where x is the input value and f(x) is the output value. When the input value is greater than 0, the output equals the input; when the input value is less than or equal to 0, the output is 0. In short, for positive inputs the ReLU function leaves the value unchanged, while for negative inputs it outputs 0. The ReLU activation function is simple and efficient to compute, introduces the nonlinearity that helps a neural network model learn more complex functional relationships, and mitigates the vanishing gradient problem for positive inputs; in addition, the derivative of the ReLU function is constant over most of its domain, making the computation of backpropagation simpler and more efficient.
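The definition and its derivative fit in a few lines of numpy:

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x): identity for positive inputs, zero otherwise
    return np.maximum(0, x)

def relu_grad(x):
    # derivative: 1 where x > 0, 0 elsewhere (constant almost everywhere)
    return (x > 0).astype(float)
```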
Accordingly, in one possible implementation manner, the information enhancement is performed on the face feature map by using an attention mechanism to obtain the attention-enhanced face feature map, including: passing the face feature map through a cross-channel spatial attention module to obtain the attention-enhanced face feature map. Cross-channel spatial attention is an important mechanism in the spatio-temporal attention module, used to learn the spatial correlation between different channels of the data. It can help the model better understand the relationship between different channels in the feature map, thereby improving the performance of the model. That is, the use of an attention mechanism can make the model focus more on important features. In particular, the cross-channel spatial attention module can highlight important features by dynamically adjusting the weights between channels.
It is worth mentioning that the cross-channel spatial attention module (Cross-Channel Spatial Attention Module) is a common attention mechanism in deep learning, used to enhance the representation capability of the input feature map so that the model can better attend to important features and suppress unimportant ones. The main purpose of this module is to extract spatial attention from the feature map by learning the interrelationship between channels, which involves two key steps: 1. channel attention: a global description vector is obtained by performing a global pooling operation over the channel dimension of the feature map, and is then mapped into a channel attention vector by a fully connected layer; this vector represents the importance of each channel; 2. spatial attention: the original feature map is multiplied by the channel attention vector to weight each channel, and the final attention-enhanced face feature map is then obtained by performing a spatial self-attention operation on the weighted feature map; in the self-attention operation, the features of each position interact with the features of all other positions to capture the global spatial relationship. By using the cross-channel spatial attention module, the model can automatically learn and strengthen important face features and improve the performance of tasks such as face recognition.
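A minimal numpy sketch of the two steps, under stated simplifications (the fully connected layer is a single weight matrix followed by a sigmoid, and the spatial self-attention is reduced to a softmax weighting over positions; a full implementation would use learned query/key/value projections):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_channel_spatial_attention(fmap, w_fc):
    """fmap: (C, H, W) feature map; w_fc: (C, C) weights standing in
    for the channel-attention fully connected layer."""
    C, H, W = fmap.shape
    # step 1 - channel attention: global average pooling over the
    # spatial dimensions gives a global description vector, which the
    # FC layer maps to per-channel importance weights in (0, 1)
    desc = fmap.mean(axis=(1, 2))            # (C,)
    ch_att = sigmoid(w_fc @ desc)            # (C,)
    weighted = fmap * ch_att[:, None, None]  # weight each channel
    # step 2 - spatial attention: softmax over positions of the
    # channel-mean map, rescaled so the average weight stays 1
    sp = weighted.mean(axis=0).reshape(-1)   # (H*W,)
    sp_att = np.exp(sp - sp.max())
    sp_att /= sp_att.sum()
    return weighted * sp_att.reshape(1, H, W) * (H * W)
```

The output keeps the (C, H, W) shape of the input, so the module can be dropped between any two convolutional stages.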
Further, in one possible implementation manner, as shown in fig. 5, performing feature distribution optimization on the attention-enhancing face feature map to obtain the optimized attention-enhancing face feature map, including: s13241, two-dimensionally arranging each feature matrix in the attention-enhancing face feature map as a combined feature matrix; s13242, performing spatial multisource fusion pre-verification information distribution optimization on the feature values of each position of the combined feature matrix to obtain an optimized combined feature matrix; and S13243, restoring the optimized combined feature matrix into the optimized attention-enhancing face feature map according to the arrangement relation among the feature matrices.
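The arrange-and-restore steps (S13241 and S13243) can be sketched in numpy. How the grid is chosen is not specified beyond keeping the height and width of the combined matrix as close as possible, so the near-square factorization below is an assumption, and the optimization step S13242 is left out:

```python
import math
import numpy as np

def combine(feature_maps):
    """S13241: arrange C feature matrices of shape (C, H, W) into a
    rows x cols grid so that the combined matrix's height and width
    stay as close as possible."""
    C, H, W = feature_maps.shape
    rows = int(math.floor(math.sqrt(C)))
    while C % rows:          # largest divisor of C not above sqrt(C)
        rows -= 1
    cols = C // rows
    grid = feature_maps.reshape(rows, cols, H, W)
    combined = grid.transpose(0, 2, 1, 3).reshape(rows * H, cols * W)
    return combined, (rows, cols)

def restore(combined, layout, H, W):
    """S13243: invert combine() using the recorded arrangement."""
    rows, cols = layout
    grid = combined.reshape(rows, H, cols, W).transpose(0, 2, 1, 3)
    return grid.reshape(rows * cols, H, W)
```

For example, 64 feature matrices of size 4x4 combine into a 32x32 matrix, and restoring it reproduces the original stack exactly.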
In the technical scheme of the application, when the face feature map is passed through the cross-channel spatial attention module to obtain the attention-enhanced face feature map, the cross-channel spatial attention module applies a spatial attention mechanism to each feature matrix of the face feature map; this enhances the spatial image semantic feature distribution within each feature matrix, but weakens the associated distribution expression effect of the attention-enhanced face feature map across the feature matrices.
Here, the applicant of the present application considers that, since the attention-enhancing face feature map is constituted by the arrangement of the respective feature matrices, it may be regarded essentially as an overall feature distribution set composed of the local feature distribution subsets of the respective feature matrices, which therefore have mutually associated neighborhood-part distribution associations. Moreover, since each feature matrix is extracted by the face feature extractor based on the convolutional neural network model, the feature matrices also have a multi-source information association relationship of homologous image semantics derived from the preprocessed face image.
Therefore, in order to further enhance the expression effect of the attention-enhancing face feature map on the association distribution among the respective feature matrices, the applicant of the present application first arranges the plurality of feature matrices two-dimensionally as a combined feature matrix, where the height and width values of the combined feature matrix are kept as close as possible, and then performs spatial multisource fusion pre-verification information distribution optimization on the feature value f_{i,j} of each position of the combined feature matrix to obtain an optimized feature value f'_{i,j}.
Accordingly, in one possible implementation manner, performing spatial multisource fusion pre-verification information distribution optimization on the feature value of each position of the combined feature matrix to obtain an optimized combined feature matrix includes: optimizing the feature value of each position of the combined feature matrix by an optimization formula in which f_{i,j} denotes the feature value of the i-th row and j-th column of the combined feature matrix, d is a neighborhood-setting hyperparameter, the feature value f_{i±d,j±d} is set to zero or one whenever i ± d or j ± d is less than or equal to zero or greater than the width or height of the combined feature matrix, log₂ denotes the logarithmic function with base 2, and f'_{i,j} denotes the optimized feature value of the (i, j)-th position.
The spatial multisource fusion pre-verification information distribution optimization may be regarded as a robustness-class maximum likelihood estimation based on feature spatial distribution fusion. It treats the combined feature matrix as a feature global set composed of feature local sets corresponding to a plurality of interrelated neighborhood parts, and realizes effective folding of the respective multisource pre-verification information of the feature local sets into the feature global set. By constructing the pre-verification information distribution under the multisource condition, an optimization paradigm is obtained that can evaluate the standard expectation between the internal spatial association of the feature matrix and the spatial-information fusion change relation, thereby improving the expression effect of the combined feature matrix based on multisource-information spatial-distribution association fusion. Then, the combined feature matrix is restored into the attention-enhancing face feature map according to the arrangement relation among the feature matrices, so that the associated distribution expression effect of the attention-enhancing face feature map on each feature matrix is improved, and the accuracy of the classification result obtained by passing the attention-enhancing face feature map through a classifier is improved.
More specifically, in step S133, an identity class label is determined based on the optimized attention-enhancing face feature map. Accordingly, in one possible implementation, determining an identity class label based on the optimized attention-enhancing face feature map includes: passing the optimized attention-enhancing face feature map through a classifier to obtain a classification result, where the classification result is used to represent the identity class label. That is, the classifier maps the attention-enhancing face feature map to a specific identity class label as the classification result. In a practical scenario, the identity class label may be a person's name or ID, and the application is not limited herein. In this way, accurate identification of the identity of each athlete passing the finish line is achieved, which ensures that each athlete's performance is recorded accurately and improves the overall performance and reliability of the long-distance running timing system.
It should be appreciated that the role of the classifier is to learn classification rules from given, labeled training data and then classify (or predict) unknown data. Logistic regression, SVM, etc. are commonly used to solve binary classification problems; for multi-class classification, logistic regression or SVM can also be used, but multiple binary classifiers must then be combined, which is error-prone and inefficient, so the commonly used multi-class method is the Softmax classification function.
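For reference, the Softmax function maps a vector of class scores to a probability distribution over the classes; a standard numerically stable sketch:

```python
import numpy as np

def softmax(z):
    # subtract the max score for numerical stability; the outputs are
    # positive and sum to 1, so they can be read as class probabilities
    e = np.exp(z - np.max(z))
    return e / e.sum()
```

The predicted identity class label is then simply the index of the largest probability.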
Accordingly, in one possible implementation manner, the optimized attention-enhancing face feature map is passed through a classifier to obtain a classification result, where the classification result is used to represent an identity class label, and the method includes: expanding the optimized attention-enhancing face feature map into an optimized classification feature vector according to a row vector or a column vector; performing full-connection coding on the optimized classification feature vector by using a full-connection layer of the classifier to obtain a coding classification feature vector; and inputting the coding classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
Accordingly, in one specific example, if the choice is to expand by row vector, this may be done as follows: 1. taking each row of the optimized attention-enhancing face feature map as a feature vector; 2. connecting the feature vectors in sequence to form a long vector. Specifically, assuming that the size of the optimized attention-enhancing face feature map is H×W×C, where H represents the height, W represents the width, and C represents the number of channels, the feature map is flattened into a matrix with the shape of H×(W×C), and each row is taken as a feature vector, obtaining H feature vectors; the H feature vectors are connected in sequence to form a long vector of size H×W×C.
If the column-vector expansion is selected, the following steps can be performed: 1. taking each column of the optimized attention-enhancing face feature map as a feature vector; 2. connecting the feature vectors in sequence to form a long vector. Specifically, assuming that the size of the optimized attention-enhancing face feature map is H×W×C, where H represents the height, W represents the width, and C represents the number of channels, the feature map is flattened into a matrix with the shape of H×(W×C), and each column is taken as a feature vector, obtaining W×C feature vectors of length H; the W×C feature vectors are connected in sequence to form a long vector of size H×W×C.
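Both expansions can be checked directly in numpy (H = 4, W = 3, C = 2 are arbitrary illustrative sizes):

```python
import numpy as np

H, W, C = 4, 3, 2
fmap = np.arange(H * W * C, dtype=float).reshape(H, W, C)

# row-vector expansion: flatten to an H x (W*C) matrix, take each of
# the H rows as a feature vector, and concatenate them
row_vectors = fmap.reshape(H, W * C)
row_long = row_vectors.reshape(-1)      # length H*W*C

# column-vector expansion: take each of the W*C columns (length H) as
# a feature vector and concatenate them
col_vectors = fmap.reshape(H, W * C).T  # shape (W*C, H)
col_long = col_vectors.reshape(-1)      # length H*W*C
```

The two long vectors contain the same H*W*C values in different orders; either ordering is acceptable as long as the classifier's fully connected layer is trained on the same one.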
In summary, according to the method for testing the long-distance running timing based on the multi-antenna radio frequency identification technology disclosed by the embodiment of the application, accurate and timely timing of the achievement of the participant can be realized.
Further, the present disclosure also provides a testing device for long-distance running timing based on the multi-antenna radio frequency identification technology, wherein the testing device performs any one of the foregoing methods.
Fig. 6 illustrates a block diagram of a testing system 100 for long-distance running timing based on multi-antenna radio frequency identification technology in accordance with an embodiment of the present disclosure. As shown in fig. 6, the testing system 100 for long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure includes: a first camera setting module 110, configured to set up a first camera at the finish, where the first camera is used to capture the position of the finish line; an edge computing device erection module 120, configured to erect an edge computing device at the finish, where the edge computing device includes a multi-antenna radio frequency module, and the multi-antenna radio frequency module is configured to detect the RFID bracelet worn by an athlete; a second camera erection module 130, configured to erect a second camera at the starting point, where the second camera is configured to collect a face image of the athlete and transmit the face image to the edge computing device, and the edge computing device is configured to perform face recognition based on the face image; a start module 140, configured to click start on the edge computing device to begin the run and simultaneously turn on the multi-antenna radio frequency module; a recording module 150, configured to record one line crossing, in response to the RFID bracelet being detected by the multi-antenna radio frequency module, each time an athlete passes the finish line; and a performance preservation module 160 for preserving the performance of the athlete.
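The recording module's behavior can be sketched as a small state machine. The debounce window is an assumption (the disclosure only states that one line crossing is recorded per detection of the bracelet, while in practice several antennas may read the same pass); tag IDs, times, and the lap length are illustrative:

```python
class LapRecorder:
    """Counts line crossings per RFID tag; a debounce window keeps a
    single pass, which several antennas may read at nearly the same
    time, from being counted more than once."""

    def __init__(self, debounce_s: float = 5.0):
        self.debounce_s = debounce_s
        self.crossings = {}   # tag_id -> number of recorded crossings
        self._last_seen = {}  # tag_id -> time of last counted read

    def on_rfid_detected(self, tag_id: str, t: float) -> bool:
        """Called when the multi-antenna RF module reads a bracelet.
        Returns True if a new line crossing was recorded."""
        if t - self._last_seen.get(tag_id, float("-inf")) < self.debounce_s:
            return False  # duplicate read of the same pass
        self._last_seen[tag_id] = t
        self.crossings[tag_id] = self.crossings.get(tag_id, 0) + 1
        return True

    def performance(self, tag_id: str, lap_length_m: float) -> float:
        """Distance covered so far, as crossings x lap length (metres)."""
        return self.crossings.get(tag_id, 0) * lap_length_m
```

The performance preservation module would persist `crossings` (together with the timestamps) once the run ends.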
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described multi-antenna radio frequency identification technology-based long distance race timing test system 100 have been described in detail in the above description of the multi-antenna radio frequency identification technology-based long distance race timing test method with reference to fig. 1 to 5, and thus, repetitive descriptions thereof will be omitted.
As described above, the testing system 100 for long-distance running timing based on multi-antenna radio frequency identification technology according to the embodiment of the present disclosure may be implemented in various wireless terminals, for example, a server having a long-distance running timing algorithm based on multi-antenna radio frequency identification technology. In one possible implementation, the testing system 100 according to embodiments of the present disclosure may be integrated into a wireless terminal as a software module and/or a hardware module. For example, the testing system 100 may be a software module in the operating system of the wireless terminal, or may be an application developed for the wireless terminal; of course, the testing system 100 may equally be one of the many hardware modules of the wireless terminal.
Alternatively, in another example, the testing system 100 for long-distance running timing based on multi-antenna radio frequency identification technology and the wireless terminal may be separate devices, and the testing system 100 may be connected to the wireless terminal through a wired and/or wireless network and transmit interactive information according to an agreed data format.
Fig. 7 shows an application scenario diagram of substep S130 of the method for testing long-distance running timing based on multi-antenna radio frequency identification technology according to an embodiment of the present disclosure. As shown in fig. 7, in this application scenario, first, the face image (for example, D shown in fig. 7) transmitted by the second camera (for example, C shown in fig. 7) is acquired, and then the face image is input to a server (for example, S shown in fig. 7) in which the test algorithm for long-distance running timing based on multi-antenna radio frequency identification technology is deployed, where the server processes the face image using the test algorithm to obtain a classification result representing the identity class label.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

1. A method for testing long-distance running timing based on multi-antenna radio frequency identification technology, characterized in that the method comprises the following steps: erecting a first camera at the finish, wherein the first camera is used for shooting the position of the finish line; erecting an edge computing device at the finish, wherein the edge computing device comprises a multi-antenna radio frequency module, and the multi-antenna radio frequency module is used for detecting an RFID bracelet worn by an athlete; erecting a second camera at the starting point, wherein the second camera is used for acquiring a face image of the athlete and transmitting the face image to the edge computing device, and the edge computing device is used for carrying out face recognition based on the face image; clicking start on the edge computing device to begin the run, and simultaneously turning on the multi-antenna radio frequency module; each time the athlete passes the finish line, in response to the RFID bracelet being detected by the multi-antenna radio frequency module, recording one line crossing; and saving the performance of the athlete.
2. The method for testing the long-distance running timing based on the multi-antenna radio frequency identification technology according to claim 1, wherein the edge computing device is configured to perform face recognition based on the face image, and comprises: acquiring a face image transmitted by the second camera; extracting an optimized attention-enhancing face feature map from the face image; and determining an identity class label based on the optimized attention-enhancing face feature map.
3. The method for testing the long-distance running timing based on the multi-antenna radio frequency identification technology according to claim 2, wherein extracting the optimized attention-enhancing face feature map from the face image comprises: carrying out image preprocessing on the face image to obtain a preprocessed face image; carrying out image feature extraction on the preprocessed face image based on a depth convolution neural network model to obtain a face feature map; information enhancement is carried out on the face feature map by using an attention mechanism so as to obtain the attention-enhanced face feature map; and performing feature distribution optimization on the attention-enhancing face feature map to obtain the optimized attention-enhancing face feature map.
4. The method for testing long-distance running timing based on multi-antenna radio frequency identification technology according to claim 3, wherein the step of extracting image features of the preprocessed face image based on a deep convolutional neural network model to obtain a face feature map comprises the steps of: and passing the preprocessed face image through a face feature extractor based on a convolutional neural network model to obtain the face feature map.
5. The method for testing the long-distance running timing based on the multi-antenna radio frequency identification technology according to claim 4, wherein the step of enhancing the information of the face feature map by using an attention mechanism to obtain the attention-enhanced face feature map comprises the steps of: and the human face feature map is passed through a cross-channel space attention module to obtain the attention-enhancing human face feature map.
6. The method for testing the long-distance running timing based on the multi-antenna radio frequency identification technology according to claim 5, wherein performing feature distribution optimization on the attention-enhancing face feature map to obtain the optimized attention-enhancing face feature map comprises: two-dimensionally arranging each feature matrix in the attention-enhancing face feature map as a combined feature matrix; performing spatial multisource fusion pre-verification information distribution optimization on the feature values of each position of the combined feature matrix to obtain an optimized combined feature matrix; and restoring the optimized combined feature matrix into the optimized attention-enhancing face feature map according to the arrangement relation among the feature matrices.
7. The method for testing the long-distance running timing based on the multi-antenna radio frequency identification technology according to claim 6, wherein performing spatial multisource fusion pre-verification information distribution optimization on the feature values of each position of the combined feature matrix to obtain an optimized combined feature matrix comprises: optimizing the feature value of each position of the combined feature matrix by an optimization formula in which f_{i,j} is the feature value of the i-th row and j-th column of the combined feature matrix, d is a neighborhood-setting hyperparameter, the feature value f_{i±d,j±d} is set to zero or one when i ± d or j ± d is less than or equal to zero or greater than the width or height of the combined feature matrix, log₂ represents a logarithmic function with base 2, and f'_{i,j} is the feature value of the (i, j)-th position of the optimized combined feature matrix.
8. The method for testing the long-distance running timing based on the multi-antenna radio frequency identification technology according to claim 7, wherein determining the identity class label based on the optimized attention-enhancing face feature map comprises: and the optimized attention-enhancing face feature map is passed through a classifier to obtain a classification result, wherein the classification result is used for representing identity class labels.
9. A testing device for long-distance running timing based on multi-antenna radio frequency identification technology, characterized in that the testing device performs the method according to any one of claims 1 to 8.
CN202310861100.2A 2023-07-14 2023-07-14 Method and equipment for testing long-distance running timing based on multi-antenna radio frequency identification technology Pending CN116580444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310861100.2A CN116580444A (en) 2023-07-14 2023-07-14 Method and equipment for testing long-distance running timing based on multi-antenna radio frequency identification technology

Publications (1)

Publication Number Publication Date
CN116580444A true CN116580444A (en) 2023-08-11

Family

ID=87538238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310861100.2A Pending CN116580444A (en) 2023-07-14 2023-07-14 Method and equipment for testing long-distance running timing based on multi-antenna radio frequency identification technology

Country Status (1)

Country Link
CN (1) CN116580444A (en)

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1895463A1 (en) * 2006-08-31 2008-03-05 Accenture Global Services GmbH Demographic based content delivery
WO2013159356A1 (en) * 2012-04-28 2013-10-31 中国科学院自动化研究所 Cross-media searching method based on discrimination correlation analysis
US20180033215A1 (en) * 2016-07-27 2018-02-01 Acer Incorporated Photographing system for long-distance running event and operation method thereof
US10148680B1 (en) * 2015-06-15 2018-12-04 ThetaRay Ltd. System and method for anomaly detection in dynamically evolving data using hybrid decomposition
WO2019170875A1 (en) * 2018-03-09 2019-09-12 Nokia Solutions And Networks Oy Reception of signals from multiple sources
CN110414305A (en) * 2019-04-23 2019-11-05 苏州闪驰数控系统集成有限公司 Artificial intelligence convolutional neural networks face identification system
CN110991413A (en) * 2019-12-20 2020-04-10 西南交通大学 Running detection method based on ReiD
CN111160149A (en) * 2019-12-16 2020-05-15 山东大学 Vehicle-mounted face recognition system and method based on motion scene and deep learning
US20200185084A1 (en) * 2018-12-11 2020-06-11 International Business Machines Corporation Automated Normality Scoring of Echocardiograms
CN111275030A (en) * 2020-05-06 2020-06-12 西南交通大学 Straight running detection and timing system and method based on face and human body recognition
CN111523521A (en) * 2020-06-18 2020-08-11 西安电子科技大学 Remote sensing image classification method for double-branch fusion multi-scale attention neural network
CN112258559A (en) * 2020-10-26 2021-01-22 上海萱闱医疗科技有限公司 Intelligent running timing scoring system and method based on multi-target tracking
CN112973098A (en) * 2021-03-19 2021-06-18 洛阳理工学院 Self-service automatic testing device and testing method for sprint project
WO2021147300A1 (en) * 2020-01-22 2021-07-29 中国农业机械化科学研究院 Multi-source heterogeneous farmland big data yield prediction method and system, and apparatus
CN114519877A (en) * 2021-12-30 2022-05-20 深圳云天励飞技术股份有限公司 Face recognition method, face recognition device, computer equipment and storage medium
WO2022203750A1 (en) * 2021-03-20 2022-09-29 Georgia State University Research Foundation, Inc. Systems and methods for predicting behavioral traits
CN115188084A (en) * 2022-08-03 2022-10-14 成都理工大学 Multi-mode identity recognition system and method for non-contact voiceprint and palm print palm vein
CN115273236A (en) * 2022-07-29 2022-11-01 临沂大学 Multi-mode human gait emotion recognition method
CN115410069A (en) * 2022-08-25 2022-11-29 绍兴幺贰玖零科技有限公司 Fault detection method and system based on multiple attention mechanism
WO2022247147A1 (en) * 2021-05-24 2022-12-01 Zhejiang Dahua Technology Co., Ltd. Methods and systems for posture prediction
US20220399936A1 (en) * 2021-06-11 2022-12-15 Netdrones, Inc. Systems and methods for drone swarm wireless communication
WO2023033653A1 (en) * 2021-09-06 2023-03-09 Mylaps B.V. Vision-based sports timing and identification system
CN115909582A (en) * 2022-11-11 2023-04-04 深圳市星宏智能科技有限公司 Entrance guard equipment and system for face recognition of wearing mask
WO2023065545A1 (en) * 2021-10-19 2023-04-27 平安科技(深圳)有限公司 Risk prediction method and apparatus, and device and storage medium
CN116363738A (en) * 2023-06-01 2023-06-30 成都睿瞳科技有限责任公司 Face recognition method, system and storage medium based on multiple moving targets
CN116351039A (en) * 2023-03-07 2023-06-30 深圳市培林体育科技有限公司 Middle-long distance running detection system and method

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1895463A1 (en) * 2006-08-31 2008-03-05 Accenture Global Services GmbH Demographic based content delivery
WO2013159356A1 (en) * 2012-04-28 2013-10-31 中国科学院自动化研究所 Cross-media searching method based on discrimination correlation analysis
US10148680B1 (en) * 2015-06-15 2018-12-04 ThetaRay Ltd. System and method for anomaly detection in dynamically evolving data using hybrid decomposition
US20180033215A1 (en) * 2016-07-27 2018-02-01 Acer Incorporated Photographing system for long-distance running event and operation method thereof
WO2019170875A1 (en) * 2018-03-09 2019-09-12 Nokia Solutions And Networks Oy Reception of signals from multiple sources
US20200185084A1 (en) * 2018-12-11 2020-06-11 International Business Machines Corporation Automated Normality Scoring of Echocardiograms
CN110414305A (en) * 2019-04-23 2019-11-05 苏州闪驰数控系统集成有限公司 Artificial intelligence convolutional neural networks face identification system
CN111160149A (en) * 2019-12-16 2020-05-15 山东大学 Vehicle-mounted face recognition system and method based on motion scene and deep learning
CN110991413A (en) * 2019-12-20 2020-04-10 西南交通大学 Running detection method based on ReiD
WO2021147300A1 (en) * 2020-01-22 2021-07-29 中国农业机械化科学研究院 Multi-source heterogeneous farmland big data yield prediction method and system, and apparatus
CN111275030A (en) * 2020-05-06 2020-06-12 西南交通大学 Straight running detection and timing system and method based on face and human body recognition
CN111523521A (en) * 2020-06-18 2020-08-11 西安电子科技大学 Remote sensing image classification method for double-branch fusion multi-scale attention neural network
CN112258559A (en) * 2020-10-26 2021-01-22 上海萱闱医疗科技有限公司 Intelligent running timing scoring system and method based on multi-target tracking
CN112973098A (en) * 2021-03-19 2021-06-18 洛阳理工学院 Self-service automatic testing device and testing method for sprint project
WO2022203750A1 (en) * 2021-03-20 2022-09-29 Georgia State University Research Foundation, Inc. Systems and methods for predicting behavioral traits
WO2022247147A1 (en) * 2021-05-24 2022-12-01 Zhejiang Dahua Technology Co., Ltd. Methods and systems for posture prediction
US20220399936A1 (en) * 2021-06-11 2022-12-15 Netdrones, Inc. Systems and methods for drone swarm wireless communication
WO2023033653A1 (en) * 2021-09-06 2023-03-09 Mylaps B.V. Vision-based sports timing and identification system
WO2023065545A1 (en) * 2021-10-19 2023-04-27 平安科技(深圳)有限公司 Risk prediction method and apparatus, and device and storage medium
CN114519877A (en) * 2021-12-30 2022-05-20 深圳云天励飞技术股份有限公司 Face recognition method, face recognition device, computer equipment and storage medium
CN115273236A (en) * 2022-07-29 2022-11-01 临沂大学 Multi-mode human gait emotion recognition method
CN115188084A (en) * 2022-08-03 2022-10-14 成都理工大学 Multi-mode identity recognition system and method for non-contact voiceprint and palm print palm vein
CN115410069A (en) * 2022-08-25 2022-11-29 绍兴幺贰玖零科技有限公司 Fault detection method and system based on multiple attention mechanism
CN115909582A (en) * 2022-11-11 2023-04-04 深圳市星宏智能科技有限公司 Entrance guard equipment and system for face recognition of wearing mask
CN116351039A (en) * 2023-03-07 2023-06-30 深圳市培林体育科技有限公司 Middle-long distance running detection system and method
CN116363738A (en) * 2023-06-01 2023-06-30 成都睿瞳科技有限责任公司 Face recognition method, system and storage medium based on multiple moving targets

Non-Patent Citations (2)

Title
WANG, S. et al.: "Interacting with soli: Exploring fine-grained dynamic gesture recognition in the radio-frequency spectrum", Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 31 October 2016 (2016-10-31), pages 851 - 860 *
YANG, Miao: "Research on person re-identification methods based on attention mechanism and data augmentation", China Master's Theses Full-text Database, Information Science and Technology Series, no. 6, 15 June 2022 (2022-06-15), pages 138 - 482 *

Similar Documents

Publication Publication Date Title
US20220092882A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN102609724B (en) Method for prompting ambient environment information by using two cameras
CN109635634A (en) A kind of pedestrian based on stochastic linear interpolation identifies data enhancement methods again
JP7292492B2 (en) Object tracking method and device, storage medium and computer program
CN109002761A (en) A kind of pedestrian's weight identification monitoring system based on depth convolutional neural networks
CN110781962B (en) Target detection method based on lightweight convolutional neural network
CN110390308B (en) Video behavior identification method based on space-time confrontation generation network
CN112085534B (en) Attention analysis method, system and storage medium
CN110478862A (en) A kind of exercise guide system and its guidance method
CN111724199A (en) Intelligent community advertisement accurate delivery method and device based on pedestrian active perception
CN110879990A (en) Method for predicting queuing waiting time of security check passenger in airport and application thereof
WO2022205329A1 (en) Object detection method, object detection apparatus, and object detection system
CN114581990A (en) Intelligent running test method and device
CN111626212B (en) Method and device for identifying object in picture, storage medium and electronic device
CN112488165A (en) Infrared pedestrian identification method and system based on deep learning model
CN110135274B (en) Face recognition-based people flow statistics method
CN116580444A (en) Method and equipment for testing long-distance running timing based on multi-antenna radio frequency identification technology
CN111738043A (en) Pedestrian re-identification method and device
CN112990156B (en) Optimal target capturing method and device based on video and related equipment
Li et al. Human motion quality assessment toward sophisticated sports scenes based on deeply-learned 3D CNN model
CN116246200A (en) Screen display information candid photographing detection method and system based on visual identification
CN112966673B (en) Construction method of pedestrian re-identification model and pedestrian re-identification method
CN115909400A (en) Identification method for using mobile phone behaviors in low-resolution monitoring scene
CN113269730B (en) Image processing method, image processing device, computer equipment and storage medium
CN115171151A (en) Pig drinking behavior detection method based on pig face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231206

Address after: Room 1503, No. 266 Kefeng Road, Huangpu District, Guangzhou City, Guangdong Province, 510700

Applicant after: Feixiang Technology (Guangzhou) Co.,Ltd.

Address before: 510000 Room 101, 201, 301, 401, 501, building 2, 1003 Asian Games Avenue, Shiqi Town, Panyu District, Guangzhou City, Guangdong Province

Applicant before: Guangzhou silinger Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20240511

Address after: Room 619, No. 2 Tengfei 1st Street, Zhongxin Guangzhou Knowledge City, Huangpu District, Guangzhou City, Guangdong Province, 510700, Room B504, Zhongke Chuanggu Zhongchuang Space

Applicant after: Guangzhou Yuedong Artificial Intelligence Technology Co.,Ltd.

Country or region after: China

Address before: Room 1503, No. 266 Kefeng Road, Huangpu District, Guangzhou City, Guangdong Province, 510700

Applicant before: Feixiang Technology (Guangzhou) Co.,Ltd.

Country or region before: China

TA01 Transfer of patent application right

Effective date of registration: 20240523

Address after: Room 3040-2, South Software Building, Guangzhou Nansha Information Technology Park, No. 2 Huanshi Avenue South, Nansha District, Guangzhou, Guangdong Province, 511400

Applicant after: Guangzhou Leti Technology Co.,Ltd.

Country or region after: China

Address before: Room 619, No. 2 Tengfei 1st Street, Zhongxin Guangzhou Knowledge City, Huangpu District, Guangzhou City, Guangdong Province, 510700, Room B504, Zhongke Chuanggu Zhongchuang Space

Applicant before: Guangzhou Yuedong Artificial Intelligence Technology Co.,Ltd.

Country or region before: China