CN111401179A - Radar data labeling method, device, server and storage medium - Google Patents

Radar data labeling method, device, server and storage medium

Info

Publication number
CN111401179A
CN111401179A (application CN202010158119.7A)
Authority
CN
China
Prior art keywords
data
radar
radar data
visual
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010158119.7A
Other languages
Chinese (zh)
Inventor
阳召成
刘海帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202010158119.7A priority Critical patent/CN111401179A/en
Publication of CN111401179A publication Critical patent/CN111401179A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a radar data labeling method, device, server and storage medium. The method comprises: acquiring visual data of a human body to be detected from a visual sensor and radar data of the same human body from a radar sensor; identifying an action tag for the visual data according to a preset algorithm; and associating the action tag with the radar data based on the visual data and radar data sharing the same detection time. This technical scheme improves the efficiency of radar data labeling.

Description

Radar data labeling method, device, server and storage medium
Technical Field
The embodiments of the invention relate to radar target classification technology, and in particular to a radar data annotation method, device, server and storage medium.
Background
Human motion recognition has long been a research hotspot; in particular, vision-based human motion recognition has matured in recent years with the rise of deep learning. Visual data is intuitive and easy to understand, many public databases already exist, and the application scenarios are numerous. However, visual data suffers heavily from environmental interference, such as occlusion of the target or weak lighting, and is unsuitable for relatively private scenes, so many researchers use radar sensors for detection instead.
However, radar-based human motion recognition has seen no major breakthrough and lacks samples, largely because no large-scale radar database for human motion recognition has been published on the internet to date. Labeling radar data requires a rather complicated signal-processing pipeline together with prior information about the environment, which is time-consuming and labor-intensive and adds considerably to the research workload.
Disclosure of Invention
The invention provides a radar data labeling method, device, server and storage medium, aiming to improve the efficiency of radar data labeling.
In a first aspect, an embodiment of the present invention provides a radar data annotation method, including:
acquiring visual data of a human body to be detected by a visual sensor and radar data of the human body to be detected by a radar sensor;
identifying an action tag for the visual data according to a preset algorithm;
associating the action tag with the radar data based on the visual data and the radar data for the same detection time.
In a second aspect, an embodiment of the present invention further provides a radar data annotation device, including:
the data acquisition module is used for acquiring visual data of a human body to be detected by a visual sensor and radar data of the human body to be detected by a radar sensor;
the action tag identification module is used for identifying an action tag for the visual data according to a preset algorithm;
the tag association module is used for associating the action tag with the radar data based on the visual data and the radar data having the same detection time.
In a third aspect, an embodiment of the present invention further provides a server, including:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the radar data labeling method described above.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the radar data annotation method as described above.
The technical scheme of the invention comprises: detecting visual data of the human body to be detected with a visual sensor and radar data of the same human body with a radar sensor; identifying an action tag for the visual data according to a preset algorithm; and associating the action tag with the radar data based on the visual data and radar data having the same detection time. This solves the problem that radar data labeling is time-consuming and labor-intensive, and achieves the effect of improving the efficiency of radar data labeling.
Drawings
Fig. 1 is a flowchart of a method for annotating radar data according to a first embodiment of the present invention.
Fig. 2 is a flowchart of a radar data annotation method according to a second embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a radar data annotation device in a third embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a server in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Furthermore, the terms "first," "second," and the like may be used herein to describe various orientations, actions, steps, elements, or the like, but these orientations, actions, steps, or elements are not limited by the terms. The terms are only used to distinguish one direction, action, step or element from another. For example, the first preset time threshold may be referred to as the second preset time threshold, and similarly, the second preset time threshold may be referred to as the first preset time threshold, without departing from the scope of the present application. Both are preset time thresholds, but they are not the same preset time threshold. The terms "first", "second", etc. are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Example one
Fig. 1 is a flowchart of a radar data annotation method according to the first embodiment of the present invention. The embodiment is applicable to radar data annotation scenarios, and the method specifically includes the following steps:
s110, acquiring visual data of a human body to be detected by a visual sensor and radar data of the human body to be detected by a radar sensor.
In this embodiment, the visual sensor refers to an apparatus that acquires image information of the external environment using an optical element and an imaging device, and its performance is generally described by image resolution. The accuracy of the visual sensor is related not only to the resolution but also to the detection distance: the farther the sensor is from the measured object, the poorer the absolute positional accuracy. For example, a Kinect V2 sensor may be used, a 3D motion-sensing camera that introduces functions such as instant motion capture, image recognition, microphone input, voice recognition and community interaction. It has a skeleton-tracking function that can track the skeleton images of one or two users within its field of view, without requiring any wearable auxiliary instrument. The radar sensor of this embodiment is a millimeter-wave radar sensor; compared with optical sensors such as cameras, infrared sensors and laser sensors, millimeter-wave radar has strong fog, smoke and dust penetration, strong anti-interference capability, and all-weather (except heavy rain), all-day characteristics. The human body to be detected is the person whose action postures are being tested.
S120, identifying an action tag for the visual data according to a preset algorithm.
In this embodiment, the preset algorithm matches action tags based on a recognition algorithm built into the visual sensor. An action tag is a classification of a human action posture; exemplary tags are (1) standing still, (2) walking at will, (3) sitting down, (4) rising, (5) using a mobile phone while seated, and (6) reading a book while seated. The human body posture can be recognized from the visual data detected by the visual sensor and matched to the corresponding action tag.
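As a purely illustrative aside (the patent does not fix numeric label IDs), the six exemplary action postures above could be encoded as an enumeration used throughout a labeling pipeline; the names and values below are assumptions, not part of the patent:

from enum import IntEnum

# Illustrative label IDs for the six exemplary action postures;
# the numeric values are assumptions, not taken from the patent.
class ActionTag(IntEnum):
    STANDING_STILL = 1
    WALKING_AT_WILL = 2
    SITTING_DOWN = 3
    RISING = 4
    PHONE_WHILE_SEATED = 5
    READING_WHILE_SEATED = 6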
S130, associating the action tag with the radar data based on the visual data and the radar data at the same detection time.
In this embodiment, the action postures of the human body to be detected may differ across time periods, so action tag labeling must be based on the same time period or even the same moment. For example, the visual sensor and the radar sensor may timestamp the data as they detect it, or detection times may be marked in advance for the visual data and the radar data; subsequently, based on the detection time, the action tag corresponding to the visual data is applied to the radar data with the same detection time, as in the sketch below.
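The following Python sketch illustrates this association step. It assumes each visual frame is a (timestamp_ms, action_tag) pair and each radar frame a (timestamp_ms, payload) pair, with nearest-timestamp matching standing in for "the same detection time"; all names are illustrative, not from the patent.

import bisect

def label_radar_frames(visual_frames, radar_frames):
    """Attach to every radar frame the action tag of the visual frame
    whose detection time is closest; visual_frames must be sorted by
    timestamp."""
    vis_times = [t for t, _ in visual_frames]
    labeled = []
    for radar_time, radar_payload in radar_frames:
        i = bisect.bisect_left(vis_times, radar_time)
        # pick the nearer of the two neighbouring visual timestamps
        candidates = [j for j in (i - 1, i) if 0 <= j < len(vis_times)]
        j = min(candidates, key=lambda k: abs(vis_times[k] - radar_time))
        labeled.append((radar_time, radar_payload, visual_frames[j][1]))
    return labeled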
The technical scheme of this embodiment comprises: detecting visual data of the human body to be detected with a visual sensor and radar data of the same human body with a radar sensor; identifying an action tag for the visual data according to a preset algorithm; and associating the action tag with the radar data based on visual data and radar data having the same detection time. This solves the problem that radar data labeling is time-consuming and labor-intensive, achieves the effect of improving labeling efficiency, and also addresses the technical problem that radar intelligent-recognition network models lack labeled radar training data.
Example two
Fig. 2 is a flowchart of a radar data annotation method according to a second embodiment of the present invention, which further optimizes the first embodiment. The method specifically includes:
and S210, performing time synchronization on the vision sensor and the radar sensor by taking preset reference time as reference.
In this embodiment, clock differences may leave the vision sensor and the radar sensor out of time synchronization, so the sensors need to be synchronized appropriately: the time of one sensor may be taken as the preset reference time and the other sensors synchronized to it, or the intermediate time of all the sensors may be taken as the preset reference time and every sensor synchronized to it, as in the sketch below.
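The Python sketch below shows one way to realize this synchronization under the two stated options (one sensor's clock as the preset reference time, or the mean of all start times); the function and field names are assumptions for illustration.

def synchronize(sensor_timestamps, reference="mean"):
    """sensor_timestamps: dict mapping sensor name -> list of
    timestamps in ms. Returns every series shifted onto the
    preset reference clock."""
    starts = {name: ts[0] for name, ts in sensor_timestamps.items()}
    if reference == "mean":
        # intermediate time of all sensors as the preset reference
        ref_start = sum(starts.values()) / len(starts)
    else:
        # a named sensor's clock as the preset reference time
        ref_start = starts[reference]
    return {
        name: [t - starts[name] + ref_start for t in ts]
        for name, ts in sensor_timestamps.items()
    }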
Optionally, the correspondence between the frame numbers of the visual data and the radar data is one-to-many. In this embodiment, because the frame rates of the visual sensor and the radar sensor differ (for example, 30 FPS for the visual sensor versus 200 FPS for the radar sensor), the data acquired by the two sensors cannot correspond frame to frame, so a one-to-many correspondence between visual and radar frame numbers may be adopted; for example, one frame of visual data may correspond to multiple frames of radar data. In addition, to increase the sample size for radar recognition, an overlapped matching mode is adopted, i.e., adjacent data samples share the data of some frames, as in the sketch below.
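Under the frame rates quoted above (30 FPS visual, 200 FPS radar), the sketch below illustrates both the one-to-many frame correspondence and the overlapped matching; the window length and stride are illustrative assumptions, not values from the patent.

def radar_windows(n_radar_frames, frames_per_sample=200, stride=100):
    """Cut the radar stream into overlapping samples: with stride <
    frames_per_sample, adjacent samples share frames, enlarging the
    training set as described above."""
    return [
        (start, start + frames_per_sample)
        for start in range(0, n_radar_frames - frames_per_sample + 1, stride)
    ]

def visual_index_for(radar_frame_idx, radar_fps=200, visual_fps=30):
    """Map a radar frame index to the visual frame covering the same
    instant; one visual frame corresponds to roughly
    radar_fps / visual_fps radar frames."""
    return int(radar_frame_idx * visual_fps / radar_fps)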
Optionally, the detection angles of the plurality of radar sensors to the human body to be detected are different. In this embodiment there are several radar sensors, i.e., different radar sensors collect data on the human body to be detected from different angles. For example, two groups of radar sensors may be set up, one group facing the front of the target and the other group facing its side, with the two radar lines of sight mutually perpendicular. Different detection angles capture more micro-Doppler information of human motion and improve recognition accuracy. Further, a radar may be placed at a high position to capture the Doppler change of the human body to be detected in the vertical direction.
S220, acquiring visual data of a human body to be detected by a visual sensor and radar data of the human body to be detected by a radar sensor.
In this embodiment, the visual sensor refers to an apparatus that acquires image information of the external environment using an optical element and an imaging device, its performance generally being described by image resolution. The radar sensor is a millimeter-wave radar sensor. The human body to be detected is the person whose action postures are being tested. Optionally, the visual data and the radar data include header information comprising a frame number, a time tag, and the time difference between adjacent frames.
In this embodiment, when the vision sensor and the radar sensor collect data, header information may be added to each frame, namely a frame number, a time tag, and the time difference between adjacent frames. The time tag may be the total number of milliseconds from zero o'clock Greenwich time to the current moment; when the radar data is subsequently processed, the visual data carrying the same time tag can be found by this tag, and the time difference between adjacent frames can serve as a criterion for judging whether frames have been dropped, as sketched below.
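A small Python sketch of such a frame header and the frame-drop check follows; the field names and the tolerance value are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class FrameHeader:
    frame_number: int
    time_tag_ms: int   # milliseconds since zero o'clock Greenwich time
    dt_prev_ms: int    # time difference to the previous frame

def dropped_frames(headers, nominal_dt_ms, tol=0.5):
    """Flag frames whose gap to the previous frame exceeds the nominal
    inter-frame interval by more than the fractional tolerance `tol`."""
    return [
        h.frame_number for h in headers
        if h.dt_prev_ms > nominal_dt_ms * (1 + tol)
    ]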
S230, inputting the skeleton sequence of the visual data into a human body action recognition network to obtain the corresponding action tag, wherein the human body action recognition network comprises a key point feature learning network, a global feature learning network and a fully connected learning network.
In this embodiment, the human body action recognition network is a preset tag classification algorithm for visual data. Illustratively, an HCN model may be employed, whose network divides into three parts. The first part, key point feature learning, feeds the skeleton sequence and the motion sequence (the difference between the skeleton sequences of adjacent frames) into two convolution branches, convolving the skeleton key points one by one. The second part, global feature learning, learns global features over all skeleton key points: the outputs of the two branches are concatenated and global feature learning continues. The third part, fully connected learning, flattens the output of the second part and passes it through two fully connected layers to obtain the final classification result; a simplified sketch follows. This embodiment relies on skeleton recognition by the visual sensor: the skeleton sequence of the human postures in the visual data is input into the human body action recognition network to obtain an action tag of high accuracy, and because time synchronization was performed when the data was collected, the radar data corresponding to the skeleton sequence can be found, completing the automatic labeling of human actions in the radar data.
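As a purely illustrative companion to this description, the PyTorch sketch below condenses the three parts into a toy model. The layer widths, kernel sizes and the 25-joint default are assumptions (the published HCN uses more elaborate hyperparameters), so this is a structural sketch, not the network of the patent.

import torch
import torch.nn as nn

class TinyHCN(nn.Module):
    def __init__(self, n_joints=25, n_coords=3, n_classes=6):
        super().__init__()
        # part 1: one conv branch each for the skeleton sequence and
        # the motion sequence (frame-to-frame skeleton difference)
        def branch():
            return nn.Sequential(
                nn.Conv2d(n_coords, 32, kernel_size=(3, 1), padding=(1, 0)),
                nn.ReLU(),
            )
        self.skel_branch, self.motion_branch = branch(), branch()
        # part 2: global feature learning after concatenating branches
        self.global_conv = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # part 3: flatten, then two fully connected layers
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, skel):  # skel: (batch, coords, frames, joints)
        motion = skel[:, :, 1:] - skel[:, :, :-1]         # temporal difference
        motion = nn.functional.pad(motion, (0, 0, 1, 0))  # keep frame count
        x = torch.cat([self.skel_branch(skel),
                       self.motion_branch(motion)], dim=1)
        return self.fc(self.global_conv(x))

A batch of skeleton sequences shaped (batch, coordinates, frames, joints) yields logits over the six exemplary action tags.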
S240, matching the same time labels in the visual data and the radar data.
S250, associating the action tag identified from the visual data with the radar data based on the same time tag.
In this embodiment, the visual data and the radar data are matched by the time tag in the header information: visual data and radar data bearing the same time tag are selected, the action tag is recognized from the visual data by the human body action recognition network, and it is associated with the radar data of the same detection time.
In the technical scheme of this embodiment, the vision sensor and the radar sensor are time-synchronized against a preset reference time; visual data of the human body to be detected is acquired from the visual sensor and radar data from the radar sensor; the skeleton sequence of the visual data is input into a human body action recognition network to obtain the corresponding action tag; the same time tags are matched in the visual data and the radar data; and the action tag identified from the visual data is associated with the radar data based on the same time tag. This solves the problem of how to identify the visual data and makes human action recognition from visual data more convenient.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a radar data annotation apparatus 300 according to a third embodiment of the present invention. The apparatus is applicable to radar data annotation scenarios, and its specific structure is as follows:
the data acquisition module 310 is configured to acquire visual data of a human body to be detected by a visual sensor and radar data of the human body to be detected by a radar sensor;
an action tag recognition module 320, configured to recognize an action tag for the visual data according to a preset algorithm;
a tag association module 330 for associating the action tag with the radar data based on the visual data and the radar data at the same detection time.
Optionally, the visual data and the radar data include header information, and the header information includes a frame number, a time tag, and a time difference between adjacent frames.
Optionally, the tag association module 330 includes a time tag matching unit and a tag association unit, wherein:
the time tag matching unit is used for matching the same time tag in the visual data and the radar data;
the tag association unit is used for associating the action tag identified from the visual data with the radar data based on the same time tag.
Optionally, the apparatus 300 further includes a time synchronization module, configured to perform time synchronization on the vision sensor and the radar sensor with reference to a preset reference time.
Optionally, the action tag recognition module 320 is specifically configured to input the skeleton sequence of the visual data into a human body action recognition network to obtain the corresponding action tag, wherein the human body action recognition network comprises a key point feature learning network, a global feature learning network and a fully connected learning network.
Optionally, the correspondence between the number of frames of the visual data and the radar data is one-to-many.
Optionally, the detection angles of the plurality of radar sensors to the human body to be detected are different.
The above product can execute the method provided by any embodiment of the present invention, and possesses functional modules and beneficial effects corresponding to the executed method.
Example four
Fig. 4 is a schematic structural diagram of a server according to a fourth embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary server 412 suitable for use in implementing embodiments of the present invention. The server 412 shown in fig. 4 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in FIG. 4, server 412 is in the form of a general purpose server. Components of server 412 may include, but are not limited to: one or more processors 416, a storage device 428, and a bus 418 that couples the various system components including the storage device 428 and the processors 416.
Bus 418 represents one or more of any of several types of bus structures, including a memory device bus or memory device controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Server 412 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by server 412 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 428 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 430 and/or cache memory 432. The server 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk such as a Compact Disc Read-Only Memory (CD-ROM), Digital Video Disc Read-Only Memory (DVD-ROM) or other optical media may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Storage 428 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored, for instance, in storage 428, such program modules 442 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. The program modules 442 generally perform the functions and/or methodologies of the described embodiments of the invention.
The server 412 may also communicate with one or more external devices 414 (e.g., a keyboard, a pointing terminal, a display 424, etc.), with one or more terminals that enable a user to interact with the server 412, and/or with any terminal (e.g., a network card, a modem, etc.) that enables the server 412 to communicate with one or more other computing terminals. Such communication may take place via an input/output (I/O) interface 422. Also, the server 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via a network adapter 420. As shown in FIG. 4, the network adapter 420 communicates with the other modules of the server 412 via the bus 418. It should be appreciated that, although not shown, other hardware and/or software modules may be used in conjunction with the server 412, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor 416 executes various functional applications and data processing by executing programs stored in the storage device 428, for example, implementing a radar data annotation method provided by any embodiment of the present invention, which may include:
acquiring visual data of a human body to be detected by a visual sensor and radar data of the human body to be detected by a radar sensor;
identifying an action tag for the visual data according to a preset algorithm;
associating the action tag with the radar data based on the visual data and the radar data for the same detection time.
EXAMPLE five
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for tagging radar data, where the method includes:
acquiring visual data of a human body to be detected by a visual sensor and radar data of the human body to be detected by a radar sensor;
identifying an action tag for the visual data according to a preset algorithm;
associating the action tag with the radar data based on the visual data and the radar data for the same detection time.
The computer-readable storage media of embodiments of the invention may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for annotating radar data, comprising:
acquiring visual data of a human body to be detected by a visual sensor and radar data of the human body to be detected by a radar sensor;
identifying an action tag for the visual data according to a preset algorithm;
associating the action tag with the radar data based on the visual data and the radar data for the same detection time.
2. The radar data tagging method of claim 1, wherein said visual data and said radar data comprise header information, said header information comprising a frame number, a time tag, and a time difference of adjacent frames.
3. The radar data tagging method of claim 2, wherein said associating the action tag with the radar data based on the visual data and the radar data at the same detection time comprises:
matching the same time tag in the visual data and the radar data;
associating the action tag identified by the visual data with the radar data based on the same time tag.
4. The method for labeling radar data according to claim 1, wherein the number of the radar sensors is plural, and before the obtaining of the visual data of the human body to be detected by the visual sensor and the radar data of the human body to be detected by the radar sensor, the method further comprises:
performing time synchronization on the vision sensor and the radar sensor with a preset reference time as the reference.
5. The method of claim 1, wherein the identifying the action tag for the visual data according to a predetermined algorithm comprises:
inputting the skeleton sequence of the visual data into a human body action recognition network to obtain a corresponding action tag, wherein the human body action recognition network comprises a key point feature learning network, a global feature learning network and a fully connected learning network.
6. The radar data labeling method of claim 4, wherein the correspondence between the frame numbers of the visual data and the radar data is one-to-many.
7. The radar data annotation method of claim 4, wherein detection angles of the plurality of radar sensors with respect to the human body to be detected are different.
8. A radar data tagging apparatus, comprising:
the data acquisition module is used for acquiring visual data of a human body to be detected by a visual sensor and radar data of the human body to be detected by a radar sensor;
the action tag identification module is used for identifying an action tag for the visual data according to a preset algorithm;
the tag association module is used for associating the action tag with the radar data based on the visual data and the radar data having the same detection time.
9. A server, comprising:
one or more processors;
a storage device for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the radar data annotation method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a method of radar data annotation according to any one of claims 1 to 7.
CN202010158119.7A 2020-03-09 2020-03-09 Radar data labeling method, device, server and storage medium Pending CN111401179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158119.7A CN111401179A (en) 2020-03-09 2020-03-09 Radar data labeling method, device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010158119.7A CN111401179A (en) 2020-03-09 2020-03-09 Radar data labeling method, device, server and storage medium

Publications (1)

Publication Number Publication Date
CN111401179A 2020-07-10

Family

ID=71432407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158119.7A Pending CN111401179A (en) 2020-03-09 2020-03-09 Radar data labeling method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN111401179A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117849896A (en) * 2024-01-17 2024-04-09 北京神州明达高科技有限公司 Integrated microseism life detection method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180025235A1 (en) * 2016-07-21 2018-01-25 Mobileye Vision Technologies Ltd. Crowdsourcing the collection of road surface information
CN108875708A (en) * 2018-07-18 2018-11-23 广东工业大学 Behavior analysis method, device, equipment, system and storage medium based on video
CN110598743A (en) * 2019-08-12 2019-12-20 北京三快在线科技有限公司 Target object labeling method and device
CN110659543A (en) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Vehicle control method and system based on gesture recognition and vehicle

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180025235A1 (en) * 2016-07-21 2018-01-25 Mobileye Vision Technologies Ltd. Crowdsourcing the collection of road surface information
CN110659543A (en) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Vehicle control method and system based on gesture recognition and vehicle
CN108875708A (en) * 2018-07-18 2018-11-23 广东工业大学 Behavior analysis method, device, equipment, system and storage medium based on video
CN110598743A (en) * 2019-08-12 2019-12-20 北京三快在线科技有限公司 Target object labeling method and device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117849896A (en) * 2024-01-17 2024-04-09 北京神州明达高科技有限公司 Integrated microseism life detection method and system

Similar Documents

Publication Publication Date Title
US11379696B2 (en) Pedestrian re-identification method, computer device and readable medium
Qu et al. RGBD salient object detection via deep fusion
CN109948542B (en) Gesture recognition method and device, electronic equipment and storage medium
CN112861575A (en) Pedestrian structuring method, device, equipment and storage medium
CN110610127B (en) Face recognition method and device, storage medium and electronic equipment
CN110175528B (en) Human body tracking method and device, computer equipment and readable medium
CN110490905A (en) A kind of method for tracking target based on YOLOv3 and DSST algorithm
CN111914667B (en) Smoking detection method and device
Tian et al. Scene Text Detection in Video by Learning Locally and Globally.
CN113763466B (en) Loop detection method and device, electronic equipment and storage medium
CN112307864A (en) Method and device for determining target object and man-machine interaction system
CN112861808B (en) Dynamic gesture recognition method, device, computer equipment and readable storage medium
CN103105924A (en) Man-machine interaction method and device
Yu et al. Spatial cognition-driven deep learning for car detection in unmanned aerial vehicle imagery
CN112541403B (en) Indoor personnel falling detection method by utilizing infrared camera
Fei et al. Flow-pose Net: An effective two-stream network for fall detection
Beg et al. Text writing in the air
KR102440198B1 (en) VIDEO SEARCH METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
CN111399634B (en) Method and device for recognizing gesture-guided object
CN109345567B (en) Object motion track identification method, device, equipment and storage medium
CN112686122B (en) Human body and shadow detection method and device, electronic equipment and storage medium
Abdulghani et al. Discover human poses similarity and action recognition based on machine learning
CN111401179A (en) Radar data labeling method, device, server and storage medium
CN110849380B (en) Map alignment method and system based on collaborative VSLAM
CN115527083B (en) Image annotation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200710