CN115048954B - Retina-imitating target detection method and device, storage medium and terminal - Google Patents

Retina-imitating target detection method and device, storage medium and terminal

Info

Publication number
CN115048954B
Authority
CN
China
Prior art keywords: pulse array, array signal, signal, time domain, pulse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210565522.0A
Other languages
Chinese (zh)
Other versions
CN115048954A (en)
Inventor
田永鸿
李家宁
朱林
项锡捷
王艺璇
李典泽
Current Assignee
Peking University
Original Assignee
Peking University
Priority date
Filing date
Publication date
Application filed by Peking University
Priority to CN202210565522.0A
Publication of CN115048954A
Application granted
Publication of CN115048954B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a retina-imitating target detection method, device, storage medium and terminal, wherein the retina-imitating target detection method comprises the following steps: acquiring a target pulse array signal, wherein the target pulse array signal comprises a first pulse array signal from a simulated retina fovea sampling circuit and a second pulse array signal from a simulated retina periphery sampling circuit; performing space-time synchronization on the first pulse array signal and the second pulse array signal to obtain a pulse array signal to be processed; and inputting the pulse array signal to be processed into a pre-trained target detector and outputting a target detection result corresponding to the pulse array signal to be processed. The application exploits the high-speed visual texture imaging of the simulated retina fovea sampling circuit together with the high time resolution, high dynamic range and low power consumption of the simulated retina periphery sampling circuit, thereby overcoming the difficulty traditional cameras have in achieving high-precision detection in high-speed-motion, over-illuminated and low-illumination scenes, and improving detection precision in extreme scenes.

Description

Retina-imitating target detection method and device, storage medium and terminal
Technical Field
The invention relates to the technical field of computer vision, and in particular to a retina-imitating target detection method and device, a storage medium and a terminal.
Background
Visual target detection collects visual information of a target through a specific sensor and calculates the position information and category attributes of objects of interest. Accurate target detection is the basis of advanced visual tasks such as target tracking, behavior understanding and video object retrieval, and is widely applied in fields such as automatic driving, video monitoring and human-computer interaction. Deep learning technology has greatly improved the accuracy of image and video target detection, but the traditional image frame paradigm severely restricts real-time accurate detection of target objects in extreme scenes such as high-speed motion or low illumination.
In recent years, driven by big data and strong computing power, deep learning object detectors have learned the high-level semantic features of targets layer by layer through supervised learning, avoiding the tedium and inefficiency of hand-designed features, and have made breakthrough progress toward relatively mature application at conventional movement speeds and under suitable illumination. However, there is an increasing need for visual imaging and detection of moving targets in actual complex scenes (high-speed motion or extreme illumination), such as high-speed aircraft pose measurement, detection of fast-moving vehicles, and ball localization in sporting events. Typical high-speed targets have short duration, strong maneuverability, complex trajectories and poor regularity, with motion speeds exceeding 100 km/h; since the sampling rate of the traditional image frame paradigm is generally only 30-120 frames/s, a high-speed target is displaced substantially within a single exposure time, and the resulting severe motion blur greatly degrades the performance of high-speed moving target detection.
In addition, most existing high-speed cameras adopt the traditional "what you see is what you get" image frame imaging mechanism and obtain a high-frame-rate image sequence directly through dense sampling. The massive redundant data they generate poses great challenges for storage and processing, and real-time accurate detection of high-speed moving targets can hardly be satisfied under limited resources. High-speed cameras are also expensive, which limits their wide application to a certain extent. Therefore, the existing image frame paradigm can hardly meet the requirements of efficient and accurate target detection in actual extreme scenes, and it is urgent to explore novel visual sampling paradigms and target detection techniques.
Disclosure of Invention
The embodiment of the application provides a retina-imitating target detection method, a retina-imitating target detection device, a storage medium and a terminal. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present application provides a retina-like target detection method, including:
acquiring a target pulse array signal, wherein the target pulse array signal comprises a first pulse array signal from a simulated retina fovea sampling circuit and a second pulse array signal from a simulated retina periphery sampling circuit;
performing space-time synchronization on the first pulse array signal and the second pulse array signal to obtain a pulse array signal to be processed; and
inputting the pulse array signal to be processed into a pre-trained target detector, and outputting a target detection result corresponding to the pulse array signal to be processed.
Optionally, the space-time synchronization includes time domain synchronization and spatial synchronization;
performing space-time synchronization of the first pulse array signal and the second pulse array signal, comprising:
synchronizing the time domain of the first pulse array signal and the time domain of the second pulse array signal by adopting synchronous triggering acquisition software;
and constructing a mapping relation between the first pulse array signal and the second pulse array signal by adopting a spatial homography transformation, so as to spatially synchronize the first pulse array signal and the second pulse array signal.
Optionally, the pre-trained target detector includes a time domain aggregation characterization module and a dynamic interaction fusion module;
Inputting the pulse array signal to be processed into a pre-trained target detector, and outputting a target detection result corresponding to the pulse array signal to be processed, wherein the method comprises the following steps:
The time domain aggregation characterization module performs time domain modeling according to the pulse array signal to be processed to obtain a first feature map sequence and a second feature map sequence;
the dynamic interaction fusion module carries out complementary fusion according to the first characteristic diagram sequence and the second characteristic diagram sequence to obtain a target detection result corresponding to the pulse array signal to be processed; wherein,
The fusion mode of the complementary fusion is feature addition, feature concatenation, or signal interaction between network models.
Optionally, the time domain aggregation characterization module comprises a signal dividing sub-module, a feature characterization sub-module and an information mining sub-module; the pulse array signal to be processed comprises a first synchronous pulse array signal and a second synchronous pulse array signal;
The time domain aggregation characterization module performs time domain modeling according to the pulse array signal to be processed to obtain a first feature map sequence and a second feature map sequence, and the time domain aggregation characterization module comprises:
the signal dividing sub-module dynamically divides the first synchronous pulse array signal and the second synchronous pulse array signal to obtain divided pulse array signals;
the feature characterization sub-module performs feature encoding on the divided pulse array signals to obtain a plurality of coding features in the time domain;
And the information mining sub-module performs time domain modeling according to the plurality of coding features on the time domain to obtain a first feature map sequence and a second feature map sequence.
Optionally, dynamically dividing the first synchronous pulse array signal and the second synchronous pulse array signal to obtain divided pulse array signals, including:
respectively acquiring signal change information of each of the first synchronous pulse array signal and the second synchronous pulse array signal in a time domain to obtain first signal change information and second signal change information;
Dynamically adjusting a signal division threshold according to the first signal change information to obtain an adjusted first signal division threshold;
Dynamically adjusting the signal dividing threshold according to the second signal change information to obtain an adjusted second signal dividing threshold;
Dynamically dividing the first synchronous pulse array signal by adopting the adjusted first signal dividing threshold value to obtain a divided first pulse array signal;
and dynamically dividing the second synchronous pulse array signal by adopting the adjusted second signal dividing threshold value to obtain a divided second pulse array signal.
Optionally, feature encoding is performed on the divided pulse array signal to obtain a plurality of encoding features in a time domain, including:
reconstructing the divided first pulse array signal according to a timestamp with a preset fixed frequency in a time domain to obtain a reconstructed image sequence;
mapping, transposing and affine transforming the reconstructed image sequence to obtain a plurality of first coding features in the time domain;
and carrying out space transformation on the divided second pulse array signals to obtain a plurality of second coding features in the time domain.
Optionally, the simulated fovea sampling circuit is an integral visual sampling model that represents visual texture information with asynchronous pulse array signals; the retina-imitating peripheral sampling circuit is a differential visual sampling model that represents scene dynamic information with asynchronous pulse array signals; wherein the integral visual sampling model is a sampling-neuron integrate-and-fire model.
In a second aspect, an embodiment of the present application provides a retina-like target detection device, including:
The signal acquisition module is used for acquiring a target pulse array signal; wherein,
The target pulse array signal comprises a first pulse array signal from a simulated retina fovea sampling circuit and a second pulse array signal from a simulated retina periphery sampling circuit;
The space-time synchronization module is used for performing space-time synchronization on the first pulse array signal and the second pulse array signal to obtain a pulse array signal to be processed;
The detection result output module is used for inputting the pulse array signal to be processed into a pre-trained target detector and outputting a target detection result corresponding to the pulse array signal to be processed.
In a third aspect, embodiments of the present application provide a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to perform the above-described method steps.
In a fourth aspect, an embodiment of the present application provides a terminal, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps described above.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
In the embodiment of the application, the retina-imitating target detection device first acquires a target pulse array signal, wherein the target pulse array signal comprises a first pulse array signal from the simulated retina fovea sampling circuit and a second pulse array signal from the simulated retina periphery sampling circuit; the first pulse array signal and the second pulse array signal are then space-time synchronized to obtain a pulse array signal to be processed; finally, the pulse array signal to be processed is input into a pre-trained target detector, and a target detection result corresponding to the pulse array signal to be processed is output. The application exploits the high-speed visual texture imaging of the simulated retina fovea sampling circuit together with the high time resolution, high dynamic range and low power consumption of the simulated retina periphery sampling circuit, thereby overcoming the difficulty traditional cameras have in achieving high-precision detection in high-speed-motion, over-illuminated and low-illumination scenes, and improving detection precision in extreme scenes.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic flow chart of a retina-imitating target detection method according to an embodiment of the present application;
FIG. 2 shows a pulse array signal acquisition device combining a simulated fovea with the periphery;
FIG. 3 is a schematic diagram of a standard checkerboard correction plate provided by an embodiment of the present application;
FIG. 4 is a flow chart of a network configuration process of the object detection process provided by the present application;
FIG. 5 is a scene graph of a high-speed motion scene provided by the application;
FIG. 6 is a schematic block diagram of a retina-like target detection flow provided by the application;
Fig. 7 is a schematic structural diagram of a retina-imitating target detection device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them.
It should be understood that the described embodiments are merely some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention as detailed in the accompanying claims.
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art. Furthermore, in the description of the present invention, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may indicate that A exists alone, that A and B exist together, or that B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
The application provides a retina-imitating target detection method and device, a storage medium and a terminal, which are used for solving the above problems in the related art. In the technical scheme provided by the application, the high-speed visual texture imaging of the simulated retina fovea sampling circuit is utilized and combined with the high time resolution, high dynamic range and low power consumption of the simulated retina peripheral visual sampling circuit, so that the difficulty traditional cameras have in achieving high-precision detection in high-speed-motion, over-illuminated and low-illumination scenes can be overcome and the detection precision in extreme scenes improved. The method is described in detail below through exemplary embodiments.
The following describes in detail the retina-like target detection method according to the embodiment of the present application with reference to fig. 1 to 6. The method may be implemented in dependence on a computer program, and may be run on a von Neumann system-based retinal-like target detection device. The computer program may be integrated in the application or may run as a stand-alone tool class application.
Referring to fig. 1, a flowchart of a retina-like target detection method is provided in an embodiment of the present application. As shown in fig. 1, the method according to the embodiment of the present application may include the following steps:
S101, acquiring a target pulse array signal;
The target pulse array signal comprises a first pulse array signal from a simulated retina fovea sampling circuit and a second pulse array signal from a simulated retina periphery sampling circuit;
Generally, the biological vision system has the advantages of high definition, low power consumption, strong robustness and the like, and can efficiently process optical signals, sense three-dimensional information of complex scenes and objects, and understand and identify the scenes.
In recent years, retina-like sensors have emerged that mimic the imaging mechanism of the biological retinal visual pathway; at present they mainly comprise the dynamic vision sensor (Dynamic Vision Sensor, DVS) and the ultra-high-speed full-time vision sensor (Vidar). The dynamic vision sensor simulates the brightness-change sensitivity mechanism of peripheral retinal cells, and the neural pulse signals it issues are described by spatio-temporally sparse pulse array signals; compared with a traditional fixed-frame-rate camera it has the advantages of high time resolution, high dynamic range and low power consumption, but it cannot capture the texture details of a scene. The ultra-high-speed full-time vision sensor simulates the clear-imaging mechanism of the fovea: a neural pulse is issued whenever the integrated light intensity of the scene exceeds a threshold, so visual information is recorded at full time with spatio-temporally sparse pulse signals.
Target detection with the dynamic vision sensor (DVS) overcomes, to some extent, the shortcomings of conventional image frames in extreme scenes, but a DVS camera senses only motion information and cannot provide fine texture, so it is difficult to achieve accurate detection of targets in some scenes (such as stationary or slow motion) using DVS pulse streams alone. A dual-mode vision sensor mixing dynamic vision with images (Dynamic and Active-pixel Vision Sensor, DAVIS) has been employed for target detection, which utilizes the complementary advantages of DVS pulse streams and image frames to enhance detection performance; however, the limited frame rate of conventional image frames limits the accuracy of such a joint target detector in high-speed motion scenes. Therefore, it is necessary to explore new visual sampling means and target detection frameworks to realize high-precision target detection in extreme scenes.
Recently, the Vidar camera has simulated the three-layer abstraction structure of the fovea. Unlike the dynamic sensing function of the DVS camera, which simulates the retinal periphery, it adopts a neuron integrate-and-fire model: the light intensity information of a single pixel is pulse frequency modulated (Pulse Frequency Modulation, PFM) or pulse width modulated (Pulse Width Modulation, PWM), and a texture image can be reconstructed from the pulse firing frequency or the pulse intervals within a window. The time-domain sampling frequency of the Vidar camera reaches 20000 Hz, and this high-speed texture imaging capability provides a brand-new scheme for high-speed moving object detection. However, the dynamic sensing range of the Vidar camera is lower than that of the DVS camera, so Vidar can hardly image with high quality under low-light conditions. In fact, the Vidar camera and the DVS camera are complementary in perceptual characteristics such as time-domain resolution and dynamic range. Some studies have shown that the fovea and the peripheral regions of the human visual system are not separated from each other; their dynamic interaction and synergy achieve stronger visual perception and target-locating capability. Therefore, accurate detection of targets in extreme scenes can be realized by simulating the cooperative sensing mechanism of the fovea and the periphery.
In the embodiment of the application, for example, as shown in fig. 2, which is the pulse array signal acquisition device combining a simulated fovea with the periphery provided by the application, a Vidar camera and a DAVIS camera (comprising both DVS and image-frame modes) are adopted, and a Thorlabs CCM1-BS013 beam splitter is selected to divide the optical path equally between the two neuromorphic vision cameras. The optical path is then established: the two faces of the Thorlabs CCM1-BS013 prism are connected to the Vidar camera and the DAVIS camera respectively, both cameras are placed adjacent to an acquisition server, and a chip arranged in the server processes the pulse array signals.
In one possible implementation, in performing target detection, first a first pulse array signal from a simulated fovea sampling circuit and a second pulse array signal from a simulated retinal peripheral sampling circuit are acquired to obtain a target pulse array signal.
S102, performing space-time synchronization on a first pulse array signal and a second pulse array signal to obtain a pulse array signal to be processed;
In general, spatio-temporal synchronization includes time domain synchronization and spatial synchronization.
In the embodiment of the application, the first pulse array signal and the second pulse array signal are first time-domain synchronized by means of synchronous-trigger acquisition software, and then a mapping relation between the first pulse array signal and the second pulse array signal is constructed by a spatial homography transformation, so as to spatially synchronize the first pulse array signal and the second pulse array signal.
For example, the time-domain synchronization can be implemented by designing synchronous-trigger acquisition software at the server control end that triggers the Vidar camera and the DAVIS346 camera to acquire simultaneously, so that the two paths of asynchronous pulse array signals are obtained at the same time and then processed. For spatial synchronization, a standard checkerboard correction plate is placed in front of the hybrid camera, as shown in FIG. 3, and a spatial homography transformation is used to construct a mapping between the two camera views, mapping the Vidar camera view to the DAVIS view.
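The checkerboard-based spatial synchronization can be sketched as follows, assuming the spatial transformation is a planar homography between the two camera views. The sketch below estimates the 3x3 homography from four hypothetical corner correspondences with a direct linear transform in plain numpy (a real calibration would use many checkerboard corners and a vision toolbox; all coordinates here are illustrative):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H such that dst ~ H @ src (DLT, >= 4 pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Map pixel coordinates from one camera view into the other."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Illustrative checkerboard-corner correspondences (Vidar view -> DAVIS view):
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, 20), (110, 22), (112, 121), (12, 119)]
H = estimate_homography(src, dst)
mapped = warp_points(H, src)
```

With four correspondences in general position the fit is exact, so `mapped` reproduces `dst`; in practice an over-determined fit over many corners averages out corner-detection noise.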
In one possible implementation, the pulse array signal to be processed may be obtained after time-domain synchronization and spatial synchronization of the two pulse array signals of the two cameras.
S103, inputting the pulse array signal to be processed into a pre-trained target detector, and outputting a target detection result corresponding to the pulse array signal to be processed.
In the embodiment of the present application, for example, as shown in fig. 4, fig. 4 is a network structure processing flowchart of a target detection process provided by the present application, where a pre-trained target detector includes a time domain aggregation characterization module and a dynamic interaction fusion module, and after pulse array signals of two cameras are processed by the network structure, a detection result in a high-speed motion scene or a low-illumination scene can be output.
Specifically, when the pulse array signal to be processed is input into the pre-trained target detector and the corresponding target detection result is output, the time domain aggregation characterization module first performs time domain modeling on the pulse array signal to be processed to obtain a first feature map sequence and a second feature map sequence, and the dynamic interaction fusion module then performs complementary fusion of the first feature map sequence and the second feature map sequence to obtain the target detection result corresponding to the pulse array signal to be processed. The fusion mode of the complementary fusion is feature addition, feature concatenation, or signal interaction between network models.
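The first two fusion modes can be illustrated in a few lines; the feature-map shapes below are hypothetical, and the learned network-model interaction mode is omitted:

```python
import numpy as np

# Hypothetical feature maps from the two branches, shape (channels, H, W):
feat_fovea = np.random.rand(64, 32, 32)      # texture branch (Vidar-like)
feat_periphery = np.random.rand(64, 32, 32)  # dynamic branch (DVS-like)

# Mode 1: element-wise feature addition (channel count unchanged).
fused_add = feat_fovea + feat_periphery

# Mode 2: channel-wise concatenation (channel count doubles; a detector
# would typically project it back down, e.g. with a 1x1 convolution).
fused_cat = np.concatenate([feat_fovea, feat_periphery], axis=0)
```

Addition forces the two branches into a shared feature space, while concatenation keeps them separate and lets later layers learn how to weight each branch.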
Specifically, the time domain aggregation characterization module comprises a signal dividing sub-module, a characteristic characterization sub-module and an information mining sub-module; the pulse array signal to be processed comprises a first synchronous pulse array signal and a second synchronous pulse array signal.
Further, when the time domain aggregation characterization module performs time domain modeling according to the pulse array signal to be processed to obtain a first feature map sequence and a second feature map sequence, the signal dividing sub-module dynamically divides the first synchronous pulse array signal and the second synchronous pulse array signal to obtain divided pulse array signals, then the feature characterization sub-module performs feature encoding on the divided pulse array signals to obtain a plurality of encoding features in the time domain, and finally the information mining sub-module performs time domain modeling according to the plurality of encoding features in the time domain to obtain the first feature map sequence and the second feature map sequence.
Specifically, when the first synchronous pulse array signal and the second synchronous pulse array signal are dynamically divided to obtain divided pulse array signals, firstly, signal change information of the first synchronous pulse array signal and the second synchronous pulse array signal on a time domain is respectively obtained to obtain first signal change information and second signal change information, then, a signal division threshold value is dynamically adjusted according to the first signal change information to obtain an adjusted first signal division threshold value, then, the signal division threshold value is dynamically adjusted according to the second signal change information to obtain an adjusted second signal division threshold value, then, the first synchronous pulse array signal is dynamically divided by adopting the adjusted first signal division threshold value to obtain divided first pulse array signals, and finally, the second synchronous pulse array signal is dynamically divided by adopting the adjusted second signal division threshold value to obtain divided second pulse array signals.
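A minimal sketch of one way such threshold-based dynamic division could work: windows shorten when the local pulse rate (a proxy for the signal change information above) rises. The rate-to-length rule and all constants are illustrative assumptions, not the patented method:

```python
import numpy as np

def dynamic_divide(timestamps, base_window=1000.0, sensitivity=0.5):
    """Divide a pulse stream into time windows whose length shrinks
    when the local pulse rate (a proxy for scene change) rises.

    timestamps: sorted 1-D array of pulse times (e.g. in microseconds).
    Returns a list of (start, end) window boundaries.
    """
    ts = np.asarray(timestamps, dtype=float)
    windows = []
    t = ts[0]
    while t < ts[-1]:
        # Pulses in the preceding base_window interval estimate local activity.
        recent = np.count_nonzero((ts >= t - base_window) & (ts < t))
        # Higher activity -> the division threshold tightens -> shorter window.
        length = base_window / (1.0 + sensitivity * recent)
        windows.append((t, t + length))
        t += length
    return windows

# Sparse pulses early, dense pulses late (the scene speeds up):
ts = np.concatenate([np.arange(0, 5000, 500.0), np.arange(5000, 6000, 10.0)])
windows = dynamic_divide(ts)
lengths = [end - start for start, end in windows]
```

In this toy stream, the early windows stay near the base length while windows inside the dense segment become much shorter, so fast scene changes are sampled more finely in time.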
Specifically, when feature encoding is performed on the divided pulse array signals to obtain a plurality of encoding features in the time domain, firstly reconstructing the divided first pulse array signals according to a timestamp with a preset fixed frequency in the time domain to obtain a reconstructed image sequence, then mapping, transposing and affine transforming the reconstructed image sequence to obtain a plurality of first encoding features in the time domain, and finally performing space transformation on the divided second pulse array signals to obtain a plurality of second encoding features in the time domain.
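The fixed-frequency reconstruction step can be illustrated with a toy example: under the integral sampling model, the pulse count within a window approximates pixel brightness, so counting pulses at fixed-frequency timestamps yields a reconstructed image sequence. The array shapes and the bright/dark split below are illustrative:

```python
import numpy as np

def reconstruct_frames(pulse_volume, window=40):
    """Reconstruct a texture image sequence from a fovea-like pulse array.

    pulse_volume: binary array of shape (T, H, W), one bit per pixel per
    time step. At each fixed-frequency timestamp, pixel intensity is
    estimated from the pulse count inside the preceding window, since
    pulse frequency encodes brightness under the integral sampling model.
    """
    T, H, W = pulse_volume.shape
    frames = []
    for t in range(window, T + 1, window):
        counts = pulse_volume[t - window:t].sum(axis=0)
        frames.append(counts.astype(float) / window)  # normalised intensity
    return np.stack(frames)

# Toy pulse volume: the left half of the image fires twice as often.
rng = np.random.default_rng(0)
vol = np.zeros((80, 4, 8), dtype=np.uint8)
vol[:, :, :4] = rng.random((80, 4, 4)) < 0.5   # bright half
vol[:, :, 4:] = rng.random((80, 4, 4)) < 0.25  # dark half
frames = reconstruct_frames(vol, window=40)
```

The reconstructed frames recover the brightness contrast between the two halves, and this image sequence is what the mapping, transposition and affine transformations would subsequently operate on.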
Specifically, the simulated fovea sampling circuit is an integral visual sampling model that represents visual texture information with asynchronous pulse array signals; the retina-imitating peripheral sampling circuit is a differential visual sampling model that represents scene dynamic information with asynchronous pulse array signals; wherein the integral visual sampling model is a sampling-neuron integrate-and-fire model.
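The integrate-and-fire ("integral issuing") sampling model can be sketched for a single pixel as follows; the threshold and the intensity values are illustrative, not taken from the patent:

```python
import numpy as np

def integrate_and_fire(intensity, threshold=255.0):
    """Simulate fovea-like integral sampling for one pixel.

    intensity: 1-D array of light-intensity samples over time.
    A pulse fires whenever the accumulated intensity crosses the
    threshold, after which the threshold amount is subtracted from
    the accumulator (carrying the remainder forward).
    """
    acc = 0.0
    pulses = np.zeros(len(intensity), dtype=np.uint8)
    for t, value in enumerate(intensity):
        acc += value
        if acc >= threshold:
            pulses[t] = 1
            acc -= threshold
    return pulses

# Brighter pixels accumulate faster and therefore fire more often,
# so pulse frequency encodes texture (the PFM behaviour noted above).
bright = integrate_and_fire(np.full(100, 50.0))
dark = integrate_and_fire(np.full(100, 5.0))
```

Over 100 steps the bright pixel emits 19 pulses (5000 accumulated / 255 per pulse) against a single pulse for the dark pixel, which is exactly the frequency contrast the reconstruction step reads back out.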
For example, as shown in fig. 5, which is a scene diagram of a high-speed motion scene provided by the application: two pulse signal arrays are collected from fan blades spinning at high speed by two retina-like cameras, the pulse signal arrays are input into the target detector, and the detection result is output; the letters on each fan blade are clearly visible in the detection result.
For example, as shown in fig. 6, which is a schematic block diagram of the retina-like target detection flow provided by the present application: first, a simulated retina fovea sampling circuit and a simulated retina periphery sampling circuit are integrated into an acquisition device, and the pulse array signals output by the two sampling circuits are used as the input of the target detector. The two continuous pulse array signals are then flexibly divided, and feature encoding is performed on the divided pulse array signals. Next, time domain modeling is performed on the plurality of coding features in the time domain to mine the temporal correlation of the pulse array signals. Finally, feature fusion is performed on the two pulse array signals, and the output frequency of the target detection result is set flexibly according to the inference-frequency requirement of the target detection task.
Specifically, a flexible division strategy is applied to the pulse array signal, and it can be adaptively adjusted according to the degree of scene change, optical flow information, motion speed, and the like.
Specifically, feature coding of the pulse array signal may use a hand-crafted kernel function, a convolutional neural network, or a spiking neural network to encode the discrete lattice in three-dimensional space into a three-dimensional tensor, and this tensor is compatible with existing deep learning models.
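As an illustration of the hand-crafted kernel option, the sketch below accumulates (t, y, x) spikes into a dense bins x H x W tensor using bilinear temporal weighting; the bilinear kernel is one common choice in event-based vision, assumed here rather than mandated by the application:

```python
import numpy as np

def spikes_to_voxel(spikes, bins, height, width, t_end):
    """Encode a discrete spike lattice (t, y, x) into a bins x H x W
    tensor: each spike's unit mass is split bilinearly between the two
    nearest temporal bins, so standard deep-learning models can
    consume the result."""
    vox = np.zeros((bins, height, width), dtype=np.float32)
    for t, y, x in spikes:
        pos = t / t_end * (bins - 1)      # fractional bin index
        lo = min(int(pos), bins - 1)
        w_hi = pos - lo                   # bilinear weight for upper bin
        vox[lo, y, x] += 1.0 - w_hi
        if lo + 1 < bins:
            vox[lo + 1, y, x] += w_hi
    return vox
```

Because the total mass equals the spike count, no pulses are lost in the encoding.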
Specifically, time domain modeling is performed on the plurality of feature vectors of the pulse array signal in the time domain, and structures such as a recurrent neural network or a Transformer may be used to mine the temporal correlation of the pulse array signal.
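A minimal recurrent pass over the per-segment coding features, standing in for the recurrent-network or Transformer structures mentioned above; the random, untrained weights and the simple tanh cell are illustrative assumptions only:

```python
import numpy as np

def rnn_aggregate(features, hidden_dim=16, seed=0):
    """Run a simple (Elman-style) recurrent cell over a (T, d) sequence
    of coding features, returning the (T, hidden_dim) hidden states
    that carry temporal context between segments."""
    rng = np.random.default_rng(seed)
    feat_dim = features.shape[1]
    w_in = rng.normal(0.0, 0.1, (hidden_dim, feat_dim))    # input weights
    w_rec = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim)) # recurrent weights
    h = np.zeros(hidden_dim)
    outs = []
    for f in features:                 # time-major iteration
        h = np.tanh(w_in @ f + w_rec @ h)
        outs.append(h.copy())
    return np.stack(outs)
```

In a trained detector the same role would be played by a learned GRU/LSTM or a Transformer encoder over the segment axis.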
Specifically, the output frequency of the target detection result can be set flexibly. Unlike the fixed frame rate of traditional video, the inference output frequency is adjusted according to the requirements of the detection task by tuning the time-domain length of the pulse array signal or the pulse data; for example, the inference frequency in a high-speed motion scene can reach the kilohertz range.
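The relation between a desired inference frequency and the time-domain window of pulse data consumed per inference can be stated directly; the microsecond unit below is an assumption matching typical neuromorphic timestamp resolution:

```python
def window_length_us(inference_hz):
    """Time-domain length (microseconds) of pulse data per inference
    for a desired inference output frequency."""
    return 1_000_000.0 / inference_hz
```

At a kilohertz inference rate this yields 1 ms windows, versus roughly 33 ms windows at a conventional 30 Hz video rate.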
Specifically, the integral visual sampling model is an integrate-and-fire sampling-neuron model, and its neuromorphic vision sensors include, but are not limited to, ultra-high-speed full-time vision sensors, which offer high temporal resolution, clear texture, and other advantages.
Specifically, the neuromorphic vision sensors of the differential visual sampling model include, but are not limited to, DVS, DAVIS, ATIS, and CeleX, which offer high temporal resolution, high dynamic range, low power consumption, and other advantages.
Specifically, the space-time synchronization method in the acquisition device is not limited to the simulated fovea sampling circuit and the simulated retina periphery sampling circuit; it can be extended to combinations of the simulated fovea sampling circuit, the simulated retina periphery sampling circuit, and a traditional image-frame sampling circuit.
In the embodiment of the application, the retina-like target detection device first acquires a target pulse array signal, where the target pulse array signal comprises a first pulse array signal from a simulated retina fovea sampling circuit and a second pulse array signal from a simulated retina periphery sampling circuit. The first pulse array signal and the second pulse array signal are then synchronized in space and time to obtain a pulse array signal to be processed. Finally, the pulse array signal to be processed is input into a pre-trained target detector, which outputs the target detection result corresponding to the pulse array signal to be processed. The application combines the high-speed visual texture imaging of the simulated retina fovea sampling circuit with the high temporal resolution, high dynamic range, and low power consumption of the simulated retina peripheral sampling circuit, thereby addressing the difficulty traditional cameras have in achieving high-precision detection in high-speed motion, over-illuminated, and low-illumination scenes and improving detection precision in extreme scenes.
The following are apparatus embodiments of the present invention, which may be used to perform the method embodiments of the present invention. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the present invention.
Referring to fig. 7, a schematic structural diagram of a retina-like object detection device according to an exemplary embodiment of the present invention is shown. The retina-like object detection device may be implemented as all or part of a terminal by software, hardware, or a combination of both. The device 1 comprises a signal acquisition module 10, a space-time synchronization module 20 and a detection result output module 30.
A signal acquisition module 10 for acquiring a target pulse array signal; wherein,
The target pulse array signal comprises a first pulse array signal from a simulated retina fovea sampling circuit and a second pulse array signal from a simulated retina periphery sampling circuit;
The space-time synchronization module 20 is configured to perform space-time synchronization on the first pulse array signal and the second pulse array signal, so as to obtain a pulse array signal to be processed;
The detection result output module 30 is configured to input the pulse array signal to be processed into a pre-trained target detector, and output a target detection result corresponding to the pulse array signal to be processed.
It should be noted that when the retina-like target detection device provided in the above embodiment performs the retina-like target detection method, the division into the above functional modules is only an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the retina-like target detection device provided in the above embodiment and the retina-like target detection method embodiments belong to the same concept; the detailed implementation process is embodied in the method embodiments and is not repeated here.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The present invention also provides a computer readable medium having stored thereon program instructions which, when executed by a processor, implement the retina-like object detection method provided by the above-described respective method embodiments. The invention also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the retina-like object detection method of the above-described method embodiments.
Referring to fig. 8, a schematic structural diagram of a terminal is provided in an embodiment of the present application. As shown in fig. 8, terminal 1000 can include: at least one processor 1001, at least one network interface 1004, a user interface 1003, a memory 1005, at least one communication bus 1002.
Wherein the communication bus 1002 is used to enable connected communication between these components.
The user interface 1003 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 1003 may further include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 1001 may include one or more processing cores. Using various interfaces and lines, the processor 1001 connects the various parts of the electronic device 1000, and it performs the functions of the electronic device 1000 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1005 and by invoking data stored in the memory 1005. Alternatively, the processor 1001 may be implemented in at least one hardware form among Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1001 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU renders and draws the content to be displayed on the display screen; the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 1001 and may instead be implemented by a separate chip.
The memory 1005 may include Random Access Memory (RAM) or Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable storage medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 8, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a retina-like target detection application.
In terminal 1000 shown in fig. 8, user interface 1003 is mainly used for providing an input interface for a user, and acquiring data input by the user; and the processor 1001 may be configured to call the retina-like object detection application stored in the memory 1005, and specifically perform the following operations:
Acquiring a target pulse array signal; wherein,
The target pulse array signal comprises a first pulse array signal from a simulated retina fovea sampling circuit and a second pulse array signal from a simulated retina periphery sampling circuit;
performing space-time synchronization on the first pulse array signal and the second pulse array signal to obtain a pulse array signal to be processed;
Inputting the pulse array signal to be processed into a pre-trained target detector, and outputting a target detection result corresponding to the pulse array signal to be processed.
In one embodiment, the processor 1001, when performing the spatiotemporal synchronization of the first pulse array signal with the second pulse array signal, specifically performs the following operations:
synchronizing the time domain of the first pulse array signal and the time domain of the second pulse array signal by adopting synchronous triggering acquisition software;
And constructing a mapping relation between the first pulse array signal and the second pulse array signal by adopting a spatial-transformation-based method, so as to spatially synchronize the first pulse array signal and the second pulse array signal.
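One concrete way to realise such a spatial mapping is a planar homography between the two sensors' pixel planes. This is an assumption for illustration (the application does not fix the transform), and the 3x3 matrix would typically be estimated offline from a calibration target:

```python
import numpy as np

def warp_points(h_mat, pts):
    """Map Nx2 pixel coordinates from one sensor's plane to the other
    through a 3x3 homography `h_mat` (homogeneous coordinates, then
    perspective divide)."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))]) @ h_mat.T
    return homog[:, :2] / homog[:, 2:3]
```

With the mapping in hand, each pulse of the second signal can be reassigned to the pixel grid of the first, completing the spatial synchronization.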
In one embodiment, the processor 1001, when performing inputting the pulse array signal to be processed into the pre-trained target detector and outputting the target detection result corresponding to the pulse array signal to be processed, specifically performs the following operations:
The time domain aggregation characterization module performs time domain modeling according to the pulse array signal to be processed to obtain a first feature map sequence and a second feature map sequence;
the dynamic interaction fusion module carries out complementary fusion according to the first characteristic diagram sequence and the second characteristic diagram sequence to obtain a target detection result corresponding to the pulse array signal to be processed; wherein,
The fusion mode of complementary fusion is a characteristic addition mode, a characteristic splicing mode or a network model signal interaction mode.
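The two simple fusion modes named above (feature addition and feature splicing) can be sketched as follows; the channel-axis convention is an illustrative assumption:

```python
import numpy as np

def fuse(feat_a, feat_b, mode="add"):
    """Complementary fusion of two feature maps: 'add' sums them
    element-wise (shapes must match), 'concat' splices them along the
    channel axis for a downstream layer to mix."""
    if mode == "add":
        return feat_a + feat_b
    if mode == "concat":
        return np.concatenate([feat_a, feat_b], axis=1)
    raise ValueError(f"unknown fusion mode: {mode}")
```

The third mode, network-model signal interaction, would instead exchange intermediate activations between the two branches and is omitted here.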
In one embodiment, when executing the time domain aggregation characterization module to perform time domain modeling according to the pulse array signal to be processed, the processor 1001 specifically performs the following operations to obtain a first feature map sequence and a second feature map sequence:
the signal dividing sub-module dynamically divides the first synchronous pulse array signal and the second synchronous pulse array signal to obtain divided pulse array signals;
The characteristic characterization submodule carries out characteristic coding on the divided pulse array signals to obtain a plurality of coding characteristics in the time domain;
And the information mining sub-module performs time domain modeling according to the plurality of coding features on the time domain to obtain a first feature map sequence and a second feature map sequence.
In one embodiment, the processor 1001 performs the following operations when performing the dynamic division of the first synchronous pulse array signal and the second synchronous pulse array signal to obtain the divided pulse array signal:
respectively acquiring signal change information of each of the first synchronous pulse array signal and the second synchronous pulse array signal in a time domain to obtain first signal change information and second signal change information;
Dynamically adjusting a signal division threshold according to the first signal change information to obtain an adjusted first signal division threshold;
Dynamically adjusting the signal dividing threshold according to the second signal change information to obtain an adjusted second signal dividing threshold;
Dynamically dividing the first synchronous pulse array signal by adopting the adjusted first signal dividing threshold value to obtain a divided first pulse array signal;
and dynamically dividing the second synchronous pulse array signal by adopting the adjusted second signal dividing threshold value to obtain a divided second pulse array signal.
In one embodiment, the processor 1001 performs the following operations when performing feature encoding on the divided pulse array signal to obtain a plurality of encoded features in the time domain:
reconstructing the divided first pulse array signal according to a timestamp with a preset fixed frequency in a time domain to obtain a reconstructed image sequence;
mapping, transposing and affine transforming the reconstructed image sequence to obtain a plurality of first coding features in the time domain;
and carrying out space transformation on the divided second pulse array signals to obtain a plurality of second coding features in the time domain.
Those skilled in the art will appreciate that all or part of the methods in the above embodiments may be implemented by a computer program instructing related hardware. The retina-like target detection program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
The foregoing disclosure is illustrative of the present application and is not to be construed as limiting the scope of the application, which is defined by the appended claims.

Claims (5)

1. A method of retinal-like target detection, the method comprising:
Acquiring a target pulse array signal; wherein,
The target pulse array signal comprises a first pulse array signal from a simulated retina fovea sampling circuit and a second pulse array signal from a simulated retina periphery sampling circuit;
performing space-time synchronization on the first pulse array signal and the second pulse array signal to obtain a pulse array signal to be processed;
inputting a pulse array signal to be processed into a pre-trained target detector, and outputting a target detection result corresponding to the pulse array signal to be processed; wherein,
The pre-trained target detector comprises a time domain aggregation characterization module and a dynamic interaction fusion module;
Inputting the pulse array signal to be processed into a pre-trained target detector, and outputting a target detection result corresponding to the pulse array signal to be processed, wherein the method comprises the following steps:
The time domain aggregation characterization module performs time domain modeling according to the pulse array signal to be processed to obtain a first characteristic diagram sequence and a second characteristic diagram sequence;
The dynamic interaction fusion module performs complementary fusion according to the first feature map sequence and the second feature map sequence to obtain a target detection result corresponding to the pulse array signal to be processed; wherein,
The fusion mode of the complementary fusion is a characteristic addition mode, a characteristic splicing mode or a network model signal interaction mode; wherein,
The time domain aggregation characterization module comprises a signal dividing sub-module, a characteristic characterization sub-module and an information mining sub-module; the pulse array signal to be processed comprises a first synchronous pulse array signal and a second synchronous pulse array signal;
the time domain aggregation characterization module performs time domain modeling according to the pulse array signal to be processed to obtain a first feature map sequence and a second feature map sequence, and the time domain aggregation characterization module comprises:
The signal dividing sub-module dynamically divides the first synchronous pulse array signal and the second synchronous pulse array signal to obtain divided pulse array signals;
the characteristic characterization submodule performs characteristic coding on the divided pulse array signals to obtain a plurality of coding characteristics in the time domain;
the information mining submodule carries out time domain modeling according to a plurality of coding features on a time domain to obtain a first feature map sequence and a second feature map sequence; wherein,
The dynamically dividing the first synchronous pulse array signal and the second synchronous pulse array signal to obtain divided pulse array signals comprises the following steps:
respectively acquiring signal change information of the first synchronous pulse array signal and the second synchronous pulse array signal in a time domain to obtain first signal change information and second signal change information;
Dynamically adjusting a signal division threshold according to the first signal change information to obtain an adjusted first signal division threshold;
dynamically adjusting a signal division threshold according to the second signal change information to obtain an adjusted second signal division threshold;
Dynamically dividing the first synchronous pulse array signal by adopting an adjusted first signal dividing threshold value to obtain a divided first pulse array signal;
dynamically dividing the second synchronous pulse array signal by adopting an adjusted second signal dividing threshold value to obtain a divided second pulse array signal; wherein,
The feature encoding of the divided pulse array signal to obtain a plurality of encoding features in the time domain comprises the following steps:
reconstructing the divided first pulse array signal according to a timestamp with a preset fixed frequency in a time domain to obtain a reconstructed image sequence;
mapping, transposing and affine transforming the reconstructed image sequence to obtain a plurality of first coding features in the time domain;
Performing space transformation on the divided second pulse array signals to obtain a plurality of second coding features in the time domain; wherein,
The simulated fovea sampling circuit is an integral visual sampling model, and the integral visual sampling model represents visual texture information by using an asynchronous pulse array signal; the simulated retina periphery sampling circuit is a differential visual sampling model, and the differential visual sampling model represents scene dynamic information by using an asynchronous pulse array signal; wherein the integral visual sampling model is an integrate-and-fire sampling-neuron model.
2. The method of claim 1, wherein the spatio-temporal synchronization comprises time domain synchronization and spatial synchronization;
The performing space-time synchronization of the first pulse array signal and the second pulse array signal includes:
performing time domain synchronization on the first pulse array signal and the second pulse array signal by adopting synchronous triggering acquisition software;
And constructing a mapping relation between the first pulse array signal and the second pulse array signal by adopting a spatial-transformation-based method, so as to spatially synchronize the first pulse array signal and the second pulse array signal.
3. A retinal-like object detection device, the device comprising:
The signal acquisition module is used for acquiring a target pulse array signal; wherein,
The target pulse array signal comprises a first pulse array signal from a simulated retina fovea sampling circuit and a second pulse array signal from a simulated retina periphery sampling circuit;
The space-time synchronization module is used for performing space-time synchronization on the first pulse array signal and the second pulse array signal to obtain a pulse array signal to be processed;
The detection result output module is used for inputting the pulse array signal to be processed into a pre-trained target detector and outputting a target detection result corresponding to the pulse array signal to be processed; wherein,
The pre-trained target detector comprises a time domain aggregation characterization module and a dynamic interaction fusion module;
The detection result output module is specifically used for:
The time domain aggregation characterization module performs time domain modeling according to the pulse array signal to be processed to obtain a first characteristic diagram sequence and a second characteristic diagram sequence;
The dynamic interaction fusion module performs complementary fusion according to the first feature map sequence and the second feature map sequence to obtain a target detection result corresponding to the pulse array signal to be processed; wherein,
The fusion mode of the complementary fusion is a characteristic addition mode, a characteristic splicing mode or a network model signal interaction mode; wherein,
The time domain aggregation characterization module comprises a signal dividing sub-module, a characteristic characterization sub-module and an information mining sub-module; the pulse array signal to be processed comprises a first synchronous pulse array signal and a second synchronous pulse array signal;
the time domain aggregation characterization module performs time domain modeling according to the pulse array signal to be processed to obtain a first feature map sequence and a second feature map sequence, and the time domain aggregation characterization module comprises:
The signal dividing sub-module dynamically divides the first synchronous pulse array signal and the second synchronous pulse array signal to obtain divided pulse array signals;
the characteristic characterization submodule performs characteristic coding on the divided pulse array signals to obtain a plurality of coding characteristics in the time domain;
the information mining submodule carries out time domain modeling according to a plurality of coding features on a time domain to obtain a first feature map sequence and a second feature map sequence; wherein,
The dynamically dividing the first synchronous pulse array signal and the second synchronous pulse array signal to obtain divided pulse array signals comprises the following steps:
respectively acquiring signal change information of the first synchronous pulse array signal and the second synchronous pulse array signal in a time domain to obtain first signal change information and second signal change information;
Dynamically adjusting a signal division threshold according to the first signal change information to obtain an adjusted first signal division threshold;
dynamically adjusting a signal division threshold according to the second signal change information to obtain an adjusted second signal division threshold;
Dynamically dividing the first synchronous pulse array signal by adopting an adjusted first signal dividing threshold value to obtain a divided first pulse array signal;
dynamically dividing the second synchronous pulse array signal by adopting an adjusted second signal dividing threshold value to obtain a divided second pulse array signal; wherein,
The feature encoding of the divided pulse array signal to obtain a plurality of encoding features in the time domain comprises the following steps:
reconstructing the divided first pulse array signal according to a timestamp with a preset fixed frequency in a time domain to obtain a reconstructed image sequence;
mapping, transposing and affine transforming the reconstructed image sequence to obtain a plurality of first coding features in the time domain;
Performing space transformation on the divided second pulse array signals to obtain a plurality of second coding features in the time domain; wherein,
The simulated fovea sampling circuit is an integral visual sampling model, and the integral visual sampling model represents visual texture information by using an asynchronous pulse array signal; the simulated retina periphery sampling circuit is a differential visual sampling model, and the differential visual sampling model represents scene dynamic information by using an asynchronous pulse array signal; wherein the integral visual sampling model is an integrate-and-fire sampling-neuron model.
4. A computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method of any of claims 1-2.
5. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method according to any of claims 1-2.
CN202210565522.0A 2022-05-23 2022-05-23 Retina-imitating target detection method and device, storage medium and terminal Active CN115048954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210565522.0A CN115048954B (en) 2022-05-23 2022-05-23 Retina-imitating target detection method and device, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN115048954A CN115048954A (en) 2022-09-13
CN115048954B true CN115048954B (en) 2024-07-23

Family

ID=83158742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210565522.0A Active CN115048954B (en) 2022-05-23 2022-05-23 Retina-imitating target detection method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN115048954B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115497028B (en) * 2022-10-10 2023-11-07 中国电子科技集团公司信息科学研究院 Event-driven-based dynamic hidden target detection and recognition method and device
CN118037879A (en) * 2022-11-02 2024-05-14 脉冲视觉(北京)科技有限公司 Time sequence signal processing and image reconstruction method, device, equipment and medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110427823A (en) * 2019-06-28 2019-11-08 北京大学 Joint objective detection method and device based on video frame and pulse array signals
CN111709967A (en) * 2019-10-28 2020-09-25 北京大学 Target detection method, target tracking device and readable storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN111275742B (en) * 2020-01-19 2022-01-11 北京大学 Target identification method, device and system and computer readable storage medium
CN113014805B (en) * 2021-02-08 2022-05-20 北京大学 Combined sampling method and device for simulating fovea and periphery of retina
CN113034542B (en) * 2021-03-09 2023-10-10 北京大学 Moving target detection tracking method


Also Published As

Publication number Publication date
CN115048954A (en) 2022-09-13

Similar Documents

Publication Publication Date Title
CN115048954B (en) Retina-imitating target detection method and device, storage medium and terminal
US11861873B2 (en) Event camera-based gaze tracking using neural networks
EP3992846A1 (en) Action recognition method and apparatus, computer storage medium, and computer device
CN106355153A (en) Virtual object display method, device and system based on augmented reality
WO2021098338A1 (en) Model training method, media information synthesizing method, and related apparatus
CN107850936A (en) For the method and system for the virtual display for providing physical environment
CN102932638B (en) 3D video monitoring method based on computer modeling
CN104063871B (en) The image sequence Scene Segmentation of wearable device
CN114245007B (en) High-frame-rate video synthesis method, device, equipment and storage medium
CN113554726B (en) Image reconstruction method and device based on pulse array, storage medium and terminal
WO2022103877A1 (en) Realistic audio driven 3d avatar generation
JP2022537817A (en) Fast hand meshing for dynamic occlusion
US9161012B2 (en) Video compression using virtual skeleton
CN106331823A (en) Video playing method and device
EP3058926A1 (en) Method of transforming visual data into acoustic signals and aid device for visually impaired or blind persons
WO2022041182A1 (en) Method and device for making music recommendation
CN116778058B (en) Intelligent interaction system of intelligent exhibition hall
CN117095096A (en) Personalized human body digital twin model creation method, online driving method and device
WO2020234939A1 (en) Information processing device, information processing method, and program
CN116109974A (en) Volumetric video display method and related equipment
KR102613032B1 (en) Control method of electronic apparatus for providing binocular rendering based on depth map matching field of view of user
EP4344227A1 (en) Video frame interpolation method and apparatus, and device
US11823343B1 (en) Method and device for modifying content according to various simulation characteristics
CN118296682B (en) Digital twin construction method and system based on WEB configuration
US20240127538A1 (en) Scene understanding using occupancy grids

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant