CN111814811A - Image information extraction method, training method and device, medium and electronic equipment


Info

Publication number
CN111814811A
Authority
CN
China
Prior art keywords
image
information
training
descriptor
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010817705.8A
Other languages
Chinese (zh)
Other versions
CN111814811B (en)
Inventor
樊欢欢
李姬俊男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010817705.8A priority Critical patent/CN111814811B/en
Publication of CN111814811A publication Critical patent/CN111814811A/en
Application granted granted Critical
Publication of CN111814811B publication Critical patent/CN111814811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an image information extraction method, a training method for an image information extraction network, an image information extraction device, a training device for an image information extraction network, a computer-readable storage medium, and an electronic device, and relates to the technical field of image processing. The image information extraction method includes: acquiring an image, and extracting first feature point information and first descriptor information of the image; determining intermediate image information by using the first feature point information and the first descriptor information of the image, wherein the intermediate image information contains geometric topology information of the feature points of the image; and extracting second feature point information and second descriptor information of the image based on the intermediate image information. The present disclosure can improve the accuracy of extracting feature points and descriptors.

Description

Image information extraction method, training method and device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image information extraction method, an image information extraction network training method, an image information extraction device, an image information extraction network training device, a computer-readable storage medium, and an electronic device.
Background
With the development of terminal technology, functions such as positioning, AR (Augmented Reality), and autonomous driving can be realized on a terminal. To support these functions, the terminal generally needs to perform a series of processing steps on images. In image processing, the extraction of feature points and descriptors is particularly important, and it directly affects whether upper-layer applications can run smoothly.
However, in real scenes, under the influence of illumination, scale, and other factors, the extracted feature points and descriptors are often not accurate enough, and the image processing results are therefore unsatisfactory.
Disclosure of Invention
The present disclosure provides an image information extraction method, a training method of an image information extraction network, an image information extraction device, a training device of an image information extraction network, a computer-readable storage medium, and an electronic device, thereby overcoming, at least to some extent, the problem that extracted feature points and descriptors are inaccurate.
According to a first aspect of the present disclosure, there is provided an image information extraction method including: acquiring an image, and extracting first feature point information and first descriptor information of the image; determining intermediate image information by using the first feature point information and the first descriptor information of the image, wherein the intermediate image information contains geometric topology information of the feature points of the image; and extracting second feature point information and second descriptor information of the image based on the intermediate image information.
According to a second aspect of the present disclosure, there is provided a training method of an image information extraction network, including: acquiring an image information extraction network and a training set, wherein the training set comprises a plurality of image pairs, each image pair comprising a first training image and a second training image belonging to the same scene; inputting the first training image into the image information extraction network to obtain feature point information and descriptor information of the first training image, wherein the descriptor information of the first training image comprises geometric topology information of the feature points of the first training image; inputting the second training image into the image information extraction network to obtain feature point information and descriptor information of the second training image, wherein the descriptor information of the second training image comprises geometric topology information of the feature points of the second training image; and training the image information extraction network by using a loss function according to the feature point information and the descriptor information of the first training image and the feature point information and the descriptor information of the second training image.
According to a third aspect of the present disclosure, there is provided an image information extraction apparatus including: a first extraction module configured to acquire an image and extract first feature point information and first descriptor information of the image; an image information determination module configured to determine intermediate image information by using the first feature point information and the first descriptor information of the image, wherein the intermediate image information contains geometric topology information of the feature points of the image; and a second extraction module configured to extract second feature point information and second descriptor information of the image based on the intermediate image information.
According to a fourth aspect of the present disclosure, there is provided a training apparatus for an image information extraction network, including: a network acquisition module configured to acquire an image information extraction network and a training set, wherein the training set comprises a plurality of image pairs, each image pair comprising a first training image and a second training image belonging to the same scene; a first information extraction module configured to input the first training image into the image information extraction network to obtain feature point information and descriptor information of the first training image, wherein the descriptor information of the first training image comprises geometric topology information of the feature points of the first training image; a second information extraction module configured to input the second training image into the image information extraction network to obtain feature point information and descriptor information of the second training image, wherein the descriptor information of the second training image comprises geometric topology information of the feature points of the second training image; and a training module configured to train the image information extraction network by using a loss function according to the feature point information and the descriptor information of the first training image and the feature point information and the descriptor information of the second training image.
According to a fifth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image information extraction method or the training method of the image information extraction network described above.
According to a sixth aspect of the present disclosure, there is provided an electronic device comprising a processor; a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the image information extraction method or the training method of the image information extraction network described above.
In the technical solutions provided by some embodiments of the present disclosure, first feature point information and first descriptor information of an image are extracted, intermediate image information containing geometric topology information is determined by using the first feature point information and the first descriptor information, and second feature point information and second descriptor information of the image are extracted based on the intermediate image information. On the one hand, the geometric topology information of the feature points is determined during the extraction of feature points and descriptors, and combining this geometric topology information overcomes, to a certain extent, the influence of illumination changes, scale changes, temporal changes, and the like, so the accuracy of the extracted feature points and descriptors can be improved. On the other hand, with more accurate feature points and descriptors, the accuracy of upper-layer application algorithms, such as positioning, AR, and autonomous driving, is also improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
Fig. 1 shows a schematic diagram of an exemplary system architecture for an image information extraction scheme of an embodiment of the present disclosure;
Fig. 2 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure;
Fig. 3 schematically shows a flowchart of an image information extraction method according to an exemplary embodiment of the present disclosure;
Fig. 4 shows a network structure diagram of an image information extraction network according to an exemplary embodiment of the present disclosure;
Fig. 5 shows an effect diagram of an image information extraction scheme to which exemplary embodiments of the present disclosure are applied;
Fig. 6 schematically shows a flowchart of a training method of an image information extraction network according to an exemplary embodiment of the present disclosure;
Fig. 7 schematically shows a block diagram of an image information extraction apparatus according to an exemplary embodiment of the present disclosure;
Fig. 8 schematically shows a block diagram of a training apparatus of an image information extraction network according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation. In addition, all of the following terms "first", "second", and "third" are used for distinguishing purposes only, and should not be construed as limiting the present disclosure.
Fig. 1 shows a schematic diagram of an exemplary system architecture of an image information extraction scheme of an embodiment of the present disclosure.
As shown in fig. 1, the system architecture 1000 may include one or more of terminal devices 1001, 1002, 1003, a network 1004, and a server 1005. The network 1004 is used to provide a medium for communication links between the terminal devices 1001, 1002, 1003 and the server 1005. Network 1004 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 1005 may be a server cluster composed of a plurality of servers.
A user may use the terminal devices 1001, 1002, 1003 to interact with a server 1005 via a network 1004 to receive or transmit messages or the like. The terminal devices 1001, 1002, 1003 may be various electronic devices having a display screen, including but not limited to smart phones, AR devices, tablet computers, laptop and desktop computers, and the like.
It should be noted that the image information extraction method and the training method of the image information extraction network of the exemplary embodiments of the present disclosure may be performed entirely by the terminal devices 1001, 1002, 1003; that is, the terminal devices 1001, 1002, 1003 may perform each step of both methods. Alternatively, each step of the image information extraction method and the training method of the image information extraction network may be performed entirely by the server 1005.
Alternatively, the server 1005 may execute the training method of the image information extraction network of the exemplary embodiment of the present disclosure, and after obtaining the trained image information extraction network, the server 1005 may transmit the network to the terminal devices 1001, 1002, 1003, or actively download the network from the server 1005 by the terminal devices 1001, 1002, 1003. Next, the process of image information extraction of the exemplary embodiment of the present disclosure is implemented by the terminal devices 1001, 1002, 1003 using the network.
Alternatively, the training process of the network may be implemented by one terminal device, and then the terminal device sends the trained network to another terminal device, and the other terminal device performs the steps of image information extraction.
For ease of explanation, the following description takes as an example the case where the terminal devices 1001, 1002, 1003 execute the image information extraction method and the server 1005 executes the training method of the image information extraction network. However, as is clear from the above, the present disclosure does not limit which end executes the image information extraction method or the training method of the image information extraction network.
The process for image information extraction:
First, the terminal devices 1001, 1002, and 1003 may acquire an image and extract first feature point information and first descriptor information of the image; next, the terminal devices 1001, 1002, 1003 may determine intermediate image information containing the geometric topology information of the feature points of the image, using the first feature point information and the first descriptor information; then, the terminal devices 1001, 1002, 1003 may extract the second feature point information and the second descriptor information of the image based on the intermediate image information.
After the second feature point information and the second descriptor information of the image are obtained, the terminal devices 1001, 1002, and 1003 may further match similar pixels between images by using the second feature point information and the second descriptor information, and calculate the pose relationship between the images from the matching result in combination with a PnP (Perspective-n-Point) algorithm, thereby implementing visual positioning. In particular, this can be applied to scenes such as AR, navigation positioning, and assisted driving.
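By way of non-limiting illustration, the sketch below shows how such matched points could be handed to OpenCV's PnP solver; all inputs, names, and values here are hypothetical assumptions introduced for illustration and are not part of the present disclosure.

```python
import cv2
import numpy as np

# Hypothetical inputs (not from the disclosure): 3D map points and the 2D
# pixels they were matched to in the query image via descriptor matching.
object_points = np.random.rand(20, 3).astype(np.float32)         # 3D points, world frame
image_points = (np.random.rand(20, 2) * 640).astype(np.float32)  # matched 2D pixels
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])  # assumed pinhole intrinsics
dist_coeffs = np.zeros(5)                    # assume no lens distortion

# Solve the Perspective-n-Point problem with RANSAC to reject outlier matches.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    print("camera pose:", rotation, tvec.ravel())
```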
In the case where the image information extraction method of the present disclosure is implemented by the terminal device 1001, 1002, 1003, an image information extraction means described below may be configured in the terminal device 1001, 1002, 1003.
The process of extracting network training for image information:
First, the server 1005 may obtain an image information extraction network and a training set (also referred to as training data). The training set includes a plurality of image pairs, each image pair including a first training image and a second training image that belong to the same scene; for example, they may be two images of the same scene captured at different times or from different shooting angles. Next, the server 1005 may input the first training image into the image information extraction network to obtain feature point information and descriptor information of the first training image, where the descriptor information includes geometric topology information of the feature points of the first training image; similarly, the server 1005 may input the second training image into the image information extraction network to obtain feature point information and descriptor information of the second training image, where the descriptor information includes geometric topology information of the feature points of the second training image. Then, the server 1005 may train the image information extraction network with a loss function according to the feature point information and descriptor information of the first training image and those of the second training image, obtaining the trained image information extraction network after convergence.
In the case where the training method of the image information extraction network of the present disclosure is implemented by the server 1005, a training device of the image information extraction network described below may be configured in the server 1005.
FIG. 2 shows a schematic diagram of an electronic device suitable for use in implementing exemplary embodiments of the present disclosure. The terminal device described above may be configured in the form of an electronic device as shown in fig. 2. It should be noted that the electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device of the present disclosure includes at least a processor and a memory for storing one or more programs, which when executed by the processor, cause the processor to implement the image information extraction method or the training method of the image information extraction network of the exemplary embodiments of the present disclosure.
Specifically, as shown in Fig. 2, the electronic device 200 may include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display screen 290, a camera module 291, an indicator 292, a motor 293, keys 294, and a Subscriber Identity Module (SIM) card interface 295, among others. The sensor module 280 may include a depth sensor, a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 200. In other embodiments of the present application, the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as: the Processor 210 may include an Application Processor (AP), a modem Processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural Network Processor (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors. Additionally, a memory may be provided in processor 210 for storing instructions and data.
The electronic device 200 may implement a shooting function through the ISP, the camera module 291, the video codec, the GPU, the display screen 290, the application processor, and the like. In some embodiments, the electronic device 200 may include 1 or N camera modules 291, where N is a positive integer greater than 1, and if the electronic device 200 includes N cameras, one of the N cameras is a main camera.
Internal memory 221 may be used to store computer-executable program code, including instructions. The internal memory 221 may include a program storage area and a data storage area. The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 200.
The present disclosure also provides a computer-readable storage medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device.
A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In contrast, a computer readable signal medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The computer-readable storage medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the embodiments below.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Fig. 3 schematically shows a flowchart of an image information extraction method of an exemplary embodiment of the present disclosure. Referring to fig. 3, the image information extraction method may include the steps of:
S32: acquiring an image, and extracting first feature point information and first descriptor information of the image.
The image in the exemplary embodiment of the present disclosure may be an image captured by the terminal device through the camera module thereof. For example, for an AR scene, when the terminal device performs a mapping operation or a repositioning operation, the terminal device needs to start the camera module to acquire an image in the scene, and the acquired image may be used as an image to be subjected to information extraction in the present disclosure.
In addition, the image in the exemplary embodiment of the present disclosure may also be an image acquired by the terminal device from another device or a server. For example, in a scene requiring image retrieval, the terminal device downloads an image from the server, and through the subsequent steps of the scheme, the image information of the image can be obtained, and the same or similar image is searched by using the image information, so as to realize the image retrieval.
After the image is acquired, the terminal device may extract first feature point information and first descriptor information of the image. The process of extracting the first feature point information and the first descriptor information can be realized in a neural network mode.
Specifically, the terminal device may extract first feature point information of the image by using a first neural network, and may extract first descriptor information of the image by using a second neural network. The number of network layers of the first neural network is smaller than that of the second neural network.
The first Neural network may be a shallow CNN (Convolutional Neural network). The number of network layers of the first neural network is smaller, and the extracted first feature point information contains more image texture information, that is, in the embodiment of the present disclosure, the first feature point information may also be represented as image texture information. Specifically, the first feature point information includes coordinate information of the feature point.
The second neural network may be a deep CNN. The number of network layers of the second neural network is large, and the extracted first descriptor information can sufficiently represent the context semantic information of the image, that is, in the embodiment of the present disclosure, the first descriptor information can also be represented as the context semantic information of the image, and can also be referred to as the semantic information of the image.
Although the above description takes a shallow CNN as the first neural network and a deep CNN as the second neural network, other models may also be used to extract the first feature point information and the first descriptor information; the present disclosure is not limited in this regard.
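By way of non-limiting illustration, the following PyTorch sketch shows what such a two-branch front end could look like; the layer counts, channel widths, and class name are assumptions introduced here and are not specified by the present disclosure.

```python
import torch
import torch.nn as nn

class FirstStageExtractor(nn.Module):
    """Illustrative two-branch front end: a shallow CNN for first feature point
    information (texture) and a deeper CNN for first descriptor information
    (contextual semantics). All depths and widths are assumptions."""

    def __init__(self):
        super().__init__()
        # First neural network: few layers, preserves local texture detail.
        self.point_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Second neural network: more layers and a larger receptive field,
        # so its output captures contextual semantic information.
        self.desc_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
        )

    def forward(self, image):
        first_points = self.point_branch(image)  # first feature point information
        first_descs = self.desc_branch(image)    # first descriptor information
        return first_points, first_descs

# Usage sketch on a hypothetical 256x256 RGB image.
points, descs = FirstStageExtractor()(torch.rand(1, 3, 256, 256))
```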
S34: determining intermediate image information by using the first feature point information and the first descriptor information of the image, wherein the intermediate image information contains geometric topology information of the feature points of the image.
After obtaining the first feature point information and the first descriptor information of the image, the terminal device may derive the geometric topology information of the feature points from them. The geometric topology information described in the present disclosure characterizes the geometric relationships among feature points, e.g., their relative positions, whether they lie on the same straight line, whether they are coplanar, and the like.
In an exemplary embodiment of the present disclosure, a Graph Neural Network (GNN) may be used to obtain the geometric topology information of the feature points. A graph neural network is an algorithmic model that applies deep learning to graph structures. The graph (Graph) being processed generally refers to a topological graph consisting of nodes and edges, rather than an image consisting of pixels. Whereas a pixel image mainly carries visual information, a topological graph mainly carries relationship information, and a graph neural network is a model for processing the relationship information in a topological graph.
Specifically, the terminal device may input the first feature point information and the first descriptor information of the image into the graph neural network as nodes of the graph neural network. Next, the graph neural network may determine an edge relationship between the nodes, i.e., geometric topology information characterizing the feature points.
Subsequently, intermediate image information may be generated in conjunction with the geometric topology information of the feature points. That is, the intermediate image information output by the graph neural network contains the geometric topology information of the determined feature points. It is easily understood that the intermediate image information may further include the first feature point information and the first descriptor information determined by step S32.
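A minimal sketch of one graph message-passing step is given below, assuming an attention-style soft edge weighting; the present disclosure specifies only that the nodes are built from the first feature point information and the first descriptor information and that the edge relationships carry the geometric topology information, so the specific layer below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    """One round of message passing over feature-point nodes; the soft
    pairwise edge weighting below is an assumed, illustrative choice."""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, nodes):
        # nodes: (N, dim), one node per feature point.
        scores = nodes @ nodes.t() / nodes.shape[1] ** 0.5  # pairwise affinities
        edges = torch.softmax(scores, dim=-1)               # soft edge relations
        messages = edges @ self.msg(nodes)                  # aggregate neighbors
        return self.update(torch.cat([nodes, messages], dim=-1))

# Usage sketch: each node concatenates a feature point's coordinates with its
# descriptor (hypothetical sizes: 2 + 254 = 256 dimensions).
coords = torch.rand(100, 2)
descs = torch.rand(100, 254)
nodes = torch.cat([coords, descs], dim=-1)
intermediate = SimpleGraphLayer(256)(nodes)  # carries geometric topology information
```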
S36: extracting second feature point information and second descriptor information of the image based on the intermediate image information.
After generating the intermediate image information based on the graph neural network, the terminal device may extract the second feature point information and the second descriptor information of the image based on the intermediate image information, and output as the image information.
Aiming at the process of extracting the second feature point information:
First, the terminal device may determine intermediate feature points from the intermediate image information. The intermediate feature points are the feature points contained in the intermediate image information. In some embodiments, they are the feature points corresponding to the first feature point information. In other embodiments, they may be the feature points remaining after a screening operation is applied to the feature points corresponding to the first feature point information; the screening operation may be performed by the terminal device according to the relative positional relationships of the feature points, or manually, which the present disclosure does not limit. In still other embodiments, as those skilled in the art will appreciate, the determined geometric topology information may increase the number of valid feature points, in which case the intermediate feature points may not correspond exactly to the feature points of the first feature point information.
Next, the terminal device may calculate the confidence of each intermediate feature point. The confidence is the probability that an intermediate feature point qualifies as a feature point to be output. In a neural-network-based implementation, the confidence can be computed with a Sigmoid function.
Then, second feature point information of the image can be extracted based on the confidence degrees of the intermediate feature points. In particular, the confidence of the intermediate feature points may be compared to a confidence threshold. The confidence threshold may be preset, and the present disclosure does not limit the specific value thereof. If the confidence of the intermediate feature point is greater than the confidence threshold, the intermediate feature point can be determined as a second feature point, and the second feature point information of the image can be extracted. If the confidence of the intermediate feature point is less than or equal to the confidence threshold, the intermediate feature point can be rejected.
It is understood that the second feature point information of the present disclosure may include coordinates and confidence levels of corresponding feature points.
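A hedged sketch of this selection step follows; the scoring head, the tensor shapes, and the 0.5 threshold are illustrative assumptions, since the present disclosure does not fix a threshold value.

```python
import torch
import torch.nn as nn

def select_second_feature_points(coords, node_features, score_head,
                                 confidence_threshold=0.5):
    """Confidence-based selection: keep intermediate feature points whose
    Sigmoid confidence strictly exceeds the threshold, per the text above."""
    confidence = torch.sigmoid(score_head(node_features).squeeze(-1))  # (N,)
    keep = confidence > confidence_threshold
    # Second feature point information: coordinates plus their confidences.
    return coords[keep], confidence[keep]

# Usage sketch with hypothetical shapes and an assumed linear scoring head.
coords = torch.rand(100, 2)
node_features = torch.rand(100, 256)
points, scores = select_second_feature_points(coords, node_features,
                                              nn.Linear(256, 1))
```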
For the process of extracting the second descriptor information:
the terminal device may perform dimensionality reduction (or referred to as compression) on the intermediate image information by using a third neural network to extract second descriptor information of the image. The third neural network is used for integrating and extracting information, and may be specifically configured in the form of CNN, and the present disclosure does not specifically limit the specific structure of the network.
It should be noted that the second descriptor information contains geometric topology information of feature points determined based on the graph neural network. Therefore, the extracted image information is more accurate.
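A minimal sketch of such a dimensionality reduction stage follows; the input and output dimensions, the layer form, and the final L2 normalization are assumptions introduced for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed third neural network: a small projection that compresses the
# topology-aware intermediate features into compact second descriptors.
third_network = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 128),
)

intermediate = torch.rand(100, 256)  # hypothetical intermediate image information
# L2-normalizing descriptors is a common convention, assumed here.
second_descriptors = F.normalize(third_network(intermediate), dim=-1)
```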
The neural networks described above may constitute the image information extraction network of the present solution. Fig. 4 illustrates a network structure diagram of an image information extraction network according to an exemplary embodiment of the present disclosure.
Referring to Fig. 4, the image information extraction network may include a first image information extraction structure, a geometric topology information extraction structure, and a second image information extraction structure. The first image information extraction structure includes a first neural network 41 and a second neural network 42, the geometric topology information extraction structure may include a graph neural network 43, and the second image information extraction structure may include a Sigmoid function 44 and a third neural network 45.
With reference to the network structure shown in fig. 4, first, the input image may obtain first feature point information after passing through the first neural network 41, and the input image may obtain first descriptor information after passing through the second neural network 42; next, the first feature point information and the first descriptor information are input to the graph neural network 43 as nodes of the graph neural network 43, and geometric topology information of the feature points can be learned through the graph neural network 43; subsequently, the intermediate image information containing the geometric topology information of the feature points may be respectively passed through the Sigmoid function 44 and the third neural network 45, and the obtained second feature point information and second descriptor information are the feature point information and the descriptor information to be output by the present scheme.
Fig. 5 shows an effect diagram of an image information extraction scheme to which an exemplary embodiment of the present disclosure is applied. Because the descriptor information output by the scheme contains the geometric topology information, the image information can be accurately extracted.
The following describes a training method of the image information extraction network of the present disclosure. Although the following description is made taking the example in which the server performs the training method, the training method may also be performed by the terminal device.
Fig. 6 schematically shows a flowchart of a training method of an image information extraction network of an exemplary embodiment of the present disclosure. Referring to fig. 6, the training method of the image information extraction network may include the steps of:
S62: acquiring an image information extraction network and a training set, wherein the training set comprises a plurality of image pairs, each image pair comprising a first training image and a second training image belonging to the same scene.
In an exemplary embodiment of the present disclosure, the training set includes a plurality of image pairs, each of the image pairs includes a first training image and a second training image, and the first training image and the second training image are images of the same scene, for example, two images photographed for the same scene at different times (e.g., day and night), at different photographing angles, under different illumination, and the like.
In addition to the image pairs, the training data may also include feature point pairs corresponding to each image pair; these feature point pairs serve as ground-truth values for supervised learning of the network.
As shown in Fig. 4, the image information extraction network may include a first image information extraction structure, a geometric topology information extraction structure, and a second image information extraction structure. The first image information extraction structure includes a first neural network and a second neural network, the geometric topology information extraction structure includes a graph neural network, and the second image information extraction structure includes a Sigmoid function and a third neural network.
S64: inputting the first training image into the image information extraction network to obtain feature point information and descriptor information of the first training image, wherein the descriptor information of the first training image comprises geometric topology information of the feature points of the first training image.
Specifically, the server inputs a first training image into a first image information extraction structure to obtain first feature point information and first descriptor information of the first training image; inputting first feature point information and first descriptor information of a first training image into a geometric topological information extraction structure to obtain intermediate image information of the first training image, wherein the intermediate image information of the first training image comprises geometric topological information of feature points of the first training image; and inputting the intermediate image information of the first training image into a second image information extraction structure to obtain second feature point information and second descriptor information of the first training image, wherein the second feature point information and the second descriptor information are used as feature point information and descriptor information of the first training image.
S66: inputting the second training image into the image information extraction network to obtain feature point information and descriptor information of the second training image, wherein the descriptor information of the second training image comprises geometric topology information of the feature points of the second training image.
Specifically, the server inputs a second training image into the first image information extraction structure to obtain first feature point information and first descriptor information of the second training image; inputting first feature point information and first descriptor information of a second training image into a geometric topological information extraction structure to obtain intermediate image information of the second training image, wherein the intermediate image information of the second training image comprises geometric topological information of feature points of the second training image; and inputting the intermediate image information of the second training image into a second image information extraction structure to obtain second feature point information and second descriptor information of the second training image, wherein the second feature point information and the second descriptor information are used as feature point information and descriptor information of the second training image.
In addition, the present disclosure does not limit the execution order of steps S64 and S66; the first training image and the second training image may be input into the network and processed simultaneously.
S68: training the image information extraction network by using a loss function according to the feature point information and the descriptor information of the first training image and the feature point information and the descriptor information of the second training image.
The loss function of the image information extraction network of the exemplary embodiment of the present disclosure includes a descriptor loss function and a feature point loss function.
The descriptor loss function is given as Equation 1, which appears only as an image in the original publication and is not reproduced here. In Equation 1, $L_{descriptor}$ denotes the descriptor loss function, computed from the descriptor information output by the network for the first training image (denoted as image A) and the descriptor information output by the network for the second training image (denoted as image B).
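Because Equation 1 itself is not reproduced in the text, the sketch below is only a generic stand-in (a mean squared distance pulling corresponding descriptors of image A and image B together) and must not be read as the exact formula of the present disclosure.

```python
import torch

def descriptor_loss(descs_a, descs_b):
    """Generic stand-in for Equation 1: penalize the distance between
    corresponding descriptors of image A and image B. The true form of
    Equation 1 is not given in the text of this document."""
    return (descs_a - descs_b).pow(2).sum(dim=-1).mean()
```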
The feature point loss function is shown in Equation 2:

$$L_{peaky} = 1 - \frac{1}{|\mathcal{P}|}\sum_{p\in\mathcal{P}}\Big(\max_{(i,j)\in p} S_{ij} - \operatorname{mean}_{(i,j)\in p} S_{ij}\Big) \quad \text{(Equation 2)}$$

where $L_{peaky}$ is the feature point loss function, $S_{ij}$ is the probability that pixel $(i,j)$ is an output feature point (specifically, the probability of a feature point should be greater than the probability values of its surrounding pixels), and $\mathcal{P}$ is the set of image blocks over which the loss is calculated, with P the block size; for example, a 256 × 256 image may be divided into 4 × 4 image blocks, and the loss function is calculated over these blocks.
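A hedged implementation of this blockwise peakiness objective could look like the following; the block size and score-map shape are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def peaky_loss(score_map, block=4):
    """Sketch of Equation 2: within each block, the best feature point
    probability should stand out from the block's average probability."""
    # score_map: (B, 1, H, W) feature point probabilities; cut into blocks.
    patches = F.unfold(score_map, kernel_size=block, stride=block)  # (B, block*block, L)
    peakiness = patches.max(dim=1).values - patches.mean(dim=1)     # per-block peakiness
    return (1.0 - peakiness).mean()

loss = peaky_loss(torch.rand(1, 1, 256, 256))  # hypothetical 256x256 score map
```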
In this case, the loss function of the image information extraction network may be the sum of the two losses above, expressed as Equation 3:

$$L(L_{descriptor}, L_{peaky}) = L_{descriptor} + L_{peaky} \quad \text{(Equation 3)}$$
A back-propagation process is executed based on the calculation of the loss function, and the parameters of the image information extraction network are adjusted until convergence, so as to obtain the trained image information extraction network.
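Putting the pieces together, a single training iteration might be sketched as follows; `network` is assumed to return a feature point score map and a descriptor map per image, and `descriptor_loss` and `peaky_loss` refer to the illustrative sketches above.

```python
import torch

def train_step(network, optimizer, image_a, image_b):
    """One training iteration over an image pair (illustrative sketch)."""
    scores_a, descs_a = network(image_a)
    scores_b, descs_b = network(image_b)
    # Equation 3: the total loss sums the descriptor and feature point losses
    # (applying the feature point loss to both images is an assumption here).
    loss = (descriptor_loss(descs_a, descs_b)
            + peaky_loss(scores_a) + peaky_loss(scores_b))
    optimizer.zero_grad()
    loss.backward()   # back-propagation through all network parameters
    optimizer.step()  # parameter update; repeat until convergence
    return loss.item()
```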
After obtaining the trained image information extraction network, the server may perform the process of the image information extraction method by using the network. Alternatively, the server may send the trained image information extraction network to another server or terminal device, and the other server or terminal device performs the process of the image information extraction method by using the network.
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Further, an image information extraction device is also provided in the present exemplary embodiment.
Fig. 7 schematically shows a block diagram of an image information extraction apparatus of an exemplary embodiment of the present disclosure. Referring to fig. 7, the image information extraction apparatus 7 according to an exemplary embodiment of the present disclosure may include a first extraction module 71, an image information determination module 73, and a second extraction module 75.
Specifically, the first extraction module 71 may be configured to acquire an image and extract first feature point information and first descriptor information of the image; the image information determination module 73 may be configured to determine intermediate image information by using the first feature point information and the first descriptor information of the image, wherein the intermediate image information contains geometric topology information of the feature points of the image; and the second extraction module 75 may be configured to extract second feature point information and second descriptor information of the image based on the intermediate image information.
According to an exemplary embodiment of the present disclosure, the first extraction module 71 may be configured to perform: extracting first characteristic point information of the image by adopting a first neural network; extracting first descriptor information of the image by using a second neural network; the number of network layers of the first neural network is smaller than that of the second neural network.
According to an exemplary embodiment of the present disclosure, the image information determination module 73 may be configured to perform: inputting first feature point information and first descriptor information of the image into a graph neural network as nodes of the graph neural network; determining edge relations among the nodes by using a graph neural network to obtain geometric topological information of the feature points of the image; and generating intermediate image information by combining the geometric topological information of the characteristic points of the image.
According to an exemplary embodiment of the present disclosure, the second extraction module 75 may be configured to perform: determining intermediate characteristic points from the intermediate image information; calculating the confidence coefficient of the intermediate characteristic point; and extracting second feature point information of the image based on the confidence degrees of the intermediate feature points.
According to an exemplary embodiment of the present disclosure, the process of the second extraction module 75 extracting the second feature point information of the image based on the confidence of the intermediate feature point may be configured to perform: comparing the confidence of the intermediate feature points to a confidence threshold; and if the confidence coefficient of the intermediate feature point is greater than the confidence coefficient threshold value, determining the intermediate feature point as a second feature point so as to extract second feature point information of the image.
According to an exemplary embodiment of the present disclosure, the second extraction module 75 may be configured to perform: performing dimensionality reduction processing on the intermediate image information by adopting a third neural network to extract second descriptor information of the image, wherein the second descriptor information contains geometric topology information of the feature points of the image.
Further, the present exemplary embodiment also provides a training apparatus for an image information extraction network.
Fig. 8 schematically shows a block diagram of a training apparatus of an image information extraction network of an exemplary embodiment of the present disclosure. Referring to fig. 8, the training apparatus 8 of an image information extraction network according to an exemplary embodiment of the present disclosure may include a network acquisition module 81, a first information extraction module 83, a second information extraction module 85, and a training module 87.
Specifically, the network obtaining module 81 may be configured to obtain an image information extraction network and a training set; wherein the training set comprises a plurality of image pairs, each image pair comprising a first training image and a second training image belonging to the same scene; the first information extraction module 83 may be configured to input the first training image into an image information extraction network to obtain feature point information and descriptor information of the first training image, where the descriptor information of the first training image includes geometric topology information of feature points of the first training image; the second information extraction module 85 may be configured to input the second training image into the image information extraction network to obtain feature point information and descriptor information of the second training image, where the descriptor information of the second training image includes geometric topology information of feature points of the second training image; the training module 87 may be configured to train the image information extraction network using a loss function according to the feature point information and the descriptor information of the first training image and the feature point information and the descriptor information of the second training image.
According to an exemplary embodiment of the present disclosure, an image information extraction network includes a first image information extraction structure, a geometric topology information extraction structure, and a second image information extraction structure. The first information extraction module 83 may be configured to perform: inputting a first training image into a first image information extraction structure to obtain first characteristic point information and first descriptor information of the first training image; inputting first feature point information and first descriptor information of a first training image into a geometric topological information extraction structure to obtain intermediate image information of the first training image, wherein the intermediate image information of the first training image comprises geometric topological information of feature points of the first training image; and inputting the intermediate image information of the first training image into a second image information extraction structure to obtain second feature point information and second descriptor information of the first training image, wherein the second feature point information and the second descriptor information are used as feature point information and descriptor information of the first training image.
According to an exemplary embodiment of the present disclosure, the second information extraction module 85 may be configured to perform: inputting a second training image into the first image information extraction structure to obtain first characteristic point information and first descriptor information of the second training image; inputting first feature point information and first descriptor information of a second training image into a geometric topological information extraction structure to obtain intermediate image information of the second training image, wherein the intermediate image information of the second training image comprises geometric topological information of feature points of the second training image; and inputting the intermediate image information of the second training image into a second image information extraction structure to obtain second feature point information and second descriptor information of the second training image, wherein the second feature point information and the second descriptor information are used as feature point information and descriptor information of the second training image.
Since each functional module of the image information extraction device and of the training device for the image information extraction network corresponds to the steps of the above-described method embodiments, the details are not repeated here.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied as a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions that enable a computing device (which may be a personal computer, a server, a terminal device, or a network device) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. In particular, the processes shown in these figures neither indicate nor limit their chronological order, and the processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in a single module or unit; conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. The specification and examples are to be considered exemplary only, with the true scope and spirit of the disclosure indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (12)

1. An image information extraction method, characterized by comprising:
acquiring an image, and extracting first feature point information and first descriptor information of the image;
determining intermediate image information by using first feature point information and first descriptor information of the image; wherein the intermediate image information contains geometric topological information of feature points of the image;
and extracting second feature point information and second descriptor information of the image based on the intermediate image information.
2. The image information extraction method according to claim 1, wherein extracting first feature point information and first descriptor information of the image includes:
extracting the first feature point information of the image by using a first neural network;
extracting the first descriptor information of the image by using a second neural network;
wherein the number of network layers of the first neural network is smaller than that of the second neural network.
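By way of illustration only (not part of the claims), the following sketch instantiates claim 2's asymmetry with toy PyTorch heads; the layer counts and channel widths are assumptions, the only constraint being that the feature point network stays shallower than the descriptor network.

    import torch
    import torch.nn as nn

    # Feature point network: deliberately shallow (2 conv layers).
    first_neural_network = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),              # per-pixel keypoint score
    )
    # Descriptor network: deeper (4 conv layers), since distinctive
    # descriptors usually need more capacity than a detection heat map.
    second_neural_network = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 128, 1),            # per-pixel 128-D descriptor
    )

    image = torch.randn(1, 1, 64, 64)
    keypoint_scores = first_neural_network(image)  # (1, 1, 64, 64)
    descriptors = second_neural_network(image)     # (1, 128, 64, 64)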
3. The image information extraction method according to claim 2, wherein determining intermediate image information using first feature point information and first descriptor information of the image includes:
inputting first feature point information and first descriptor information of the image into a graph neural network as nodes of the graph neural network;
determining edge relations among the nodes by using the graph neural network to obtain geometric topological information of the feature points of the image;
and generating the intermediate image information by combining the geometric topological information of the feature points of the image.
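By way of illustration only (not part of the claims), one message-passing round of a possible graph neural network for claim 3 is sketched below, assuming fully connected nodes built from feature point locations and descriptors; the edge-scoring and node-update rules are illustrative assumptions.

    import torch
    import torch.nn as nn

    class TopologyGNN(nn.Module):
        # One message-passing round over fully connected keypoint nodes.
        def __init__(self, desc_dim: int = 32):
            super().__init__()
            node_dim = desc_dim + 2                        # descriptor + (x, y)
            self.edge_score = nn.Linear(2 * node_dim, 1)   # learned edge relation
            self.update = nn.Linear(2 * node_dim, node_dim)

        def forward(self, points_xy, descriptors):
            # Nodes combine feature point locations and descriptors.
            nodes = torch.cat([points_xy, descriptors], dim=-1)  # (B, N, node_dim)
            b, n, d = nodes.shape
            pairs = torch.cat([nodes.unsqueeze(2).expand(b, n, n, d),
                               nodes.unsqueeze(1).expand(b, n, n, d)], dim=-1)
            # Edge relations among the nodes: a soft adjacency matrix.
            adjacency = torch.softmax(self.edge_score(pairs).squeeze(-1), dim=-1)
            messages = adjacency @ nodes                   # aggregate neighbours
            # Intermediate image information: each node carries geometric
            # topology gathered from every other node.
            return self.update(torch.cat([nodes, messages], dim=-1))

    gnn = TopologyGNN()
    intermediate = gnn(torch.rand(1, 50, 2), torch.randn(1, 50, 32))  # (1, 50, 34)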
4. The image information extraction method according to any one of claims 1 to 3, wherein extracting second feature point information of the image based on the intermediate image information includes:
determining intermediate feature points from the intermediate image information;
calculating the confidence of the intermediate feature points;
and extracting second feature point information of the image based on the confidence of the intermediate feature points.
5. The image information extraction method according to claim 4, wherein extracting second feature point information of the image based on the confidence of the intermediate feature point includes:
comparing the confidence of the intermediate feature points to a confidence threshold;
and if the confidence of an intermediate feature point is greater than the confidence threshold, determining that intermediate feature point as a second feature point, so as to extract the second feature point information of the image.
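By way of illustration only (not part of the claims), claims 4 and 5 amount to a confidence filter over the intermediate feature points, as in this sketch; the shapes and the 0.5 default threshold are assumptions.

    import torch

    def select_second_feature_points(points_xy, confidences, threshold=0.5):
        # points_xy:   (N, 2) intermediate feature point coordinates
        # confidences: (N,)   per-point confidence scores
        # Keep only the intermediate feature points whose confidence
        # exceeds the threshold; these become the second feature points.
        keep = confidences > threshold
        return points_xy[keep], confidences[keep]

    pts, conf = torch.rand(100, 2), torch.rand(100)
    second_pts, second_conf = select_second_feature_points(pts, conf)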
6. The image information extraction method according to any one of claims 1 to 3, wherein extracting second descriptor information of the image based on the intermediate image information includes:
performing dimensionality reduction on the intermediate image information by using a third neural network, so as to extract the second descriptor information of the image;
wherein the second descriptor information contains geometric topology information of feature points of the image.
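By way of illustration only (not part of the claims), the "third neural network" of claim 6 can be pictured as a small projection that reduces the dimensionality of the intermediate node features; the 34-to-16 sizes below are assumptions.

    import torch
    import torch.nn as nn

    # A possible third neural network: a small MLP projecting the
    # high-dimensional intermediate features down to compact second
    # descriptors. Because the intermediate features already encode the
    # graph topology, the reduced descriptors retain that geometric
    # information, as claim 6 requires.
    third_neural_network = nn.Sequential(
        nn.Linear(34, 24), nn.ReLU(),
        nn.Linear(24, 16),  # reduced descriptor dimension
    )

    intermediate = torch.randn(1, 50, 34)  # per-node intermediate information
    second_descriptors = third_neural_network(intermediate)  # (1, 50, 16)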
7. A training method for an image information extraction network, characterized by comprising:
acquiring an image information extraction network and a training set; wherein the training set comprises a plurality of image pairs, each image pair comprising a first training image and a second training image belonging to the same scene;
inputting the first training image into the image information extraction network to obtain feature point information and descriptor information of the first training image, wherein the descriptor information of the first training image comprises geometric topology information of the feature points of the first training image;
inputting the second training image into the image information extraction network to obtain feature point information and descriptor information of the second training image, wherein the descriptor information of the second training image comprises geometric topology information of the feature points of the second training image;
and training the image information extraction network by using a loss function according to the characteristic point information and the descriptor information of the first training image and the characteristic point information and the descriptor information of the second training image.
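By way of illustration only (not part of the claims), one training step consistent with claim 7 is sketched below under a Siamese assumption, i.e., the same network with shared weights processes both images of a pair; the specific loss terms are assumptions, since the claim only requires a loss over both images' feature point and descriptor outputs.

    import torch
    import torch.nn.functional as F

    def training_step(network, optimizer, first_image, second_image):
        # One optimization step: the shared-weight network processes both
        # images of a same-scene pair, and a combined loss compares the
        # two outputs. This toy version assumes the two outputs are
        # aligned point-for-point, which real pipelines would achieve via
        # known correspondences between the image pair.
        scores_a, descs_a = network(first_image)
        scores_b, descs_b = network(second_image)
        # Pull same-scene descriptors together; keep score maps consistent.
        desc_loss = 1.0 - F.cosine_similarity(descs_a, descs_b, dim=-1).mean()
        point_loss = F.mse_loss(scores_a, scores_b)
        loss = desc_loss + point_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

With the ExtractionNetwork sketch given after the device description, training_step(net, torch.optim.Adam(net.parameters(), lr=1e-3), imgs_a, imgs_b) would run one update per image pair.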
8. The method for training an image information extraction network according to claim 7, wherein the image information extraction network comprises a first image information extraction structure, a geometric topology information extraction structure, and a second image information extraction structure; and wherein inputting the first training image into the image information extraction network to obtain the feature point information and the descriptor information of the first training image comprises:
inputting the first training image into the first image information extraction structure to obtain first feature point information and first descriptor information of the first training image; inputting first feature point information and first descriptor information of the first training image into the geometric topological information extraction structure to obtain intermediate image information of the first training image, wherein the intermediate image information of the first training image comprises geometric topological information of feature points of the first training image; and inputting the intermediate image information of the first training image into the second image information extraction structure to obtain second feature point information and second descriptor information of the first training image, wherein the second feature point information and the second descriptor information are used as feature point information and descriptor information of the first training image.
9. An image information extraction apparatus characterized by comprising:
the first extraction module is used for acquiring an image and extracting first feature point information and first descriptor information of the image;
the image information determining module is used for determining intermediate image information by utilizing first feature point information and first descriptor information of the image; wherein the intermediate image information contains geometric topological information of feature points of the image;
and the second extraction module is used for extracting second feature point information and second descriptor information of the image based on the intermediate image information.
10. An apparatus for training an image information extraction network, comprising:
the network acquisition module is used for acquiring an image information extraction network and a training set; wherein the training set comprises a plurality of image pairs, each image pair comprising a first training image and a second training image belonging to the same scene;
a first information extraction module, configured to input the first training image into the image information extraction network to obtain feature point information and descriptor information of the first training image, where the descriptor information of the first training image includes geometric topology information of feature points of the first training image;
a second information extraction module, configured to input the second training image into the image information extraction network to obtain feature point information and descriptor information of the second training image, where the descriptor information of the second training image includes geometric topology information of feature points of the second training image;
and the training module is used for training the image information extraction network by using a loss function according to the characteristic point information and the descriptor information of the first training image and the characteristic point information and the descriptor information of the second training image.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image information extraction method according to any one of claims 1 to 6 or the training method of the image information extraction network according to claim 7 or 8.
12. An electronic device, comprising:
a processor;
a memory for storing one or more programs that, when executed by the processor, cause the processor to implement the image information extraction method of any one of claims 1 to 6 or the training method of the image information extraction network of claim 7 or 8.
CN202010817705.8A 2020-08-14 2020-08-14 Image information extraction method, training method and device, medium and electronic equipment Active CN111814811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010817705.8A CN111814811B (en) 2020-08-14 2020-08-14 Image information extraction method, training method and device, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111814811A (en) 2020-10-23
CN111814811B CN111814811B (en) 2024-07-26

Family

ID=72859363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010817705.8A Active CN111814811B (en) 2020-08-14 2020-08-14 Image information extraction method, training method and device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111814811B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130243250A1 (en) * 2009-09-14 2013-09-19 Trimble Navigation Limited Location of image capture device and object features in a captured image
CN103971112A (en) * 2013-02-05 2014-08-06 腾讯科技(深圳)有限公司 Image feature extracting method and device
CN110211164A (en) * 2019-06-05 2019-09-06 中德(珠海)人工智能研究院有限公司 The image processing method of characteristic point operator based on neural network learning basic figure
CN111311588A (en) * 2020-02-28 2020-06-19 浙江商汤科技开发有限公司 Relocation method and apparatus, electronic device and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469985A (en) * 2021-07-13 2021-10-01 中国科学院深圳先进技术研究院 Method for extracting characteristic points of endoscope image
WO2023284246A1 (en) * 2021-07-13 2023-01-19 中国科学院深圳先进技术研究院 Endoscopic image feature point extraction method
CN114639006A (en) * 2022-03-15 2022-06-17 北京理工大学 Loop detection method and device and electronic equipment
CN114639006B (en) * 2022-03-15 2023-09-26 北京理工大学 Loop detection method and device and electronic equipment

Also Published As

Publication number Publication date
CN111814811B (en) 2024-07-26

Similar Documents

Publication Publication Date Title
CN111368685B (en) Method and device for identifying key points, readable medium and electronic equipment
CN112927363B (en) Voxel map construction method and device, computer readable medium and electronic equipment
CN111967515B (en) Image information extraction method, training method and device, medium and electronic equipment
CN111444826B (en) Video detection method, device, storage medium and computer equipment
CN110009059B (en) Method and apparatus for generating a model
CN112598780B (en) Instance object model construction method and device, readable medium and electronic equipment
CN111444744A (en) Living body detection method, living body detection device, and storage medium
CN112270709B (en) Map construction method and device, computer readable storage medium and electronic equipment
CN111476783A (en) Image processing method, device and equipment based on artificial intelligence and storage medium
CN111814811B (en) Image information extraction method, training method and device, medium and electronic equipment
CN114330565A (en) Face recognition method and device
CN113744286A (en) Virtual hair generation method and device, computer readable medium and electronic equipment
CN113920023B (en) Image processing method and device, computer readable medium and electronic equipment
CN114049417B (en) Virtual character image generation method and device, readable medium and electronic equipment
CN113610034B (en) Method and device for identifying character entities in video, storage medium and electronic equipment
CN112258647B (en) Map reconstruction method and device, computer readable medium and electronic equipment
CN113902636A (en) Image deblurring method and device, computer readable medium and electronic equipment
CN113902932A (en) Feature extraction method, visual positioning method and device, medium and electronic equipment
CN114973293B (en) Similarity judging method, key frame extracting method and device, medium and equipment
CN112950641B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN112598074B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN116310615A (en) Image processing method, device, equipment and medium
CN114419279A (en) Three-dimensional object generation method and device, storage medium and electronic equipment
CN113298731A (en) Image color migration method and device, computer readable medium and electronic equipment
CN114067394A (en) Face living body detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant