CN107480581B - Object recognition method and device

Object recognition method and device

Info

Publication number
CN107480581B
CN107480581B (application CN201710207520.3A)
Authority
CN
China
Prior art keywords
feature
identified
feature points
image
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710207520.3A
Other languages
Chinese (zh)
Other versions
CN107480581A (en)
Inventor
严彦
肖洪波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Senscape Technologies Beijing Co ltd
Original Assignee
Senscape Technologies Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Senscape Technologies Beijing Co ltd
Priority to CN201710207520.3A
Publication of CN107480581A
Application granted
Publication of CN107480581B
Active legal-status: Current
Anticipated expiration legal-status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Abstract

The invention discloses an object recognition method comprising the following steps: building a model of a feature point set according to a predetermined rule; and comparing an obtained image against the model to identify the object to be recognized contained in the image. The invention also provides an object recognition device. With the object recognition method and device, object recognition can be carried out automatically and manual effort is reduced.

Description

Object recognition method and device
Technical Field
The invention relates to the field of object recognition, and in particular to an object recognition method and device.
Background
A remote assistant application is an application program for remote services such as mechanical maintenance, after-sales support, or remote guidance. It typically comprises a terminal and a server; when the terminal requires remote service, the server must process the video transmitted by the terminal in order to serve the terminal remotely.
Existing remote services mainly rely on manually comparing the objects contained in the images, which makes the service inefficient and ties up too much manpower.
Therefore, there is a need for an object recognition method and apparatus that operate automatically.
Disclosure of Invention
Embodiments of the invention provide an object recognition method and device for realizing automatic object recognition.
To achieve the above object, an embodiment of the present invention provides an object recognition method, comprising:
building a model of a feature point set according to a predetermined rule;
and comparing an obtained image against the model to identify the object to be recognized contained in the image.
In one embodiment, the step of building a model of the feature point set according to a predetermined rule specifically includes:
acquiring images of an object to be recognized from a plurality of predetermined angles;
extracting, for each image at a predetermined angle, a plurality of feature points within the region of the object to be recognized;
adding the feature points to a globally unified inverted index table;
and, according to a predetermined extraction rule, building the set of feature points near each feature point as its neighbor point set.
In one embodiment, the feature points of the object to be recognized include one or more of: a color feature, a texture feature, a shape feature, or a local feature point.
In one embodiment, the predetermined extraction rule is to extract the points within a predetermined distance of the feature point, or to extract the predetermined number of points closest to the feature point.
In one embodiment, the step of comparing the obtained image against the model to identify the object to be recognized contained in the image specifically includes:
extracting feature points from the obtained image;
acquiring, from the inverted index table, the index lists corresponding to the feature points of the image;
traversing all index items of the index lists and finding the object that occurs most often as the pre-identified object;
finding, from the model, the feature point set C contained in the pre-identified object;
traversing all feature points of the set C, finding for each feature point the N feature points in the set C that are closest to it, and comparing them with the feature points extracted from the image that correspond to the pre-identified object;
judging whether, for more than a preset number of feature points in the set C, the set of feature points near the feature point matches the neighbor relation in the corresponding feature point set extracted from the image for the pre-identified object;
if so, confirming that the object to be recognized in the image is the pre-identified object; otherwise, the object to be recognized is not recognized.
In one embodiment, the index items are distinguished by object category, and the object that occurs most often is the object category in which the feature points of the image occur the largest number of times.
In one embodiment, the preset number is greater than or equal to half of the total number of feature points in the set C.
According to another aspect of the present invention, there is also provided an object recognition device, including:
a building module, configured to build a model of a feature point set according to a predetermined rule;
and a recognition module, configured to compare an obtained image against the model so as to identify the object to be recognized contained in the image.
In one embodiment, the building module includes:
a first acquisition unit, configured to acquire images of an object to be recognized from a plurality of predetermined angles;
a first extraction unit, configured to extract, for each image at a predetermined angle, a plurality of feature points within the region of the object to be recognized;
an adding unit, configured to add the feature points to a globally unified inverted index table;
and a neighbor point set building unit, configured to build, according to a predetermined extraction rule, the set of feature points near each feature point as its neighbor point set.
In one embodiment, the recognition module includes:
a second extraction unit, configured to extract feature points from the obtained image;
a second acquisition unit, configured to acquire, from the inverted index table, the index lists corresponding to the feature points of the image;
a first traversal unit, configured to traverse all index items of the index lists and find the object that occurs most often as the pre-identified object;
a finding unit, configured to find, from the model, the feature point set C contained in the pre-identified object;
a second traversal unit, configured to traverse all feature points of the set C, find for each feature point the N feature points in the set C that are closest to it, and compare them with the feature point set extracted from the image that corresponds to the pre-identified object;
a judging unit, configured to judge whether, for more than a preset number of feature points in the set C, the set of feature points near the feature point matches the neighbor relation in the corresponding feature point set extracted from the image for the pre-identified object;
and a recognition unit, configured to confirm, based on the judgment result of the judging unit, that the object to be recognized in the image is the pre-identified object if the feature point sets near more than the preset number of feature points match the neighbor relation, and otherwise not to recognize the object to be recognized.
According to another aspect of the present invention, there is further provided an object recognition apparatus comprising any one of the object recognition devices described above.
In one embodiment, the object recognition apparatus is a handheld device or an intelligent recognition terminal.
Existing object recognition techniques require considerable manual work, which makes the service inefficient and ties up too much manpower. The object recognition method and device of the invention perform object recognition automatically, with high accuracy and good stability, thereby reducing manual effort and improving efficiency.
Drawings
FIG. 1 is a flowchart of an object recognition method according to an embodiment of the invention;
FIG. 2 is a flowchart of step S120 of the embodiment shown in FIG. 1;
FIG. 3 is a flowchart of step S140 of the embodiment shown in FIG. 1;
FIG. 4 is a block diagram of an object recognition device according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and embodiments, without thereby limiting the invention.
Referring to FIG. 1, an embodiment of the present invention provides an object recognition method, including:
step S120: building a model of a feature point set according to a predetermined rule;
step S140: comparing an obtained image against the model to identify the object to be recognized contained in the image.
Referring to FIG. 2, step S120 further includes the following steps:
step S122: acquiring images of an object to be recognized from a plurality of predetermined angles;
step S124: extracting, for each image at a predetermined angle, a plurality of feature points within the region of the object to be recognized;
step S126: adding the feature points to a globally unified inverted index table;
step S128: according to a predetermined extraction rule, building the set of feature points near each feature point as its neighbor point set.
The feature points of the object to be recognized include one or more of: a color feature, a texture feature, a shape feature, or a local feature point. The predetermined extraction rule is to extract the points within a predetermined distance of each feature point, or to extract the predetermined number of points closest to it.
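For illustration only, the following Python sketch shows one way steps S122 to S128 could be realized. It assumes each feature point carries a fixed-length descriptor that is quantized to a visual word before being added to the inverted index; the quantization step, the vocabulary, and all function and variable names (build_model, quantize, k_neighbors and so on) are assumptions made for this sketch and are not prescribed by the patent.

    from collections import defaultdict
    import numpy as np

    def quantize(descriptor, vocabulary):
        # Map a descriptor to its nearest visual word. This quantization step is an
        # assumption; the patent does not specify how query feature points are
        # matched against the feature points stored in the inverted index.
        dists = np.linalg.norm(vocabulary - descriptor, axis=1)
        return int(np.argmin(dists))

    def build_model(training_views, vocabulary, k_neighbors=5):
        # training_views: iterable of (object_id, keypoints, descriptors), one entry
        # per predetermined-angle image; keypoints are (x, y) positions inside the
        # region of the object to be recognized.
        inverted_index = defaultdict(list)   # visual word -> [object_id, ...]   (step S126)
        feature_sets = defaultdict(list)     # object_id -> [(x, y), ...]        (the set C)
        neighbor_sets = defaultdict(dict)    # object_id -> {point index -> neighbor indices}

        for object_id, keypoints, descriptors in training_views:
            base = len(feature_sets[object_id])
            for pt, desc in zip(keypoints, descriptors):
                word = quantize(desc, vocabulary)
                inverted_index[word].append(object_id)
                feature_sets[object_id].append(pt)

            # Step S128: here the neighbor point set is taken as the k spatially
            # closest points within the same view, i.e. the "predetermined number
            # of closest points" variant of the extraction rule.
            pts = np.asarray(keypoints, dtype=float)
            for i in range(len(pts)):
                d = np.linalg.norm(pts - pts[i], axis=1)
                nearest = np.argsort(d)[1:k_neighbors + 1]   # skip the point itself
                neighbor_sets[object_id][base + i] = [base + int(j) for j in nearest]

        return inverted_index, feature_sets, neighbor_sets

Keeping a single inverted index across all objects and views is what allows a feature point of a query image to be looked up later and directly return candidate object categories.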
Further, image features may include color features, texture features, shape features, local feature points, and the like. Among these, local features have good stability and are not easily disturbed by the external environment.
Image feature extraction is a precondition for image analysis and image recognition, and is the most effective way to simplify and express high-dimensional image data: no useful information can be read directly from the M × N × 3 data matrix of an image, so the key information, the basic elements, and their relationships must be extracted from the data.
A local feature point is a local expression of an image feature and can only reflect local characteristics of the image, so it is mainly suited to applications such as image matching and retrieval rather than image understanding. Image understanding is more concerned with global features such as color distribution, texture features, and the shape of the main object. Global features are susceptible to adverse factors such as environmental interference, illumination, rotation, and noise. In contrast, local feature points often correspond to corners, line intersections, and structures with brightness changes in the image, and are less affected by such interference.
Blobs and corner points are two types of local feature points. A blob is typically a region whose color or brightness differs from its surroundings, such as a tree or a house on a grassland. Because it is a region, it is more robust to noise and more stable than a corner point. Corner points are the corners of objects or the intersections between lines in the image.
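As an illustration of local feature point extraction, the sketch below uses OpenCV's ORB detector. The patent does not name a specific detector or descriptor, so ORB, the max_points parameter, and the function name are assumptions chosen only to make the example concrete; any corner or blob detector would serve equally well.

    import cv2

    def extract_local_features(image_path, max_points=500):
        # Detect local feature points (corner/blob-like structures) and compute
        # descriptors for one predetermined-angle image. ORB is an illustrative
        # choice, not a detector prescribed by the patent.
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            raise FileNotFoundError(image_path)
        orb = cv2.ORB_create(nfeatures=max_points)
        keypoints, descriptors = orb.detectAndCompute(img, None)
        points = [kp.pt for kp in keypoints]   # (x, y) positions of the feature points
        return points, descriptors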
Referring to FIG. 3, the step of comparing the obtained image against the model to identify the object to be recognized contained in the image specifically includes the following steps:
step S141: extracting feature points from the obtained image;
step S142: acquiring, from the inverted index table, the index lists corresponding to the feature points of the image;
step S143: traversing all index items of the index lists and finding the object that occurs most often as the pre-identified object;
step S148: finding, from the model, the feature point set C contained in the pre-identified object;
step S144: traversing all feature points of the set C, finding for each feature point the N feature points in the set C that are closest to it, and comparing them with the feature points extracted from the image that correspond to the pre-identified object;
step S145: judging whether, for more than a preset number of feature points in the set C, the set of feature points near the feature point matches the neighbor relation in the corresponding feature point set extracted from the image for the pre-identified object.
The specific method for judging whether two feature points are in a matching neighbor relation is as follows: arrange the m feature points nearest to a given feature point into a sequence ordered from near to far, do the same for its counterpart, find a common subsequence of the two sequences, and compute its length. If the length reaches a predetermined proportion (for example, 80%), the neighbor relations of the two feature points are considered to match. In practice, different predetermined proportions can be set to control how strict the neighbor-relation check is: when the requirement on object recognition is high, the predetermined proportion is increased; conversely, it can be decreased. It is then counted for how many feature points the nearby feature point sets match the neighbor relation and for how many they do not, and this count is compared against the preset number. Optionally, the preset number is greater than or equal to half of the total number of feature points in the set C.
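The following sketch shows one way this neighbor-relation check could be coded, using the longest common subsequence of the two near-to-far neighbor sequences. It assumes the neighbors on both sides have already been expressed in comparable identifiers (for example matched feature IDs or visual words); that assumption, the function names, and the default threshold of 0.8 (mirroring the 80% example above) are illustrative rather than taken from the patent text.

    def lcs_length(a, b):
        # Classic dynamic-programming longest-common-subsequence length.
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[m][n]

    def neighbor_relation_matches(model_neighbors, image_neighbors, ratio=0.8):
        # model_neighbors / image_neighbors: the m feature points nearest to a model
        # point and to its counterpart in the image, ordered near-to-far and expressed
        # as comparable identifiers. Raising ratio makes the check stricter; lowering
        # it makes the check looser, as described above.
        if not model_neighbors:
            return False
        return lcs_length(model_neighbors, image_neighbors) / len(model_neighbors) >= ratio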
Step S146: if so, confirm that the object to be recognized in the image is the pre-identified object; that is, the object is recognized when the feature point sets near more than the preset number of feature points match the neighbor relation.
Step S147: otherwise, the object to be recognized is not recognized; that is, when the feature point sets near no more than the preset number of feature points match the neighbor relation, the object to be recognized is not recognized.
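Combining the per-point check with the preset-number threshold, the final decision of steps S145 to S147 could look like the short sketch below; the default threshold of half the size of the set C follows the optional rule above, and the function name is illustrative.

    def recognize(consistency_flags, preset_number=None):
        # consistency_flags: one boolean per feature point of the set C, True when
        # that point's neighbor set matches the neighbor relation in the image.
        if preset_number is None:
            preset_number = len(consistency_flags) / 2   # optional rule: at least half of |C|
        return sum(consistency_flags) > preset_number    # step S146 if True, step S147 otherwise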
The index items are distinguished by object category, and the object that occurs most often is the object category in which the feature points of the image occur the largest number of times.
An index item may be an entry in a list of object categories. Each index list is keyed by a feature point, so the inverted index table is organized from feature point to object category. The number of occurrences is counted separately for each object category, and the category in which the feature points of the image occur the largest number of times is taken as the pre-identified object.
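A minimal sketch of this voting step (steps S142 and S143) is given below, assuming the same kind of visual-word inverted index as in the model-building sketch above; the function name and the use of collections.Counter are illustrative choices.

    from collections import Counter

    def pre_identify(query_words, inverted_index):
        # query_words: index keys (e.g. visual words) of the feature points extracted
        # from the obtained image; inverted_index maps each key to the object
        # categories in which that feature point occurs.
        votes = Counter()
        for word in query_words:
            for object_id in inverted_index.get(word, []):   # traverse the index list
                votes[object_id] += 1                        # one occurrence per index item
        if not votes:
            return None                                      # no pre-identified object
        return votes.most_common(1)[0][0]                    # category with the most occurrences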
FIG. 4 is a block diagram of an object recognition device according to an embodiment of the present invention.
Referring to FIG. 4, an object recognition device 200 includes a building module 210 and a recognition module 230. The building module 210 builds a model of a feature point set according to a predetermined rule, and the recognition module 230 is configured to compare an obtained image against the model to identify the object to be recognized contained in the image.
Referring to FIG. 4, the building module 210 includes a first acquisition unit 211, a first extraction unit 212, an adding unit 213, and a neighbor point set building unit 214.
The first acquisition unit 211 acquires images of an object to be recognized from a plurality of predetermined angles; the first extraction unit 212 extracts, for each image at a predetermined angle, a plurality of feature points within the region of the object to be recognized; the adding unit 213 adds the feature points to a globally unified inverted index table; and the neighbor point set building unit 214 builds, according to a predetermined extraction rule, the set of feature points near each feature point as its neighbor point set.
The recognition module 230 includes a second extraction unit 231, a second acquisition unit 232, a first traversal unit 233, a finding unit 238, a second traversal unit 234, a judging unit 235, and a recognition unit 236.
The second extraction unit 231 extracts feature points from the obtained image; the second acquisition unit 232 acquires, from the inverted index table, the index lists corresponding to the feature points of the image; the first traversal unit 233 traverses all index items of the index lists and finds the object that occurs most often as the pre-identified object; the finding unit 238 finds, from the model, the feature point set C contained in the pre-identified object; the second traversal unit 234 traverses all feature points of the set C, finds for each feature point the N feature points in the set C that are closest to it, and compares them with the feature point set extracted from the image that corresponds to the pre-identified object; the judging unit 235 judges whether, for more than a preset number of feature points in the set C, the set of feature points near the feature point matches the neighbor relation in the corresponding feature point set extracted from the image for the pre-identified object; and the recognition unit 236 confirms, based on the judgment result of the judging unit, that the object to be recognized in the image is the pre-identified object if the feature point sets near more than the preset number of feature points match the neighbor relation, and otherwise does not recognize the object to be recognized.
The device 200 implements object recognition according to the method described above.
The present invention also provides an object recognition apparatus comprising any one of the object recognition devices 200 described above. The object recognition apparatus may be a handheld device or an intelligent recognition terminal.
With the above object recognition method, device, and apparatus, recognition of an image and of the objects contained in it can be realized automatically on various handheld devices or recognition terminals. Object recognition is performed automatically, with high accuracy and good stability, which reduces manual effort and improves efficiency. Once the target object has been recognized, an important premise is provided for subsequently offering corresponding services, for example dialing the corresponding customer service, pushing a corresponding instruction, or calling up a corresponding solution for the recognized object. Remote self-service is thus realized.
Of course, the above are preferred embodiments of the present invention. It should be noted that a person skilled in the art can make several modifications and refinements without departing from the basic principle of the invention, and such modifications and refinements are also considered to fall within the protective scope of the invention.

Claims (8)

1. An object recognition method, comprising:
building a model of a feature point set according to a predetermined rule;
comparing an obtained image against the model to identify the object to be recognized contained in the image, which specifically comprises:
extracting feature points from the obtained image;
acquiring, from an inverted index table, the index lists corresponding to the feature points of the image, wherein the inverted index table contains, for each of a plurality of images of an object to be recognized taken at predetermined angles, a plurality of feature points extracted within the region of the object to be recognized;
traversing all index items of the index lists and finding the object that occurs most often as the pre-identified object;
finding, from the model, the feature point set C contained in the pre-identified object;
traversing all feature points of the set C, finding for each feature point the N feature points in the set C that are closest to it, and comparing them with the feature points extracted from the image that correspond to the pre-identified object;
judging whether, for more than a preset number of feature points in the set C, the set of feature points near the feature point matches the neighbor relation in the corresponding feature point set extracted from the image for the pre-identified object;
if so, confirming that the object to be recognized in the image is the pre-identified object; otherwise, the object to be recognized is not recognized.
2. The object recognition method according to claim 1, wherein the step of building a model of the feature point set according to a predetermined rule specifically comprises:
acquiring images of an object to be recognized from a plurality of predetermined angles;
extracting, for each image at a predetermined angle, a plurality of feature points within the region of the object to be recognized;
adding the feature points to a globally unified inverted index table;
and, according to a predetermined extraction rule, building the set of feature points near each feature point as its neighbor point set.
3. The object recognition method according to claim 2, wherein the feature points of the object to be recognized include one or more of: a color feature, a texture feature, a shape feature, or a local feature point.
4. The object recognition method according to claim 2 or 3, wherein the predetermined extraction rule is to extract the points within a predetermined distance of the feature point, or to extract the predetermined number of points closest to the feature point.
5. The object recognition method according to claim 1, wherein the index items are distinguished by object category, and the object that occurs most often is the object category in which the feature points of the image occur the largest number of times.
6. The object recognition method according to claim 1, wherein the preset number is greater than or equal to half of the total number of feature points in the set C.
7. An object recognition device, comprising:
a building module, configured to build a model of a feature point set according to a predetermined rule;
a recognition module, configured to compare an obtained image against the model so as to identify the object to be recognized contained in the image, the recognition module comprising:
a second extraction unit, configured to extract feature points from the obtained image;
a second acquisition unit, configured to acquire, from an inverted index table, the index lists corresponding to the feature points of the image, wherein the inverted index table contains, for each of a plurality of images of an object to be recognized taken at predetermined angles, a plurality of feature points extracted within the region of the object to be recognized;
a first traversal unit, configured to traverse all index items of the index lists and find the object that occurs most often as the pre-identified object;
a finding unit, configured to find, from the model, the feature point set C contained in the pre-identified object;
a second traversal unit, configured to traverse all feature points of the set C, find for each feature point the N feature points in the set C that are closest to it, and compare them with the feature point set extracted from the image that corresponds to the pre-identified object;
a judging unit, configured to judge whether, for more than a preset number of feature points in the set C, the set of feature points near the feature point matches the neighbor relation in the corresponding feature point set extracted from the image for the pre-identified object;
and a recognition unit, configured to confirm, based on the judgment result of the judging unit, that the object to be recognized in the image is the pre-identified object if the feature point sets near more than the preset number of feature points match the neighbor relation, and otherwise not to recognize the object to be recognized.
8. The object recognition device according to claim 7, wherein the building module comprises:
a first acquisition unit, configured to acquire images of an object to be recognized from a plurality of predetermined angles;
a first extraction unit, configured to extract, for each image at a predetermined angle, a plurality of feature points within the region of the object to be recognized;
an adding unit, configured to add the feature points to a globally unified inverted index table;
and a neighbor point set building unit, configured to build, according to a predetermined extraction rule, the set of feature points near each feature point as its neighbor point set.
CN201710207520.3A 2017-03-31 2017-03-31 Object recognition method and device Active CN107480581B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710207520.3A CN107480581B (en) 2017-03-31 2017-03-31 Object recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710207520.3A CN107480581B (en) 2017-03-31 2017-03-31 Object recognition method and device

Publications (2)

Publication Number Publication Date
CN107480581A CN107480581A (en) 2017-12-15
CN107480581B (en) 2021-06-15

Family

ID=60594008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710207520.3A Active CN107480581B (en) 2017-03-31 2017-03-31 Object recognition method and device

Country Status (1)

Country Link
CN (1) CN107480581B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7990380B2 (en) * 2004-09-30 2011-08-02 Intel Corporation Diffuse photon map decomposition for parallelization of global illumination algorithm

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101859326A (en) * 2010-06-09 2010-10-13 南京大学 Image searching method
CN103999097A (en) * 2011-07-11 2014-08-20 华为技术有限公司 System and method for compact descriptor for visual search
CN104156362A (en) * 2013-05-14 2014-11-19 视辰信息科技(上海)有限公司 Large-scale image feature point matching method
CN103793466A (en) * 2013-12-20 2014-05-14 深圳先进技术研究院 Image retrieval method and image retrieval device
CN103996207A (en) * 2014-04-28 2014-08-20 清华大学深圳研究生院 Object tracking method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An Image Indexing and Retrieval Model Using Reasoning Services; Suihua Wang et al.; 2009 International Conference on Multimedia Information Networking and Security; Dec. 31, 2009; pp. 193-196 *
Application of an ordered KD-tree to image feature matching; Xiong Yunyan et al.; 《化工自动化及仪表》 (Control and Instruments in Chemical Industry); Oct. 2010; vol. 37, no. 10; pp. 84-87 *

Also Published As

Publication number Publication date
CN107480581A (en) 2017-12-15

Similar Documents

Publication Publication Date Title
CN107480236B (en) Information query method, device, equipment and medium
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
CN111563509B (en) Tesseract-based substation terminal row identification method and system
US20130243249A1 (en) Electronic device and method for recognizing image and searching for concerning information
CN103679147A (en) Method and device for identifying model of mobile phone
CN108009928B (en) Electronic insurance policy signing method and device, computer equipment and storage medium
CN104506857A (en) Camera position deviation detection method and device
CN109766891B (en) Method for acquiring equipment facility information and computer readable storage medium
US20180157682A1 (en) Image information processing system
CN109116129B (en) Terminal detection method, detection device, system and storage medium
CN106610983A (en) Picture management method and apparatus, and terminal
CN106326454A (en) Image identification method
CN105760844A (en) Video stream data processing method, apparatus and system
CN104977011A (en) Positioning method and positioning device based on street-photographing image in electronic map
CN106203406A (en) A kind of identification system based on cloud computing
CN109255408A (en) A kind of Construction Safety hidden troubles removing method for positioning and taking pictures based on electronic beacon
CN104142955A (en) Method and terminal for recommending learning courses
CN113807342A (en) Method and related device for acquiring equipment information based on image
CN111368867A (en) Archive classification method and system and computer readable storage medium
CN108399621B (en) Engineering test piece rapid identification method and system
CN107480581B (en) Object recognition method and device
CN112434049A (en) Table data storage method and device, storage medium and electronic device
CN109919164B (en) User interface object identification method and device
AU2019200458B2 (en) Method and system for acquiring data files of blocks of land and of building plans and for automatic making of matches thereof
CN112232295B (en) Method and device for confirming newly-added target ship and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant