CN112836592A - Image identification method, system, electronic equipment and storage medium - Google Patents
Image identification method, system, electronic equipment and storage medium
- Publication number
- CN112836592A CN112836592A CN202110051122.3A CN202110051122A CN112836592A CN 112836592 A CN112836592 A CN 112836592A CN 202110051122 A CN202110051122 A CN 202110051122A CN 112836592 A CN112836592 A CN 112836592A
- Authority
- CN
- China
- Prior art keywords
- image
- article
- identified
- coordinate information
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Abstract
The invention provides an image recognition method, an image recognition system, an electronic device, and a storage medium, wherein the method comprises the following steps: inputting a collected image to be recognized into a trained deep learning network, and obtaining the position coordinate information of each article in the image and the name of the category to which each article belongs, both output by the deep learning network; and finding the vector information of any article identified in the image in the three-dimensional texture model corresponding to the image, according to that article's position coordinate information. After the categories of the articles in the image are identified, the vector information of each identified article is measured on the texture model based on the correspondence between the image and the texture model, so that both the category of each article in the image and the vector information of each article can be identified.
Description
Technical Field
The present invention relates to the field of image recognition, and more particularly, to an image recognition method, system, electronic device, and storage medium.
Background
For aerial image data, or for the item pages of a shopping platform, the categories to which the various objects in the images or pages belong need to be identified, that is, which category each object or article falls into. For example, for captured bridge image data, the specific details of the bridge need to be obtained from the image data; for a goods page, the classification of the various goods needs to be identified from it.
At present, the categories of the various articles or objects in an image can only be identified and marked manually; automatic identification is not available, so identification efficiency is low and accuracy is poor.
Disclosure of Invention
The present invention provides an image recognition method, system, electronic device, and storage medium that overcome, or at least partially solve, the above problems.
According to a first aspect of the present invention, there is provided an image recognition method comprising: inputting a collected image to be recognized into a trained deep learning network, and obtaining the position coordinate information of each article in the image and the name of the category to which each article belongs, both output by the deep learning network; and finding the vector information of any article identified in the image in the three-dimensional texture model corresponding to the image, according to that article's position coordinate information.
On the basis of the above technical solution, the invention may be further improved as follows.
Optionally, before inputting the collected image into the trained deep learning network, the method further includes: collecting a plurality of images, each including at least one article, and labeling the position coordinate information of each article in each image and the name of the category to which it belongs; forming a training data set from the plurality of images and the labeled position coordinate information and category names; and training the deep learning network with the training data set.
Optionally, before finding the vector information of any article in the three-dimensional texture model corresponding to the image to be identified, the method further includes: acquiring a plurality of images to be identified of a specific area, and performing space-three (aerial triangulation) matching on them to obtain the point cloud data of the images; and constructing the three-dimensional texture model of the image to be identified from that point cloud data. The point cloud data of the image to be identified comprises the three-dimensional coordinate information of each article in the image and the vector information of each article.
Optionally, the position coordinate information of any article identified from the image to be identified is two-dimensional position coordinate information; correspondingly, the finding the vector information of any article in the three-dimensional texture model corresponding to the image to be identified according to the position coordinate information of any article identified from the image to be identified includes: and finding three-dimensional coordinate information corresponding to the two-dimensional position coordinate information of any article identified from the image to be identified from the three-dimensional texture model, and acquiring vector information of the article corresponding to the three-dimensional coordinate information from the three-dimensional texture model.
Optionally, the vector information of an article at least includes size information, such as width information, length information, and height information.
According to a second aspect of the present invention, there is provided an image recognition system comprising: a first acquisition module, configured to input the collected image to be recognized into the trained deep learning network and to acquire the position coordinate information of each article in the image and the name of the category to which each article belongs, as output by the deep learning network; and a searching module, configured to find the vector information of any identified article in the three-dimensional texture model corresponding to the image, according to that article's position coordinate information.
Optionally, the system further includes: a collecting module, configured to collect a plurality of images, each including at least one article, to label the position coordinate information of each article in each image and the name of the category to which it belongs, and to form a training data set from the plurality of images and the labeled position coordinate information and category names; and a training module, configured to train the deep learning network with the training data set.
Optionally, the system further includes: a second acquisition module, configured to acquire a plurality of images to be identified of a specific area, to perform space-three (aerial triangulation) matching on them, and to obtain the point cloud data of the images; and a construction module, configured to construct the three-dimensional texture model of the image to be identified from that point cloud data, the point cloud data comprising the three-dimensional coordinate information and the vector information of each article in the image.
According to a third aspect of the present invention, there is provided an electronic device comprising a memory and a processor, the processor implementing the steps of the image recognition method when executing a computer program stored in the memory.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, performs the steps of the image recognition method.
The invention provides an image recognition method, system, electronic device, and storage medium. A collected image to be recognized is input into a trained deep learning network, and the position coordinate information of each article in the image and the name of the category to which each article belongs, both output by the deep learning network, are obtained; the vector information of any identified article is then found in the three-dimensional texture model corresponding to the image, according to that article's position coordinate information. After the categories of the articles in the image are identified, the vector information of each identified article is measured on the texture model based on the correspondence between the image and the texture model, so that both the category of each article in the image and the vector information of each article can be identified.
Drawings
Fig. 1 is a flowchart of an image recognition method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a training process for the deep learning network of the present invention;
FIG. 3 is a flow chart of a process for constructing a three-dimensional texture model according to the present invention;
FIG. 4 is a flow chart of identifying the vector information of an article by means of the three-dimensional texture model;
FIG. 5 is a schematic diagram of an image recognition system according to the present invention;
FIG. 6 is a schematic diagram of another embodiment of the image recognition system according to the present invention;
FIG. 7 is a schematic diagram of a hardware structure of a possible electronic device according to the present invention;
fig. 8 is a schematic diagram of a hardware structure of a possible computer-readable storage medium according to the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Fig. 1 is a flowchart of the image recognition method provided by the present invention. As shown in fig. 1, the method includes: 101, inputting a collected image to be recognized into a trained deep learning network, and obtaining the position coordinate information of each article in the image and the name of the category to which each article belongs, both output by the deep learning network; 102, finding the vector information of any article identified in the image in the three-dimensional texture model corresponding to the image, according to that article's position coordinate information.
It will be understood that an image or picture contains various types of objects that need to be classified, that is, the category of each object in the image must be identified. For example, an image may capture the road information of a certain area, including bridge piers, bridge pillars, and signboards, or it may show various commodities, such as clothes and tableware. The type of each item in the image is called the category of the article, and the vector information of an article can be understood as the specific attribute information of that article.
In order to identify both the category and the vector information of each article in an image, the invention provides an image identification method capable of recognizing the two together. Based on the position coordinate information of each article, the vector information of each article is obtained from the three-dimensional texture model corresponding to the image to be identified, so that both the category name and the vector information of every article in the image are obtained.
According to the method, after the categories of the articles in the images are identified, the vector information of the categories of the identified articles is measured on the texture model based on the corresponding relation between the images and the texture model, so that not only can the categories of the articles in the images be identified, but also the vector information of each category can be identified.
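The two steps described above can be sketched in code. This is an illustrative outline only, not the patented implementation; `toy_detect` and `toy_lookup` are hypothetical stand-ins for the trained deep learning network and the texture-model lookup:

```python
def recognize(image, detect, lookup_vector):
    """Step 101: detect -> [(box, category_name)] for each article;
    step 102: per-box lookup of the article's vector information
    in the three-dimensional texture model."""
    return [(category, lookup_vector(box)) for box, category in detect(image)]

# Illustrative stand-ins (not real models):
def toy_detect(image):
    # one detected article: bounding box (xmin, ymin, xmax, ymax) + category name
    return [((10, 20, 50, 60), "bridge pier")]

def toy_lookup(box):
    # vector information as stored in the texture model (assumed layout)
    return {"width": 2.0, "length": 2.0, "height": 8.5}

print(recognize("image.jpg", toy_detect, toy_lookup))
```

Any real detector with a box-plus-label output format could be substituted for `toy_detect` without changing the shape of the pipeline.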
In a possible embodiment, before inputting the acquired image into the trained deep learning network, the method further includes: collecting a plurality of images, wherein each image comprises at least one article, and marking the position coordinate information of each article in each image and the category name of each article; forming a training data set by the position coordinate information of each article in the plurality of images and each marked image and the category name of each article; and training the deep learning network by utilizing the training data set.
It can be understood that, referring to fig. 2, the training process of the deep learning network is as follows: a plurality of images is collected, each containing at least one article, and the position coordinate information of each article in the image and the name of the category to which it belongs are labeled in advance. The images, together with the labeled position coordinate information and category names, form the training data set of the deep learning network, and the network is trained with this data set. The trained deep learning network is then used to identify the category name and position coordinate information of each article in an image.
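A minimal sketch of assembling such a training data set follows. The record layout (`Annotation` with a box and a category name) is an assumption made for illustration; real detector frameworks each define their own annotation format:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    box: tuple     # position coordinate information (xmin, ymin, xmax, ymax)
    category: str  # name of the category the article belongs to

def build_training_set(images, annotations):
    """Pair each collected image with its pre-labeled annotations;
    every image must contain at least one labeled article."""
    assert len(images) == len(annotations)
    assert all(len(a) >= 1 for a in annotations)
    return list(zip(images, annotations))

dataset = build_training_set(
    ["img_001.jpg", "img_002.jpg"],
    [[Annotation((5, 5, 40, 40), "signboard")],
     [Annotation((0, 10, 30, 80), "bridge pillar"),
      Annotation((50, 10, 90, 80), "bridge pier")]],
)
print(len(dataset))  # 2
```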
In a possible embodiment, before finding the vector information of any article in the three-dimensional texture model corresponding to the image to be recognized according to the position coordinate information of any article recognized from the image to be recognized, the method further includes: acquiring a plurality of images to be identified in a specific area, and performing space-three matching on the plurality of images to be identified to acquire point cloud data of the images to be identified; constructing a three-dimensional texture model of the image to be identified according to the point cloud data of the image to be identified; the point cloud data of the image to be identified comprises three-dimensional coordinate information of each article in the image to be identified and vector information of each article.
It can be understood that, referring to fig. 3, each image has a corresponding three-dimensional texture model. To construct the model for an image, a sequence of images of the photographed area is obtained from different angles; the captured image sequence can be sorted by time, and space-three (aerial triangulation) matching is performed on the images to obtain their point cloud data.
Because the captured image sequence has a certain degree of overlap, it is imported into Lensphoto software for space-three matching. Specifically, the same-name (homologous) points of every two adjacent images are automatically matched, and the colored point clouds between adjacent images are fused, thereby obtaining the overall point cloud data of the photographed area.
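The same-name point matching can be illustrated with a toy nearest-descriptor pairing. The actual space-three matching performed by photogrammetry software is far more involved (geometric verification, bundle adjustment, and so on), so treat this purely as a conceptual sketch:

```python
def match_homologous(desc_a, desc_b):
    """Pair each feature descriptor in image A with the nearest descriptor
    in overlapping image B (squared Euclidean distance)."""
    def sqdist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q))
    return [(i, min(range(len(desc_b)), key=lambda j: sqdist(da, desc_b[j])))
            for i, da in enumerate(desc_a)]

# Two overlapping images with two toy descriptors each:
a = [(0.1, 0.9), (0.8, 0.2)]
b = [(0.82, 0.18), (0.12, 0.88)]
print(match_homologous(a, b))  # [(0, 1), (1, 0)]
```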
Based on the overall point cloud data of the photographed area, a triangulated irregular network, that is, the bare mesh, is obtained through normal calculation; the original images are then mapped onto the mesh to obtain the three-dimensional texture model of the image.
The point cloud data comprises the three-dimensional coordinate information of each article in the image and the vector information of each article, and the three-dimensional texture model of the image is constructed from this point cloud data. The vector information of each article may include size information such as width, length, and height.
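As a simplified illustration of where such size information could come from, an article's width, length, and height can be read off as the axis-aligned extents of its 3D point cluster. A real texture model stores richer data; this is only a sketch under that simplifying assumption:

```python
def vector_info(points3d):
    """Axis-aligned extents of an article's 3D point cluster:
    width along x, length along y, height along z."""
    xs, ys, zs = zip(*points3d)
    return {
        "width":  max(xs) - min(xs),
        "length": max(ys) - min(ys),
        "height": max(zs) - min(zs),
    }

print(vector_info([(0.0, 0.0, 0.0), (2.0, 3.0, 4.0), (1.0, 1.0, 2.0)]))
# {'width': 2.0, 'length': 3.0, 'height': 4.0}
```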
In one possible embodiment, the position coordinate information of any article identified from the image to be identified is two-dimensional position coordinate information; correspondingly, according to the position coordinate information of any article identified from the image to be identified, finding the vector information of any article in the three-dimensional texture model corresponding to the image to be identified comprises: and finding the three-dimensional coordinate information corresponding to the two-dimensional position coordinate information of any article identified from the image to be identified from the three-dimensional texture model, and acquiring the vector information of the article corresponding to the three-dimensional coordinate information from the three-dimensional texture model.
It can be understood that, referring to fig. 4, the position coordinate information that the deep learning network identifies for each article in the image to be recognized is two-dimensional. The corresponding three-dimensional position coordinate information is found in the three-dimensional texture model of the image according to those two-dimensional coordinates, and the vector information of the article at that three-dimensional position is obtained. This vector information mainly comprises the size information of the article, such as width, length, and height, so that both the category name and the vector information of each article in the image to be recognized can be obtained.
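A toy version of this 2D-to-3D lookup is shown below. The correspondence table standing in for the texture model, and its `(uv, xyz, vector_info)` layout, are hypothetical; the patent does not specify a storage format:

```python
import math

def find_vector(point2d, correspondences):
    """correspondences: (uv, xyz, vector_info) triples, i.e. a 2D image
    position, its 3D coordinate in the texture model, and the stored
    vector information. Return the xyz and vector information of the
    entry whose 2D position is nearest to `point2d`."""
    def dist(uv):
        return math.hypot(uv[0] - point2d[0], uv[1] - point2d[1])
    uv, xyz, vector = min(correspondences, key=lambda c: dist(c[0]))
    return xyz, vector

table = [
    ((12.0, 30.0), (100.0, 200.0, 5.0), {"width": 1.2, "length": 1.2, "height": 6.0}),
    ((80.0, 90.0), (140.0, 260.0, 0.0), {"width": 3.0, "length": 2.0, "height": 0.3}),
]
print(find_vector((14.0, 28.0), table))
```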
Fig. 5 is a structural diagram of the image recognition system provided by the present invention. As shown in fig. 5, the image recognition system includes a first obtaining module 501 and a searching module 502, where:
a first obtaining module 501, configured to input the acquired image to be recognized into the trained deep learning network, and obtain position coordinate information of each article in the image to be recognized output by the deep learning network and a category name to which each article belongs;
the searching module 502 is configured to search, according to the position coordinate information of any article identified from the image to be identified, the vector information of any article in the three-dimensional texture model corresponding to the image to be identified.
Referring to fig. 6, the image recognition system further includes a collection module 504, a training module 505, a second acquisition module 506, and a construction module 507, wherein:
a collecting module 504, configured to collect a plurality of images, each including at least one article, to label the position coordinate information of each article in each image and the name of the category to which it belongs, and to form a training data set from the plurality of images and the labeled position coordinate information and category names; and a training module 505, configured to train the deep learning network with the training data set.
A second obtaining module 506, configured to obtain multiple images to be identified in a specific area, perform null-three matching on the multiple images to be identified, and obtain point cloud data of the images to be identified; the building module 507 is used for building a three-dimensional texture model of the image to be identified according to the point cloud data of the image to be identified; the point cloud data of the image to be identified comprises three-dimensional coordinate information of each article in the image to be identified and vector information of each article.
Referring to fig. 7, fig. 7 is a schematic view of an embodiment of an electronic device according to the present invention. As shown in fig. 7, the present invention provides an electronic device, which includes a memory 710, a processor 720, and a computer program 711 stored in the memory 710 and runnable on the processor 720; when executing the computer program 711, the processor 720 implements the following steps: inputting a collected image to be recognized into a trained deep learning network, and obtaining the position coordinate information of each article in the image and the name of the category to which each article belongs, both output by the deep learning network; and finding the vector information of any article identified in the image in the three-dimensional texture model corresponding to the image, according to that article's position coordinate information.
Referring to fig. 8, fig. 8 is a schematic diagram of an embodiment of a computer-readable storage medium according to the present invention. As shown in fig. 8, the present embodiment provides a computer-readable storage medium 800 having a computer program 811 stored thereon, the computer program 811 realizing the following steps when executed by a processor: inputting the collected images to be recognized into a trained deep learning network, and acquiring position coordinate information of each article in the images to be recognized and the name of a category to which each article belongs, wherein the position coordinate information is output by the deep learning network; and finding the vector information of any article in the three-dimensional texture model corresponding to the image to be identified according to the position coordinate information of any article identified from the image to be identified.
The invention provides an image recognition method, system, electronic device, and storage medium. A collected image to be recognized is input into a trained deep learning network, and the position coordinate information of each article in the image and the name of the category to which each article belongs, both output by the deep learning network, are obtained; the vector information of any identified article is then found in the three-dimensional texture model corresponding to the image, according to that article's position coordinate information. After the categories of the articles in the image are identified, the vector information of each identified article is measured on the texture model based on the correspondence between the image and the texture model, so that both the category of each article in the image and the vector information of each article can be identified.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include such modifications and variations.
Claims (10)
1. An image recognition method, comprising:
inputting the collected images to be recognized into a trained deep learning network, and acquiring position coordinate information of each article in the images to be recognized and the name of a category to which each article belongs, wherein the position coordinate information is output by the deep learning network;
and finding the vector information of any article in the three-dimensional texture model corresponding to the image to be identified according to the position coordinate information of any article identified from the image to be identified.
2. The image recognition method of claim 1, wherein, before the inputting of the collected image to be recognized into the trained deep learning network, the method further comprises:
collecting a plurality of images, wherein each image comprises at least one article, and marking the position coordinate information of each article in each image and the category name of each article;
forming a training data set by the position coordinate information of each article in the plurality of images and each marked image and the category name of each article;
and training the deep learning network by utilizing the training data set.
3. The image recognition method according to claim 1, wherein, before the finding of the vector information of any article in the three-dimensional texture model corresponding to the image to be recognized according to the position coordinate information of that article, the method further comprises:
acquiring a plurality of images to be identified in a specific area, and performing space-three matching on the plurality of images to be identified to acquire point cloud data of the images to be identified;
constructing a three-dimensional texture model of the image to be identified according to the point cloud data of the image to be identified;
the point cloud data of the image to be identified comprises three-dimensional coordinate information of each article in the image to be identified and vector information of each article.
4. The image recognition method according to claim 3, wherein the position coordinate information of any article identified from the image to be recognized is two-dimensional position coordinate information; and correspondingly, the finding of the vector information of that article in the three-dimensional texture model corresponding to the image to be recognized, according to the position coordinate information of the article, comprises:
finding, in the three-dimensional texture model, the three-dimensional coordinate information corresponding to the two-dimensional position coordinate information of the article identified from the image to be recognized, and acquiring, from the three-dimensional texture model, the vector information of the article corresponding to that three-dimensional coordinate information.
5. The image recognition method according to claim 3 or 4, wherein the vector information of an article comprises at least size information, including the width, length, and height of the article.
6. An image recognition system, comprising:
the first acquisition module, configured to input the collected image to be recognized into the trained deep learning network and to acquire the position coordinate information of each article in the image to be recognized and the category name of each article, as output by the deep learning network;
and the searching module, configured to find, according to the position coordinate information of any article identified from the image to be recognized, the vector information of that article in the three-dimensional texture model corresponding to the image to be recognized.
7. The image recognition system of claim 6, further comprising:
the collecting module, configured to collect a plurality of images, each image comprising at least one article, to label the position coordinate information of each article in each image and the category name of each article, and to form a training data set from the plurality of images together with the labeled position coordinate information of each article in each image and the category name of each article;
and the training module, configured to train the deep learning network using the training data set.
8. The image recognition system according to claim 6 or 7, further comprising:
the second acquisition module, configured to acquire a plurality of images to be recognized of a specific area and to perform aerial triangulation ("space-three") matching on the plurality of images to be recognized to obtain point cloud data of the images to be recognized;
and the building module, configured to build a three-dimensional texture model of the image to be recognized from the point cloud data of the image to be recognized;
wherein the point cloud data of the image to be recognized comprises the three-dimensional coordinate information of each article in the image to be recognized and the vector information of each article.
9. An electronic device, comprising a memory and a processor, wherein the processor implements the steps of the image recognition method according to any one of claims 1 to 5 when executing a computer program stored in the memory.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of the image recognition method according to any one of claims 1 to 5.
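Claims 1 and 6 describe a two-step pipeline: run a trained detector over the image to get each article's position coordinates and category name, then use those coordinates to look up the article's vector information in the corresponding three-dimensional texture model. A minimal Python sketch of that flow, with a stubbed detector standing in for the trained deep learning network (the names `detect_articles` and `TextureModel`, the box format, and the sample values are illustrative assumptions, not taken from the patent):

```python
# Sketch of the recognition pipeline in claims 1 and 6.
# detect_articles() stands in for the trained deep learning network;
# a real system would run an object detector over the image pixels.

def detect_articles(image):
    """Return (category_name, bounding_box) pairs for each article.

    Stub: boxes are (x_min, y_min, x_max, y_max) in image coordinates.
    """
    return [("vehicle", (120, 40, 260, 140)),
            ("building", (300, 10, 620, 400))]

class TextureModel:
    """Toy 3D texture model: maps a 2D image position to the vector
    information (length/width/height) of the article located there."""

    def __init__(self, entries):
        # entries: list of (x, y, vector_info) samples
        self.entries = entries

    def vector_info_at(self, x, y):
        # Nearest stored sample to the queried image position.
        return min(self.entries,
                   key=lambda e: (e[0] - x) ** 2 + (e[1] - y) ** 2)[2]

def recognize(image, model):
    """Detect articles, then look up each one's vector information."""
    results = []
    for name, (x0, y0, x1, y1) in detect_articles(image):
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2  # box centre as query point
        results.append((name, model.vector_info_at(cx, cy)))
    return results

model = TextureModel([
    (190, 90, {"length": 4.5, "width": 1.8, "height": 1.5}),
    (460, 205, {"length": 40.0, "width": 15.0, "height": 12.0}),
])
print(recognize(None, model))
```

Using the box centre as the lookup point is one simple choice; any point inside the detected box could serve as the query into the model.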
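Claims 2 and 7 describe assembling the training data set: collect images, label the position coordinates and category name of every article, and bundle the images and labels together. A hedged sketch of that assembly step, using COCO-like field names purely as an assumed convention:

```python
# Sketch of the training-set assembly in claims 2 and 7: each image
# contributes at least one labelled article; labels pair a bounding
# box with a category name. Field names follow COCO conventions as
# an assumption, not a requirement of the patent.

def make_annotation(image_id, bbox, category):
    return {"image_id": image_id, "bbox": bbox, "category": category}

def build_training_set(labelled_images):
    """labelled_images: {image_id: [(bbox, category), ...]} with at
    least one labelled article per image, as claim 2 requires."""
    dataset = {"images": sorted(labelled_images), "annotations": []}
    for image_id, articles in labelled_images.items():
        assert articles, "each image must contain at least one article"
        for bbox, category in articles:
            dataset["annotations"].append(
                make_annotation(image_id, bbox, category))
    return dataset

ds = build_training_set({
    "img_001": [((10, 20, 50, 60), "vehicle")],
    "img_002": [((5, 5, 30, 40), "building"), ((60, 60, 90, 95), "tree")],
})
print(len(ds["annotations"]))  # 3
```

The resulting structure is what a standard detection-training loop would consume when training the deep learning network of claim 3's step.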
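Claims 3 and 4 connect the two coordinate spaces: aerial-triangulation matching yields a point cloud in which each point carries three-dimensional coordinates and vector information, and a two-dimensional detection is resolved by finding its corresponding three-dimensional point. One way to sketch that lookup, assuming each cloud point also remembers the image pixel it was reconstructed from (the tuple layout and sample values are illustrative assumptions):

```python
# Sketch of the 2D -> 3D lookup in claims 3 and 4. Assumes the
# aerial-triangulation ("space-three") step recorded, for every
# point, the image pixel (u, v) it was reconstructed from.

from math import hypot

# (u, v, (x, y, z), vector_info) — illustrative data, not from the patent
point_cloud = [
    (100, 200, (12.0, 33.0, 4.5),
     {"length": 4.5, "width": 1.8, "height": 1.5}),
    (400, 380, (55.0, 90.0, 12.0),
     {"length": 40.0, "width": 15.0, "height": 12.0}),
]

def lookup_3d(u, v, cloud):
    """Return the 3D coordinates and vector information of the cloud
    point whose image projection lies closest to the detected (u, v)."""
    best = min(cloud, key=lambda p: hypot(p[0] - u, p[1] - v))
    return best[2], best[3]

xyz, info = lookup_3d(102, 198, point_cloud)
print(xyz, info["height"])  # nearest point is the first entry
```

A production system would index the projected points (e.g. with a k-d tree) instead of scanning the whole cloud, but the correspondence idea is the same.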
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110051122.3A CN112836592A (en) | 2021-01-14 | 2021-01-14 | Image identification method, system, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112836592A true CN112836592A (en) | 2021-05-25 |
Family
ID=75928114
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110051122.3A Pending CN112836592A (en) | 2021-01-14 | 2021-01-14 | Image identification method, system, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112836592A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102914267A (en) * | 2012-09-27 | 2013-02-06 | 无锡天授信息科技有限公司 | System and method for detecting size of moving object |
CN105136064A (en) * | 2015-09-13 | 2015-12-09 | 维希艾信息科技(无锡)有限公司 | Moving object three-dimensional size detection system and method |
CN109344795A (en) * | 2018-10-21 | 2019-02-15 | 江苏跃鑫科技有限公司 | Vehicle speed detector device |
CN110009690A (en) * | 2019-03-23 | 2019-07-12 | 西安电子科技大学 | Binocular stereo vision image measuring method based on polar curve correction |
CN110826499A (en) * | 2019-11-08 | 2020-02-21 | 上海眼控科技股份有限公司 | Object space parameter detection method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111340797B (en) | Laser radar and binocular camera data fusion detection method and system | |
CN106951830B (en) | Image scene multi-object marking method based on prior condition constraint | |
Balali et al. | Multi-class US traffic signs 3D recognition and localization via image-based point cloud model using color candidate extraction and texture-based recognition | |
CN107909107A (en) | Fiber check and measure method, apparatus and electronic equipment | |
Tonioni et al. | Product recognition in store shelves as a sub-graph isomorphism problem | |
Rostianingsih et al. | COCO (creating common object in context) dataset for chemistry apparatus | |
CN104615986A (en) | Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change | |
CN112446363A (en) | Image splicing and de-duplication method and device based on video frame extraction | |
CN106845496B (en) | Fine target identification method and system | |
CN109977983A (en) | Obtain the method and device of training image | |
CN108932509A (en) | A kind of across scene objects search methods and device based on video tracking | |
CN107977592B (en) | Image text detection method and system, user terminal and server | |
CN112016605A (en) | Target detection method based on corner alignment and boundary matching of bounding box | |
CN110458096A (en) | A kind of extensive commodity recognition method based on deep learning | |
CN109857878B (en) | Article labeling method and device, electronic equipment and storage medium | |
CN113065447B (en) | Method and equipment for automatically identifying commodities in image set | |
Sharma | Object detection and recognition using Amazon Rekognition with Boto3 | |
CN107330363B (en) | Rapid internet billboard detection method | |
CN111160374A (en) | Color identification method, system and device based on machine learning | |
CN111797704B (en) | Action recognition method based on related object perception | |
CN113723558A (en) | Remote sensing image small sample ship detection method based on attention mechanism | |
CN113223037A (en) | Unsupervised semantic segmentation method and unsupervised semantic segmentation system for large-scale data | |
CN115937492B (en) | Feature recognition-based infrared image recognition method for power transformation equipment | |
CN117315229A (en) | Target detection method based on characteristic grafting | |
CN105809181A (en) | Logo detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210525 |