CN111161227A - Target positioning method and system based on deep neural network - Google Patents

Target positioning method and system based on deep neural network

Info

Publication number
CN111161227A
CN111161227A
Authority
CN
China
Prior art keywords
target
image data
image
feature vector
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911326585.5A
Other languages
Chinese (zh)
Other versions
CN111161227B (en)
Inventor
Inventor not announced (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Shuzhilian Technology Co Ltd
Original Assignee
Chengdu Shuzhilian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Shuzhilian Technology Co Ltd
Priority to CN201911326585.5A
Publication of CN111161227A
Application granted
Publication of CN111161227B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target positioning method and system based on a deep neural network, comprising the following steps: establishing a simulation model matched with a preset object to obtain simulation image data; acquiring real image data and preprocessing it; performing data enhancement on the simulation image data and the real image data to obtain a data set; training a deep neural network; inputting the data set into the trained deep neural network to obtain target category judgment and target region judgment results; screening these results to obtain a round-hole target; extracting the image coordinates of each pixel point in the region corresponding to the round-hole target; and calculating the average of the horizontal and vertical coordinates of the pixel points in the round-hole target region to obtain the image position coordinates of the bulls-eye of the preset object. The invention addresses two difficulties of traditional image-processing methods: positioning targets in low-definition images and recognizing images shot at oblique angles.

Description

Target positioning method and system based on deep neural network
Technical Field
The invention relates to the field of computer vision applications, and in particular to a bulls-eye positioning method and system based on a deep neural network.
Background
In actual production processes, a large number of targets cannot be measured by contact. In harsh industrial environments in particular, some moving objects cannot be measured with conventional sensors, so measuring moving objects in real time in a non-contact manner is an active research topic. In recent years, CCD image sensors have been widely applied in fields such as aerospace, satellite reconnaissance, remote sensing and telemetry, and optical image processing. Bulls-eye identification built on DIM hardware remains one of the problems to be solved in the image recognition field.
Current identification methods struggle to accurately detect target edges with edge-detection algorithms in low-definition images with complex backgrounds, and therefore struggle to locate the target. Moreover, the apparent shape of the target changes with the shooting angle, so a single rule cannot reliably identify it. Because of these limitations, fully automatic bulls-eye identification is currently difficult to achieve; manual processing is required, which is inefficient and yields inconsistent reconnaissance results.
With the rapid development of deep learning, mature applications now exist in fields such as speech recognition and image recognition. Real-time processing of two-dimensional images is a key technology for real-time digital photogrammetry, so deep learning offers a path to automatic target detection and recognition in images.
Disclosure of Invention
The invention provides a deep-neural-network-based target positioning method and system for images captured under real, complex conditions, aiming to solve the difficulty that traditional image-processing methods have in positioning targets in low-definition images and in recognizing images shot at oblique angles.
Because data acquisition in real scenes is difficult and image quality is low, directly using a neural network model yields poor positioning performance. The invention therefore enlarges the training sample size of the neural network model by creating simulation image data, improving both the model's performance and its generalization ability.
In order to achieve the above object, in one aspect, the present invention provides a method for locating a bulls-eye based on a deep neural network, including:
step 1, collecting simulation image data:
step 1.1, creating a 3D cylindrical target model, wherein the upper and lower bottom surfaces of the model are both elliptical, a circular hole is formed in each bottom surface, and the cylinder body is a slender column;
step 1.2, setting a light source to illuminate the cylindrical target model, the light source being a point light source or a natural light source; the light source simulates the shape and appearance of the observed target in a dark scene, with the illumination directed at the target;
step 1.3, setting the material of the cylindrical target model to make the cylindrical target model present the texture of metal material;
step 1.4, setting different viewpoint observation cylindrical target models, and intercepting images to obtain two-dimensional images at different angles;
step 1.5, adding Gaussian noise, Poisson noise, salt-and-pepper noise and multiplicative noise, respectively, to the images obtained in step 1.4.
Step 2, data preprocessing and data enhancement:
step 2.1, carrying out histogram equalization on the marked real image data, and achieving the effect of enhancing contrast by homogenizing the gray value distribution of the image;
step 2.2, performing data enhancement on the real image data by flipping it up-down and left-right and adjusting image brightness, among other operations, to expand the data set;
step 2.3, performing data enhancement on the simulation image data in the same ways, by flipping up-down and left-right and adjusting image brightness, to expand the data set.
Step 3, learning of a network model:
step 3.1, extracting features of the image with a ResNeXt network, obtaining a first feature vector, a second feature vector, a third feature vector and a fourth feature vector respectively;
step 3.2, respectively carrying out up-sampling and convolution operation on the first feature vector, the second feature vector, the third feature vector and the fourth feature vector to obtain a fifth feature vector, a sixth feature vector, a seventh feature vector and an eighth feature vector;
step 3.3, performing pooling operation on the eighth feature vector to obtain a ninth feature vector;
step 3.4, inputting the first feature vector, the second feature vector, the third feature vector and the fourth feature vector into a region proposal network (RPN) to obtain region proposals;
step 3.5, matching the candidate regions against the ground-truth annotated regions to obtain positive and negative samples; the candidate regions are the screened region proposals;
step 3.6, inputting the positive samples, the negative samples, the fifth feature vector, the sixth feature vector, the seventh feature vector, the eighth feature vector and the ninth feature vector into a classification network and a bounding-box regression network to perform target category judgment and target region judgment.
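Steps 3.1 to 3.6 describe a detector that resembles a Mask R-CNN-style architecture: a ResNeXt backbone, feature-pyramid upsampling, an RPN, and classification plus bounding-box regression heads. A minimal sketch of such an architecture, assembled from torchvision components, follows; the backbone variant, class count, input size, and the use of a recent torchvision itself are illustrative assumptions, not the patent's exact configuration:

```python
# Sketch only: a Mask R-CNN-style detector with a ResNeXt+FPN backbone,
# whose multi-scale features feed the RPN, the classification head and the
# bounding-box regression head. Backbone variant and class count are assumed.
import torch
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

backbone = resnet_fpn_backbone(backbone_name="resnext50_32x4d", weights=None)
model = MaskRCNN(backbone, num_classes=2)  # background + "round hole"

model.eval()
with torch.no_grad():
    images = [torch.rand(3, 512, 512)]  # one dummy 512x512 image
    outputs = model(images)             # list of dicts: boxes, labels, scores, masks
print(outputs[0]["boxes"].shape, outputs[0]["masks"].shape)
```

In training mode the same model returns a dictionary of per-subtask losses rather than detections, which is the basis for the loss construction used when training such a network.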
Step 4, target center positioning:
step 4.1, screening the results output in step 3.6 to obtain the 'round hole' target, i.e., the circular recessed area on the top or bottom surface of the cylindrical model, and removing non-round-hole targets according to the recognition result of the deep neural network;
step 4.2, extracting the image coordinates of each pixel point in the round-hole target region, recorded as (x_i, y_i), where i = 1, 2, 3, …, M and M is the number of pixel points in the target region;
step 4.3, calculating the average value of the horizontal and vertical coordinates of the pixel points of the 'round hole' target area to obtain the image position coordinates (X, Y) of the target center, wherein the formula is as follows:
$$X = \frac{1}{M}\sum_{i=1}^{M} x_i, \qquad Y = \frac{1}{M}\sum_{i=1}^{M} y_i$$
Correspondingly, in another aspect, the invention also provides a target positioning system based on a deep neural network, the system comprising:
the simulation image data acquisition unit is used for establishing a simulation model matched with a preset object, and acquiring an image of the simulation model to obtain simulation image data;
the real image data acquisition unit is used for acquiring image data of a preset object, labeling the image data of the preset object, and preprocessing the labeled data to obtain preprocessed real image data;
the data enhancement unit is used for carrying out data enhancement processing on the simulation image data and the real image data to obtain a data set;
the deep neural network training unit is used for obtaining a training set based on the data set, training the deep neural network by using the training set, inputting the data set into the trained deep neural network, and obtaining target category judgment and target area judgment results;
the round-hole target screening unit is used for screening the target category judgment and target region judgment results to obtain the 'round hole' target, i.e., the circular recessed area on the top or bottom surface of the cylindrical model, and removing non-round-hole targets according to the recognition result of the deep neural network;
an image coordinate extraction unit for extracting the image coordinates of each pixel point in the region corresponding to the round-hole target, recorded as (x_i, y_i), where i = 1, 2, 3, …, M and M is the number of pixel points in the region corresponding to the round-hole target;
and the bulls-eye image position coordinate calculation unit is used for calculating the average of the horizontal and vertical coordinates of the pixel points in the round-hole target region to obtain the image position coordinates (X, Y) of the bulls-eye of the preset object.
One or more technical schemes provided by the invention at least have the following technical effects or advantages:
the method and the system have better identification accuracy rate for the images acquired under various complex conditions and have certain noise resistance. More than 99% of identification accuracy and recall rate in 8 pixel points can be achieved for ideal simulation image data. The real scene image data under the complex condition can reach more than 95% of recognition accuracy and recall rate in 10 pixel points. The method and the system use the deep neural network model to carry out target positioning on the cylindrical model under the complex background condition, thereby obtaining remarkable effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention;
FIG. 1 is a schematic flow chart of a method for locating a target based on a deep neural network according to the present invention;
fig. 2 is a schematic composition diagram of a target location system based on a deep neural network according to the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflicting with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described and thus the scope of the present invention is not limited by the specific embodiments disclosed below.
Examples
Fig. 1 is a flow chart of a bulls-eye positioning method according to an embodiment of the invention. As shown in fig. 1, the data sources of the bulls-eye localization method include simulation image data and real image data. The method comprises the following specific steps:
step 1, collecting simulation image data:
Step 1.1: imitating the cylindrical model in the real image data, create a 3D cylindrical target model using OpenGL. The upper and lower bottom surfaces of the model are both elliptical, a circular hole is formed in each bottom surface, and the cylinder body is a slender column. In a preferred embodiment of the invention, the ratio of the radius of the circular hole to the radius of the upper/lower bottom surface is, for example, 1:2;
step 1.2, point light sources with different brightness are arranged to irradiate the cylindrical model, and the type of the light source is a point light source or a natural light source;
step 1.3, setting a cylindrical model material to make the cylindrical model material present a metal texture;
step 1.4, setting different viewpoint observation cylindrical models, and intercepting images to obtain two-dimensional images at different angles; in a preferred embodiment of the invention, the included angle between the cylindrical model in the image and the vertical direction is kept between 0 and 60 degrees;
and 1.5, marking the round hole of the cylindrical model to obtain a binary mask image, and preparing for subsequent model training.
Step 1.6: add Gaussian noise, Poisson noise, salt-and-pepper noise and multiplicative noise, respectively, to the two-dimensional images generated in step 1.4 to simulate real image data.
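As a concrete illustration of step 1.6, the four noise types can be generated with NumPy; the noise parameters below are illustrative assumptions rather than values prescribed by the invention:

```python
import numpy as np

def add_noises(img: np.ndarray, rng=np.random.default_rng(0)):
    """Return four noisy copies of a float image in [0, 1]; parameters are illustrative."""
    gaussian = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)
    # Poisson noise: scale to pseudo photon counts, sample, scale back.
    counts = 255.0
    poisson = np.clip(rng.poisson(img * counts) / counts, 0.0, 1.0)
    # Salt-and-pepper noise: force a small fraction of pixels to 0 or 1.
    salt_pepper = img.copy()
    u = rng.random(img.shape)
    salt_pepper[u < 0.01] = 0.0
    salt_pepper[u > 0.99] = 1.0
    # Multiplicative (speckle-like) noise.
    multiplicative = np.clip(img * (1.0 + rng.normal(0.0, 0.1, img.shape)), 0.0, 1.0)
    return gaussian, poisson, salt_pepper, multiplicative
```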
Step 2, acquiring real image data:
step 2.1, collecting image data captured by a CCD in a real scene;
step 2.2, aiming at the image collected in the step 2.1, manually marking a round hole of the cylindrical model, and processing to obtain a binary mask image to prepare for subsequent model training;
and 2.3, carrying out histogram equalization on the image acquired in the step 2.1, and achieving the effect of enhancing the contrast by homogenizing the gray value distribution of the image.
And step 3, data enhancement:
step 3.1, flipping the real-scene image data up-down and left-right and adjusting picture brightness to expand the data set;
step 3.2, flipping the simulation image data up-down and left-right and adjusting picture brightness to expand the data set.
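A compact sketch of the preprocessing of step 2.3 and the augmentation of steps 3.1 and 3.2, using OpenCV and NumPy, is given below; it assumes 8-bit grayscale input, and the brightness offset of 30 gray levels is an illustrative choice. Note that when flipping training images, the binary mask labels must be flipped identically:

```python
import cv2
import numpy as np

def preprocess_and_augment(gray: np.ndarray):
    """Equalize an 8-bit grayscale image, then return flipped and brightness-shifted variants."""
    eq = cv2.equalizeHist(gray)   # homogenize the gray-value distribution
    flipped_ud = cv2.flip(eq, 0)  # up-down flip
    flipped_lr = cv2.flip(eq, 1)  # left-right flip
    # Brightness shifts with saturation at [0, 255].
    brighter = np.clip(eq.astype(np.int16) + 30, 0, 255).astype(np.uint8)
    darker = np.clip(eq.astype(np.int16) - 30, 0, 255).astype(np.uint8)
    return eq, flipped_ud, flipped_lr, brighter, darker
```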
Step 4, training a deep neural network:
step 4.1, the deep neural network model uses Smooth L1 loss and cross-entropy loss as its principal losses; a loss function is constructed for each of the 5 subtasks, and the values of the 5 loss functions are summed to obtain the total objective loss function of the whole model;
step 4.2, training the model with stochastic gradient descent (SGD) and setting the number of iterations e for the whole training run;
step 4.3, monitoring the validation-set loss in real time during training and governing model updates with an early-stopping method; training stops when the number of training cycles equals the iteration count e, or when the loss has stopped decreasing for several consecutive cycles, at which point the model is considered optimal. In a preferred embodiment of the invention, the number of iterations is 100.
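A schematic training loop matching steps 4.1 to 4.3 is sketched below; the data loaders, learning rate, momentum, and patience value are placeholder assumptions. For a torchvision-style detection model, the forward pass in training mode returns a dictionary of per-subtask losses (five for a Mask R-CNN-style model, consistent with the 5 subtask losses of step 4.1), which are summed into the total objective:

```python
import torch

def train_with_early_stopping(model, train_loader, val_loader, e=100, patience=5):
    """SGD training with validation-loss early stopping (sketch; hyper-parameters assumed)."""
    opt = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(e):
        model.train()
        for images, targets in train_loader:
            loss_dict = model(images, targets)  # per-subtask losses
            loss = sum(loss_dict.values())      # total objective: sum of the subtask losses
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Validation loss (detection models emit loss dicts only in train mode).
        with torch.no_grad():
            val = sum(sum(model(i, t).values()).item() for i, t in val_loader)
        if val < best_val:
            best_val, bad_epochs = val, 0
            torch.save(model.state_dict(), "best.pt")
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # loss stopped decreasing: early stop
                break
```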
Step 5, area segmentation:
step 5.1, calling the deep neural network model trained in the step 4;
and 5.2, inputting the real image data and the simulation image data in the data set into the deep neural network model for image region detection.
Step 6, target calculation:
step 6.1, extracting a 'round hole' target area in the detection result of the step 5.2;
step 6.2, recording the image coordinates of each pixel point in the round-hole target region as (x_i, y_i), where i = 1, 2, 3, …, M and M is the number of pixel points in the region corresponding to the round-hole target;
step 6.3, calculating the average of the horizontal and vertical coordinates of the pixel points of the 'round hole' target region to obtain the image position coordinates (X, Y) of the bulls-eye, with the formula:
$$X = \frac{1}{M}\sum_{i=1}^{M} x_i, \qquad Y = \frac{1}{M}\sum_{i=1}^{M} y_i$$
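Given the binary mask of the detected 'round hole' region, steps 6.2 and 6.3 reduce to averaging pixel indices; a minimal NumPy sketch follows (the function name is assumed for illustration):

```python
import numpy as np

def bullseye_from_mask(mask: np.ndarray):
    """Average the image coordinates of the nonzero 'round hole' pixels to get (X, Y)."""
    ys, xs = np.nonzero(mask)    # coordinates (x_i, y_i) of the M region pixels
    if xs.size == 0:
        return None              # no round-hole region detected
    return xs.mean(), ys.mean()  # X = mean of x_i, Y = mean of y_i
```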
referring to fig. 2, an embodiment of the present invention further provides a target positioning system based on a deep neural network, the system including:
the simulation image data acquisition unit is used for establishing a simulation model matched with a preset object, and acquiring an image of the simulation model to obtain simulation image data;
the real image data acquisition unit is used for acquiring image data of a preset object, labeling the image data of the preset object, and preprocessing the labeled data to obtain preprocessed real image data;
the data enhancement unit is used for performing data enhancement processing on the simulation image data and the real image data to obtain a data set;
the deep neural network training unit is used for obtaining a training set based on the data set, training the deep neural network by using the training set, inputting the data set into the trained deep neural network, and obtaining target category judgment and target area judgment results;
the round hole target screening unit is used for screening target type judgment and target area judgment results to obtain a round hole target;
an image coordinate extraction unit for extracting the image coordinates of each pixel point in the region corresponding to the round-hole target, recorded as (x_i, y_i), where i = 1, 2, 3, …, M and M is the number of pixel points in the region corresponding to the round-hole target;
and the bulls-eye image position coordinate calculation unit is used for calculating the average of the horizontal and vertical coordinates of the pixel points in the round-hole target region to obtain the image position coordinates (X, Y) of the bulls-eye of the preset object.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A target center positioning method based on a deep neural network is characterized by comprising the following steps:
establishing a simulation model matched with a preset object, and acquiring an image of the simulation model to obtain simulation image data;
acquiring image data of a preset object, labeling the image data of the preset object, and preprocessing the labeled data to obtain preprocessed real image data;
carrying out data enhancement processing on the simulation image data and the real image data to obtain a data set;
obtaining a training set based on the data set, training the deep neural network by using the training set, and inputting the data set into the trained deep neural network to obtain target category judgment and target area judgment results;
screening target type judgment and target area judgment results to obtain a round hole target;
extracting the image coordinates of each pixel point in the region corresponding to the round-hole target, and recording them as (x_i, y_i), wherein i = 1, 2, 3, …, M; M is the number of pixel points in the region corresponding to the round-hole target;
and calculating the average value of the horizontal and vertical coordinates of the pixel points in the circular hole target area to obtain the image position coordinates (X, Y) of the target center of the preset object.
2. The method for locating the bulls-eye based on the deep neural network as claimed in claim 1, wherein establishing a simulation model matched with a preset object and acquiring an image of the simulation model to obtain simulation image data specifically comprises:
creating a simulation model;
setting a light source illumination direction simulation model;
setting different viewpoint observation simulation models, and intercepting images to obtain two-dimensional images at different angles;
and adding noise to the obtained two-dimensional image to obtain simulated image data.
3. The method as claimed in claim 2, wherein the simulation model is a 3D cylindrical target model, the upper and lower bottom surfaces of the cylindrical target model are elliptical and are provided with circular holes, and the material of the cylindrical target model presents a metallic texture.
4. The method for locating the bulls-eye based on the deep neural network as claimed in claim 2, wherein the light source is set to illuminate the cylindrical target model to simulate the shape and characteristics of the target observed in a dark scene, and the type of the light source is a point light source or a natural light source.
5. The method as claimed in claim 2, wherein Gaussian noise, Poisson noise, salt and pepper noise or multiplicative noise is added to the obtained two-dimensional image to obtain simulated image data.
6. The method of claim 1, wherein the pre-processing of the labeled data is performed by histogram equalization on the labeled image data.
7. The method of claim 1, wherein the simulation image data and the real image data are subjected to data enhancement by flipping them and adjusting image brightness to expand the data set.
8. The method for locating the bulls-eye based on the deep neural network as claimed in claim 1, wherein obtaining a training set based on the data set, training the deep neural network by using the training set, and inputting the data set into the trained deep neural network to obtain target category judgment and target region judgment results specifically comprises:
extracting the features of the simulated image and the real image by using a ResNeXt network, and respectively extracting a first feature vector, a second feature vector, a third feature vector and a fourth feature vector;
respectively performing up-sampling and convolution operation on the first feature vector, the second feature vector, the third feature vector and the fourth feature vector to obtain a fifth feature vector, a sixth feature vector, a seventh feature vector and an eighth feature vector;
performing pooling operation on the eighth feature vector to obtain a ninth feature vector;
inputting the first feature vector, the second feature vector, the third feature vector and the fourth feature vector into a region proposal network to obtain region proposals, and obtaining candidate regions based on the region proposals;
matching the candidate area with the real labeling area to obtain a positive sample and a negative sample;
and inputting the positive sample, the negative sample, the fifth feature vector, the sixth feature vector, the seventh feature vector, the eighth feature vector and the ninth feature vector into a classification network and a bounding-box regression network to perform target category judgment and target region judgment, so as to obtain the target category judgment and target region judgment results.
9. The method for locating the bulls-eye based on the deep neural network of claim 1, wherein the calculation formula of the image position coordinates (X, Y) of the bulls-eye of the round hole target area is as follows:
$$X = \frac{1}{M}\sum_{i=1}^{M} x_i, \qquad Y = \frac{1}{M}\sum_{i=1}^{M} y_i$$
10. a target positioning system based on a deep neural network, the system comprising:
the simulation image data acquisition unit is used for establishing a simulation model matched with a preset object, and acquiring an image of the simulation model to obtain simulation image data;
the real image data acquisition unit is used for acquiring image data of a preset object, labeling the image data of the preset object, and preprocessing the labeled data to obtain preprocessed real image data;
the data enhancement unit is used for carrying out data enhancement processing on the simulation image data and the real image data to obtain a data set;
the deep neural network training unit is used for obtaining a training set based on the data set, training the deep neural network by using the training set, inputting the data set into the trained deep neural network, and obtaining target category judgment and target area judgment results;
the round hole target screening unit is used for screening target type judgment and target area judgment results to obtain a round hole target;
an image coordinate extraction unit for extracting the image coordinates of each pixel point in the region corresponding to the round-hole target and recording them as (x_i, y_i), wherein i = 1, 2, 3, …, M; M is the number of pixel points in the region corresponding to the round-hole target;
and the bull's-eye image position coordinate calculating unit is used for calculating the average value of horizontal and vertical coordinates of pixel points in the circular hole target area to obtain the image position coordinates (X, Y) of the bull's-eye of the preset object.
CN201911326585.5A 2019-12-20 2019-12-20 Target positioning method and system based on deep neural network Active CN111161227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911326585.5A CN111161227B (en) 2019-12-20 2019-12-20 Target positioning method and system based on deep neural network


Publications (2)

Publication Number Publication Date
CN111161227A (en) 2020-05-15
CN111161227B CN111161227B (en) 2022-09-06

Family

ID=70557493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911326585.5A Active CN111161227B (en) 2019-12-20 2019-12-20 Target positioning method and system based on deep neural network

Country Status (1)

Country Link
CN (1) CN111161227B (en)



Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170337682A1 (en) * 2016-05-18 2017-11-23 Siemens Healthcare Gmbh Method and System for Image Registration Using an Intelligent Artificial Agent
CN106597425A (en) * 2016-11-18 2017-04-26 中国空间技术研究院 Radar object positioning method based on machine learning
CN108229252A (en) * 2016-12-15 2018-06-29 腾讯科技(深圳)有限公司 A kind of pupil positioning method and system
JP2019139346A (en) * 2018-02-07 2019-08-22 シャープ株式会社 Image recognition device, image recognition system and program
JP2019185665A (en) * 2018-04-17 2019-10-24 日本電信電話株式会社 Three-dimensional point cloud label learning device, three-dimensional point cloud label estimation device, three-dimensional point cloud label learning method, three-dimensional point cloud label estimation method, and program
CN108960134A (en) * 2018-07-02 2018-12-07 广东容祺智能科技有限公司 A kind of patrol UAV image mark and intelligent identification Method
CN109190625A (en) * 2018-07-06 2019-01-11 同济大学 A kind of container number identification method of wide-angle perspective distortion
CN109446973A (en) * 2018-10-24 2019-03-08 中车株洲电力机车研究所有限公司 A kind of vehicle positioning method based on deep neural network image recognition
CN109685066A (en) * 2018-12-24 2019-04-26 中国矿业大学(北京) A kind of mine object detection and recognition method based on depth convolutional neural networks
CN110059676A (en) * 2019-04-03 2019-07-26 北京航空航天大学 A kind of aviation plug hole location recognition methods based on deep learning Yu multiple target distribution sorting
CN110321750A (en) * 2019-04-23 2019-10-11 成都数之联科技有限公司 Two-dimensional code identification method and system in a kind of picture
CN110223352A (en) * 2019-06-14 2019-09-10 浙江明峰智能医疗科技有限公司 A kind of medical image scanning automatic positioning method based on deep learning

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
HASMILA A. OMAR: "Quantification of cardiac bull's-eye map based on principal strain analysis for myocardial wall motion assessment in stress echocardiography", 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018) *
SAINING XIE: "Aggregated Residual Transformations for Deep Neural Networks", 2017 IEEE CVPR *
SMALLACTIVE: "Indoor visible-light positioning algorithm based on deep neural networks", https://blog.csdn.net/qq_41149269/article/details/82918587 *
卢朝阳: "An automatic target-scoring system for shooting competitions using image recognition", Computer Engineering & Science *
尚春红 et al.: "Research on military target recognition algorithms in images with complex backgrounds", Journal of Computer Applications *
李秋洁 et al.: "Recognition of street-tree target point clouds based on variable-scale grid indexing and machine learning", Transactions of the Chinese Society for Agricultural Machinery *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797832A (en) * 2020-07-14 2020-10-20 成都数之联科技有限公司 Automatic generation method and system of image interesting region and image processing method
CN111797832B (en) * 2020-07-14 2024-02-02 成都数之联科技股份有限公司 Automatic generation method and system for image region of interest and image processing method
CN112052946A (en) * 2020-07-21 2020-12-08 重庆邮电大学移通学院 Neural network training method and related product

Also Published As

Publication number Publication date
CN111161227B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN113450307B Product edge defect detection method
CN113160192B Vision-based appearance defect detection method and device for snow groomers under complex backgrounds
CN105740899B Combined optimization method for machine-vision image feature point detection and matching
CN111640157B Checkerboard corner detection method based on neural network and application thereof
CN104331699B Fast search and comparison method for planarized three-dimensional point clouds
CN106446894B Method for locating spherical target objects based on contour recognition
CN104850850B Binocular stereo vision image feature extraction method combining shape and color
CN110210463A Radar target image detection method based on Precise ROI-Faster R-CNN
CN107392929B Intelligent target detection and size measurement method based on a human-vision model
CN109344882A Convolutional-neural-network-based object pose recognition method for robot control
CN111507976B Defect detection method and system based on multi-angle imaging
CN109118528A Singular value decomposition image matching algorithm based on region division
CN111062915A Real-time steel pipe defect detection method based on an improved YOLOv3 model
CN108229458A Intelligent flame recognition method based on motion detection and multi-feature extraction
CN108985337A Product surface scratch detection method based on image deep learning
CN110232389A Stereoscopic vision navigation method based on invariant green-crop feature extraction
CN107230203A Casting defect recognition method based on the human visual attention mechanism
CN110334727B Intelligent matching detection method for tunnel cracks
CN111161227B Target positioning method and system based on deep neural network
CN108090434A Rapid ore identification method
CN105957116B Design and decoding method for frequency-based dynamic coded points
CN110866915A Circular inkstone quality detection method based on metric learning
CN114998308A Defect detection method and system based on photometric stereo
CN109726660A Remote sensing image ship identification method
CN114758222A Concrete pipeline damage identification and volume quantification method based on PointNet++ neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 610000 No. 270, floor 2, No. 8, Jinxiu street, Wuhou District, Chengdu, Sichuan

Applicant after: Chengdu shuzhilian Technology Co.,Ltd.

Address before: 610000 No.2, 4th floor, building 1, Jule Road intersection, West 1st section of 1st ring road, Wuhou District, Chengdu City, Sichuan Province

Applicant before: CHENGDU SHUZHILIAN TECHNOLOGY Co.,Ltd.

GR01 Patent grant