CN109635661A - Far-field wireless charging receiving target detection method based on convolutional neural network - Google Patents

Far-field wireless charging receiving target detection method based on convolutional neural network

Info

Publication number
CN109635661A
CN109635661A (application CN201811346972.0A)
Authority
CN
China
Prior art keywords
neural network
convolutional neural network
wireless charging
far field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811346972.0A
Other languages
Chinese (zh)
Other versions
CN109635661B (en)
Inventor
吴敖洲
刘庆文
方稳
张清清
邓浩
刘明清
姜赛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority to CN201811346972.0A
Publication of CN109635661A
Application granted
Publication of CN109635661B
Legal status: Active

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a far-field wireless charging receiving target detection method based on a convolutional neural network. The method adds a camera at the transmitting end of a far-field wireless charging system, captures an image of the region where receivers may be located, and feeds it into a deep convolutional neural network that detects the receiving end in the picture, improving the efficiency of pairing the transmitting end with the receiving end. The main features of the method are: a deep convolutional neural network is used for target detection, so only a camera is needed to obtain pictures, and accuracy depends on the algorithm rather than on error-prone hardware; the network structure is optimized with detection time as the index, improving detection efficiency; following the idea of an image pyramid, connections between feature maps of different resolutions are exploited, improving the ability to detect small targets; the framework is adapted from Mask R-CNN with a ResNet101 backbone network, balancing detection accuracy and speed; and by first pre-positioning with vision and then fine-positioning with laser scanning, the traverse-scan positioning time is shortened to one third.

Description

Far-field wireless charging receiving target detection method based on convolutional neural network
Technical field
The present invention relates to the field of computer vision, and in particular to a far-field wireless charging receiving target detection method based on a convolutional neural network.
Background technique
The power demands of electronic equipment have grown continuously in recent years, while battery endurance has hit a bottleneck, so wireless charging has attracted growing attention. However, traditional wireless charging schemes, such as magnetic resonance, magnetic induction and microwave charging, face challenges in distance, power, safety and mobility. Unlike these traditional schemes, RBC (Resonant Beam Charging), a far-field wireless charging technology, uses a laser beam as the transmission medium and achieves safe, mobile energy transmission over meter-level distances at watt-level power.
Before the energy transmission link is established, the transmitting end of RBC must know the location of the receiving end, i.e., the direction in which to transmit energy. The existing receiver-locating scheme is a traverse-scanning process: the transmitting end scans the entire scene with the laser, and once the receiving end receives the laser signal it sends a feedback signal back to the transmitting end, completing the pairing. The traverse-scanning process is very inefficient; a more efficient scheme is to first pre-position the receiving end to improve pairing efficiency.
Summary of the invention
It is an object of the present invention to overcome the above-mentioned drawbacks of the prior art by applying an object detection method from computer vision to the positioning of far-field wireless charging receivers, improving the efficiency of pairing the transmitting end with the receiving end: a camera captures a picture of the receiver-side scene, the picture is passed to a deep convolutional neural network for detection to obtain regions where a receiver may exist, and the pre-detected regions are then confirmed by laser scanning. The invention thereby provides a far-field wireless charging receiving target detection method based on a convolutional neural network that improves the efficiency of positioning the RBC receiving end.
The purpose of the present invention can be achieved through the following technical solutions:
A far-field wireless charging receiving target detection method based on a convolutional neural network, comprising the following steps:
Step 1: build a mobile phone data set and feed it into a purpose-designed deep convolutional neural network for training, obtaining a trained deep convolutional network for detecting mobile phone targets;
Step 2: embed the trained deep convolutional network into the control system of the far-field wireless charging (RBC) system;
Step 3: input the picture taken by the camera into the trained deep convolutional network, obtaining several positions where mobile phone targets may exist;
Step 4: perform laser scanning on the obtained positions to determine whether the positioning result is accurate.
Further, the model framework of the deep convolutional neural network in step 1 comprises an RPN layer, a RoIAlign layer, a loss function layer and a ResNet101+FPN layer: it extracts image features based on ResNet101, combines FPN to enhance small target detection, and contains the specially designed RPN and RoIAlign layers.
Further, the input of the RPN layer is the feature maps, and the output is the transformed rectangle parameters and objectness scores of k anchor boxes; the transformed rectangle parameters comprise the rectangle centre coordinates (x, y) and the rectangle height h and width w, where k is a natural number.
Further, the loss function layer contains a multi-task loss function, with the formula:
L = L_cls + L_box + L_mask
where L is the multi-task loss function, L_cls the classification error, L_box the detection error and L_mask the segmentation error.
Further, the data set of step 1 comprises a training set and a test set, both consisting of pictures taken under random combinations of different postures, different scenes and different phone models. The number of pictures is 2000, with 1200 pictures in the training set and 800 pictures in the test set. The scenes include classrooms, offices and dormitories; the phone models are 8 models across iPhone and Android; and every picture is annotated with location labels by the Labelme tool.
Further, the deep convolutional neural network of step 1 runs at 5 fps with an accuracy of at least 60%.
Compared with the prior art, the present invention has the following advantages:
(1) Unlike the traditional traverse-scanning locating scheme, this patent first pre-positions by visual detection and then confirms by laser positioning, improving the efficiency of pairing the transmitting end with the receiving end;
(2) Unlike traditional microwave or acoustic positioning schemes, this patent uses a deep convolutional neural network, requiring only the addition of one camera and no other equipment. The core relies on a vision algorithm, so detection accuracy is high and is not degraded by equipment aging;
(3) Unlike conventional target detection networks, this patent designs a receiving target detection network specially adapted to far-field wireless charging, focusing on small target detection and running speed.
Detailed description of the invention
Fig. 1 is a schematic diagram of the model framework of the deep convolutional neural network of the present invention;
Fig. 2 is a schematic flow diagram of the method of the present invention;
Fig. 3 is a schematic diagram of the test results for the detection accuracy of the deep convolutional neural network of the present invention;
Fig. 4 is a schematic diagram of the test results for the detection positioning time of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
Embodiment
The receiving end of the far-field wireless charging (RBC) system can be a computer, a desk lamp or various IoT devices; training a neural network to cover receivers of all kinds would be an enormous undertaking. Since the mobile phone is the most common and most important charging target, this embodiment takes the mobile phone as representative of the RBC receiver. Fig. 2 shows the implementation process of the method: 1) first build a mobile phone data set; 2) feed the data set into the designed deep convolutional neural network for training, obtaining a deep convolutional network specialized in detecting mobile phone targets; 3) embed the trained network into the control system of the RBC; 4) feed the picture taken by the camera into the network for analysis, obtaining several positions where mobile phone targets may exist; 5) perform laser scanning on the pre-positioned locations to confirm the positioning result.
This embodiment borrows and modifies the Mask R-CNN framework, with ResNet101+FPN as the backbone network. Trained on a self-made data set, the resulting deep convolutional model is suited to far-field wireless charging mobile phone detection: it runs at 5 fps with an accuracy above 60%, and shortens the scan positioning time to one third of that of traverse scanning.
The mobile phone detection process (sketched in code after the list):
1. The charging transmitter turns on the camera and obtains an image of the region where the receiver may be located;
2. The image data is passed to the trained deep convolutional neural network for detection;
3. Detection yields several regions that may contain mobile phone targets;
4. The transmitter emits a scanning laser at each detected region and confirms the receiver position.
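The loop below is a minimal Python sketch of these four steps; the camera, detector and scanner interfaces (capture_image, detect_phones, laser_confirm, traverse_scan) are hypothetical placeholders for illustration, not APIs from the patent:

    # Hypothetical end-to-end positioning loop (interface names are illustrative only).
    def locate_receiver(camera, detector, scanner):
        image = camera.capture_image()             # step 1: image of the receiver-side scene
        regions = detector.detect_phones(image)    # steps 2-3: candidate phone regions
        for box in regions:                        # step 4: confirm each candidate by laser
            if scanner.laser_confirm(box):
                return box                         # receiver located in this region
        return scanner.traverse_scan()             # fall back to a full traverse scan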
Convolutional neural networks structure:
This embodiment adjusts the parameters of the network based on Mask R-CNN. The overall model framework is shown in Fig. 1 and comprises an RPN layer, a RoIAlign layer, a loss function layer (later combined with the feature maps and output as the Fixed Feature Map layer) and a ResNet101+FPN layer. Since the Mask branch helps improve target detection accuracy, it is retained in the structure. The construction of each layer is described as follows:
RPN layer:
The RPN network in Mask R-CNN originates from Faster R-CNN. Its input is the feature maps; its output is the transformed rectangle centre coordinates (x, y), height h and width w of k anchor boxes, where k is a natural number.
The anchor box is a design the RPN network uses to predict candidate region rectangles. Each sliding window of the RPN network corresponds to a region of the original picture, and the anchor boxes are the k boxes uniformly initialized on that region of the original picture. The RPN network predicts possible target regions by translating and scaling the anchor boxes.
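As a concrete illustration, the sketch below applies predicted translation and scaling deltas to anchor boxes in the usual R-CNN parameterization; it is a generic reconstruction of how RPN-style box regression works, not code from the patent:

    import numpy as np

    def apply_deltas(anchors, deltas):
        """anchors: (k, 4) as (x, y, w, h); deltas: (k, 4) as (dx, dy, dw, dh)."""
        x, y, w, h = anchors.T
        dx, dy, dw, dh = deltas.T
        # Translate the centre proportionally to the anchor size; scale w/h exponentially.
        cx = x + dx * w
        cy = y + dy * h
        nw = w * np.exp(dw)
        nh = h * np.exp(dh)
        return np.stack([cx, cy, nw, nh], axis=1)

    anchors = np.array([[50.0, 40.0, 32.0, 32.0]])   # one anchor centred at (50, 40)
    deltas = np.array([[0.1, -0.2, 0.0, 0.3]])       # deltas as predicted by an RPN head
    print(apply_deltas(anchors, deltas))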
RoIAlign layer:
The input of the RoIAlign layer is the feature map obtained by convolution, together with the transformation matrices, relative to the original image, of the anchor boxes output by the RPN. By processing the anchor boxes, the corresponding regions are cropped from the feature map, and RoI regions of fixed size are output.
The RoIAlign layer is an improvement on the RoIPooling layer, solving the region mismatch caused by RoIPooling's quantization operations. RoIAlign eliminates quantization and uses bilinear interpolation to obtain image values at pixels whose coordinates are floating-point numbers, turning the whole feature aggregation process into a continuous operation. Notably, the mismatch of RoIPooling causes an acceptable error for target detection, but it has a significant impact on mask prediction; by improving the RoI quantization, the mask prediction result is improved.
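The bilinear sampling at the heart of RoIAlign can be sketched as follows (a minimal NumPy illustration, not the patent's implementation):

    import numpy as np

    def bilinear_sample(feature, x, y):
        """Sample a 2-D feature map at a floating-point coordinate (x, y)."""
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = x0 + 1, y0 + 1
        # Fractional offsets inside the 2x2 neighbourhood.
        fx, fy = x - x0, y - y0
        # Weighted average of the four surrounding integer-grid values: no rounding.
        return (feature[y0, x0] * (1 - fx) * (1 - fy)
                + feature[y0, x1] * fx * (1 - fy)
                + feature[y1, x0] * (1 - fx) * fy
                + feature[y1, x1] * fx * fy)

    fmap = np.arange(16.0).reshape(4, 4)
    print(bilinear_sample(fmap, 1.5, 2.25))  # samples between grid points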
Loss function layer:
The loss function layer contains a multi-task loss function, with the formula:
L = L_cls + L_box + L_mask
where L is the multi-task loss function, L_cls the classification error, L_box the detection error and L_mask the segmentation error.
Each RoI generates k masks of size m × m (k being the number of classes), where m corresponds to the pooled resolution of RoIAlign. L_mask is computed by a per-pixel sigmoid. After this layer, the result is combined with the previous input feature maps to generate the Fixed Feature Map layer.
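A minimal NumPy sketch of this multi-task loss follows; the three terms use the standard Mask R-CNN choices (softmax cross-entropy, smooth L1, per-pixel sigmoid cross-entropy), and all shapes and values are illustrative assumptions:

    import numpy as np

    def softmax_ce(logits, label):
        """Classification error L_cls for one RoI (logits over classes)."""
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return -np.log(p[label])

    def smooth_l1(pred, target):
        """Box regression error L_box."""
        d = np.abs(pred - target)
        return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum()

    def sigmoid_ce(mask_logits, mask_target):
        """Per-pixel sigmoid cross-entropy L_mask over an m x m mask."""
        p = 1.0 / (1.0 + np.exp(-mask_logits))
        return -(mask_target * np.log(p) + (1 - mask_target) * np.log(1 - p)).mean()

    # L = L_cls + L_box + L_mask for one RoI (toy values, m = 28).
    L = (softmax_ce(np.array([0.2, 2.1]), label=1)
         + smooth_l1(np.array([0.1, 0.0, 0.2, -0.1]), np.zeros(4))
         + sigmoid_ce(np.random.randn(28, 28),
                      np.random.randint(0, 2, (28, 28)).astype(float)))
    print(L)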
ResNet101+FPN layer:
The backbone network is composed of ResNet101 combined with FPN. ResNet extracts image features: going from the original image to the feature maps deep in the network, high-level features become richer and richer while spatial information becomes scarcer and scarcer, so the detection of small targets is difficult to realize. FPN is introduced precisely to solve the multi-scale detection problem: with a simple change to the network connections, it significantly improves small object detection performance at essentially no increase in the computation of the original model.
The FPN network modifies the original single network directly: the feature map at each resolution is combined, by element-wise addition, with the feature map of the next coarser resolution upsampled by a factor of two. Through such connections, the feature map used for prediction at every level fuses features of different resolutions and different semantic strengths, and each fused feature map handles object detection at its corresponding resolution. This guarantees that every level has features of suitable resolution and strong semantics, significantly improving small object detection performance.
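The top-down merge can be sketched as below (NumPy, with nearest-neighbour upsampling standing in for the usual 2x upsample; the 1x1 lateral convolutions of a real FPN are omitted for brevity, so this is a structural illustration only):

    import numpy as np

    def upsample2x(f):
        """Nearest-neighbour 2x upsampling of an (H, W) feature map."""
        return np.repeat(np.repeat(f, 2, axis=0), 2, axis=1)

    def fpn_top_down(pyramid):
        """pyramid: list of (H, W) maps from fine to coarse, each half the size."""
        merged = [pyramid[-1]]                       # start from the coarsest level
        for f in reversed(pyramid[:-1]):
            # Element-wise addition of the finer map and the upsampled coarser map.
            merged.append(f + upsample2x(merged[-1]))
        return merged[::-1]                          # back to fine-to-coarse order

    levels = [np.ones((8, 8)), np.ones((4, 4)), np.ones((2, 2))]
    print([m.shape for m in fpn_top_down(levels)])   # every level gets fused features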
This embodiment targets charging receiver detection in real scenes, with two innovations: 1) a focus on small target detection, because in real scenes the receiver can be very small in the picture taken by the camera; 2) good real-time performance and fast detection, which lets the charging system position the target quickly and improves user experience. Compared with traditional methods such as microwave positioning and acoustic positioning, the vision-based coarse phone positioning scheme only requires the transmitting end to have a camera and needs no extra equipment at the receiving end; detection accuracy depends on the deep learning algorithm, with no hardware-induced error to worry about. In the long run, the combination of far-field wireless charging and vision is not limited to phone positioning; more broadly, it can realize intelligent charging and has major application value.
The specific experimental verification process of this embodiment is as follows:
Making the data set:
The problem addressed by the present invention is mobile phone target detection: images are only divided into background and phone. General object detection data sets such as VOC and COCO are suited to multi-class detection, whereas phone target detection serves far-field wireless charging and is applied in real indoor scenes. Pictures were therefore collected in several different indoor scenes such as classrooms, offices and dormitories. To detect smartphones of different styles, the data pictures contain 8 phone models across iPhone and Android (phone shapes are similar nowadays, so 8 models, though few, do not cause serious over-fitting). During collection the phones were kept in as many different postures as possible: held in the hand facing the lens, sideways to the lens, several phones stacked together, and so on. The experiment finally took 1000 raw data pictures; horizontal mirror transformation was added for image augmentation, giving 2000 data pictures in total. The Labelme tool was then used to label the phone location in every picture; finally 1200 pictures were taken at random for training and the remaining 800 pictures for testing.
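The flip-and-split step can be sketched as follows (a minimal Python illustration; the function name and the in-memory image list are assumptions for illustration, not the patent's tooling):

    import random
    import numpy as np

    def augment_and_split(images, train_n=1200, seed=0):
        """images: list of (H, W, 3) arrays; mirrors each, then splits at random."""
        data = images + [np.flip(im, axis=1) for im in images]  # 1000 -> 2000 pictures
        random.Random(seed).shuffle(data)
        return data[:train_n], data[train_n:]   # 1200 training, 800 test pictures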
Training the deep convolutional model:
The weights of the model are initialized from the Mask R-CNN result trained on MS COCO, using the idea of transfer learning: at initialization the parameters are already the result of many days of training on tens of thousands of pictures, so the network already possesses strong feature extraction and classification capabilities, and the 1200 pictures serve to fine-tune on this basis, making the output more specific to phone detection. Training proceeds in two stages: first the head branches of the network model are trained, enhancing the feature extraction ability for phone targets; then all layers are fine-tuned with a small learning rate. The model is trained on the training set to reduce the loss function, then evaluated on the test set, recording the current loss value there, then switched back to training on the training set; finally a suitable training result is taken that performs well on both the training set and the test set, preventing under-fitting and over-fitting.
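A minimal PyTorch-style sketch of this two-stage schedule follows; the toy model stands in for Mask R-CNN, and the layer names and learning rates are illustrative assumptions:

    import torch
    import torch.nn as nn

    # Toy stand-in: "backbone" plays the role of ResNet101+FPN, "head" the RoI heads.
    model = nn.ModuleDict({
        "backbone": nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()),
        "head": nn.Sequential(nn.Flatten(), nn.LazyLinear(2)),
    })

    # Stage 1: freeze the pre-trained backbone, train only the heads.
    for p in model["backbone"].parameters():
        p.requires_grad = False
    opt = torch.optim.SGD(model["head"].parameters(), lr=1e-3)
    # ... run some epochs on the 1200 training pictures ...

    # Stage 2: unfreeze everything and fine-tune all layers at a smaller rate.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(model.parameters(), lr=1e-4)
    # ... continue until the test-set loss stops improving ...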
Model detection accuracy experiment:
Taking the data set and the training process described above, the average precision AP (Average Precision) of the model is tested. The test set of the experiment contains 800 pictures, and the accuracy is tested under different IoU (Intersection over Union) thresholds (from 0.5 to 0.95 with step 0.05, 10 values in total). The concrete results are shown in Fig. 3, from which it can be seen:
The mAP (mean Average Precision) in the figure is the average over the 10 detections; the mAP is 57.66%, much higher than the 38.2% accuracy of Mask R-CNN on the COCO test set. Three reasons lead to the model's high detection rate:
One, the classification task of the model is binary, while common target classification is mostly multi-class, so the preset task is simpler; if receivers of all kinds needed to be detected for charging, the detection accuracy of the model would decline accordingly.
Two, the test set as produced contains many large targets (large in the proportion of image area they occupy, i.e., relative size), and large targets are easier to detect, so the final AP value of the model is higher; subsequent work should pay more attention to small target detection and add more small target image data.
Three, the scenes in which the data set was collected are rather monotonous, mostly classrooms, offices and dormitories. The training set and the test set therefore have redundancy, which inflates the detection accuracy.
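For reference, the IoU measure and the 10-threshold averaging behind this mAP can be sketched as follows (a generic NumPy illustration of COCO-style threshold averaging with a deliberately simplified AP term, not the patent's evaluation code):

    import numpy as np

    def iou(a, b):
        """IoU of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    def ap_at(threshold, ious):
        """Toy AP proxy: fraction of detections whose IoU clears the threshold."""
        return np.mean(np.asarray(ious) >= threshold)

    thresholds = np.arange(0.5, 1.0, 0.05)   # 0.5 to 0.95, 10 values
    ious = [0.62, 0.81, 0.55, 0.93, 0.47]    # toy per-detection IoUs
    print(np.mean([ap_at(t, ious) for t in thresholds]))  # COCO-style average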
Detection positioning time comparison experiment:
Taking a practical application as an example, the time required by traditional traverse scanning is compared with the time required by this model's detection scheme. The traditional traverse-scanning scheme divides the scanning area into N intervals and scans them one by one; scanning one interval takes a time Ts, so the average detection time T1 is as follows, where N is a natural number:
T1 = (N + 1) * Ts / 2
The average detection time of this embodiment is T2: 1) a time Td is first needed to detect the picture, with detection accuracy AP; 2) the detected target regions are scanned first, and if the target is confirmed there the detection is complete; 3) if the target is not detected there, the remaining regions are traversed. T2 is expressed as:
T2 = Td + AP * Ts + (1 - AP) * (Ts + N * Ts / 2)
In the RBC system, N usually takes 64 and each region needs a scanning time of about Ts = 2 s; the model detection of this embodiment needs Td = 0.2 s per picture. Fig. 4 illustrates the detection time T2 under different IoU (Intersection over Union) thresholds (i.e., different AP), with T1 also marked.
T1 is calculated to be 65 s. Taking AP = 0.7 as an example, T2 equals 21.4 seconds, so the model of this embodiment shortens the traditional scan positioning time to one third.
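The two formulas can be checked numerically against the values reported above (plain Python; the formulas themselves are reconstructions consistent with the 65 s and 21.4 s figures):

    # Expected positioning time: traverse scanning vs. vision pre-positioning.
    N, Ts, Td, AP = 64, 2.0, 0.2, 0.7

    T1 = (N + 1) / 2 * Ts                              # pure traverse scan
    T2 = Td + AP * Ts + (1 - AP) * (Ts + N * Ts / 2)   # detect first, traverse on a miss

    print(T1)            # 65.0 s
    print(round(T2, 1))  # 21.4 s, roughly one third of T1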
The above description covers only specific embodiments, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and these modifications or replacements shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A far-field wireless charging receiving target detection method based on a convolutional neural network, characterized by comprising the following steps:
Step 1: build a mobile phone data set and feed it into a purpose-designed deep convolutional neural network for training, obtaining a trained deep convolutional network for detecting mobile phone targets;
Step 2: embed the trained deep convolutional network into the control system of the far-field wireless charging (RBC) system;
Step 3: input the picture taken by the camera into the trained deep convolutional network, obtaining several positions where mobile phone targets may exist;
Step 4: perform laser scanning on the obtained positions to determine whether the positioning result is accurate.
2. The far-field wireless charging receiving target detection method based on a convolutional neural network according to claim 1, characterized in that the model framework of the deep convolutional neural network in step 1 comprises an RPN layer, a RoIAlign layer, a loss function layer and a ResNet101+FPN layer.
3. The far-field wireless charging receiving target detection method based on a convolutional neural network according to claim 2, characterized in that the input of the RPN layer is the feature maps and the output is the transformed rectangle parameters and objectness scores of k anchor boxes, the transformed rectangle parameters comprising the rectangle centre coordinates (x, y) and the rectangle height h and width w, where k is a natural number.
4. The far-field wireless charging receiving target detection method based on a convolutional neural network according to claim 2, characterized in that the loss function layer contains a multi-task loss function with the formula:
L = L_cls + L_box + L_mask
where L is the multi-task loss function, L_cls the classification error, L_box the detection error and L_mask the segmentation error.
5. The far-field wireless charging receiving target detection method based on a convolutional neural network according to claim 1, characterized in that the data set of step 1 comprises a training set and a test set, both consisting of pictures taken under random combinations of different postures, different scenes and different phone models; the number of pictures is 2000, with 1200 pictures in the training set and 800 pictures in the test set; the scenes include classrooms, offices and dormitories; the phone models are 8 models across iPhone and Android; and every picture is annotated with location labels by the Labelme tool.
6. The far-field wireless charging receiving target detection method based on a convolutional neural network according to claim 1, characterized in that the deep convolutional neural network of step 1 runs at 5 fps with an accuracy of at least 60%.
CN201811346972.0A 2018-11-13 2018-11-13 Far-field wireless charging receiving target detection method based on convolutional neural network Active CN109635661B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811346972.0A CN109635661B (en) 2018-11-13 2018-11-13 Far-field wireless charging receiving target detection method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811346972.0A CN109635661B (en) 2018-11-13 2018-11-13 Far-field wireless charging receiving target detection method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN109635661A true CN109635661A (en) 2019-04-16
CN109635661B CN109635661B (en) 2023-07-07

Family

ID=66067824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811346972.0A Active CN109635661B (en) 2018-11-13 2018-11-13 Far-field wireless charging receiving target detection method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN109635661B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110048486A (en) * 2019-05-14 2019-07-23 南京信息工程大学 Intelligent multipoint wireless charging unit and its implementation based on the identification of mobile phone feature
CN110414492A (en) * 2019-08-29 2019-11-05 广东工业大学 A kind of crystalline material image-recognizing method and device
CN111444973A (en) * 2020-03-31 2020-07-24 西安交通大学 Method for detecting commodities on unmanned retail shopping table
CN111950550A (en) * 2020-08-13 2020-11-17 王宗尧 Vehicle frame number identification system based on deep convolutional neural network
CN114143872A (en) * 2021-11-25 2022-03-04 同济大学 Multi-mobile-device positioning method based on unmanned aerial vehicle-mounted WiFi probe
CN115184744A (en) * 2022-06-27 2022-10-14 上海格鲁布科技有限公司 GIS ultrahigh frequency discharge signal detection device and method based on fast-RCNN

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016206648A1 (en) * 2015-06-26 2016-12-29 苏州宝时得电动工具有限公司 Autonomous mobile device and wireless charging system thereof
CN107947390A (en) * 2017-11-22 2018-04-20 青岛众海汇智能源科技有限责任公司 A kind of smart home wireless power supply system and method
CN108284761A (en) * 2018-01-17 2018-07-17 中惠创智无线供电技术有限公司 A kind of wireless charging vehicle, wireless charging intelligence control system and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016206648A1 (en) * 2015-06-26 2016-12-29 苏州宝时得电动工具有限公司 Autonomous mobile device and wireless charging system thereof
CN107947390A (en) * 2017-11-22 2018-04-20 青岛众海汇智能源科技有限责任公司 A kind of smart home wireless power supply system and method
CN108284761A (en) * 2018-01-17 2018-07-17 中惠创智无线供电技术有限公司 A kind of wireless charging vehicle, wireless charging intelligence control system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUIYING LI et al.: "Relief R-CNN: Utilizing Convolutional Features for Fast Object Detection", arXiv:1601.06719v4 [cs.CV] *
LI Ziwei: "Research on the development and energy management of wireless rechargeable sensor nodes", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110048486A (en) * 2019-05-14 2019-07-23 南京信息工程大学 Intelligent multipoint wireless charging unit and its implementation based on the identification of mobile phone feature
CN110414492A (en) * 2019-08-29 2019-11-05 广东工业大学 A kind of crystalline material image-recognizing method and device
CN111444973A (en) * 2020-03-31 2020-07-24 西安交通大学 Method for detecting commodities on unmanned retail shopping table
CN111444973B (en) * 2020-03-31 2022-05-20 西安交通大学 Method for detecting commodities on unmanned retail shopping table
CN111950550A (en) * 2020-08-13 2020-11-17 王宗尧 Vehicle frame number identification system based on deep convolutional neural network
CN114143872A (en) * 2021-11-25 2022-03-04 同济大学 Multi-mobile-device positioning method based on unmanned aerial vehicle-mounted WiFi probe
CN115184744A (en) * 2022-06-27 2022-10-14 上海格鲁布科技有限公司 GIS ultrahigh frequency discharge signal detection device and method based on fast-RCNN
CN115184744B (en) * 2022-06-27 2023-09-05 上海格鲁布科技有限公司 GIS ultrahigh frequency discharge signal detection device and method based on fast-RCNN

Also Published As

Publication number Publication date
CN109635661B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN109635661A (en) A kind of far field wireless charging reception object detection method based on convolutional neural networks
Ates et al. Path loss exponent and shadowing factor prediction from satellite images using deep learning
CN110263705A (en) Towards two phase of remote sensing technology field high-resolution remote sensing image change detecting method
CN111899172A (en) Vehicle target detection method oriented to remote sensing application scene
CN109635748B (en) Method for extracting road characteristics in high-resolution image
CN109858563B (en) Self-supervision characterization learning method and device based on transformation recognition
CN106127204A (en) A kind of multi-direction meter reading Region detection algorithms of full convolutional neural networks
CN110188611A (en) A kind of pedestrian recognition methods and system again introducing visual attention mechanism
CN104346608A (en) Sparse depth map densing method and device
CN113420819B (en) Lightweight underwater target detection method based on CenterNet
CN114066831B (en) Remote sensing image mosaic quality non-reference evaluation method based on two-stage training
CN111369539B (en) Building facade window detecting system based on multi-feature image fusion
CN110390308B (en) Video behavior identification method based on space-time confrontation generation network
CN104063686A (en) System and method for performing interactive diagnosis on crop leaf segment disease images
CN115908517B (en) Low-overlapping point cloud registration method based on optimization of corresponding point matching matrix
Tang et al. Sonar image mosaic based on a new feature matching method
CN115311502A (en) Remote sensing image small sample scene classification method based on multi-scale double-flow architecture
CN109920050A (en) A kind of single-view three-dimensional flame method for reconstructing based on deep learning and thin plate spline
CN113902792A (en) Building height detection method and system based on improved RetinaNet network and electronic equipment
He et al. Automatic detection and mapping of solar photovoltaic arrays with deep convolutional neural networks in high resolution satellite images
CN116503418A (en) Crop three-dimensional target detection method under complex scene
CN116152633A (en) Detection method and system of target detection network based on spatial feature representation
CN116630828A (en) Unmanned aerial vehicle remote sensing information acquisition system and method based on terrain environment adaptation
CN116402690A (en) Road extraction method, system, equipment and medium in high-resolution remote sensing image based on multi-head self-attention mechanism
CN116129118A (en) Urban scene laser LiDAR point cloud semantic segmentation method based on graph convolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant