CN109448061A - An underwater binocular vision positioning method without camera calibration - Google Patents

An underwater binocular vision positioning method without camera calibration

Info

Publication number
CN109448061A
Authority
CN
China
Prior art keywords
particle
neural network
dimensional coordinate
target point
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811174593.8A
Other languages
Chinese (zh)
Inventor
高剑
封磊
李晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201811174593.8A priority Critical patent/CN109448061A/en
Publication of CN109448061A publication Critical patent/CN109448061A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes an underwater binocular vision positioning method that requires no camera calibration. An underwater binocular vision system captures the image-frame two-dimensional coordinates of a target point in the left and right cameras, which serve as the input; a three-dimensional positioning system measures the target point's world-frame three-dimensional coordinates relative to the camera, which serve as the desired output. A particle swarm algorithm optimizes the initial weights and thresholds of a BP neural network, the network is trained on multiple groups of data until the mean square error of the output converges, the vision measurement model of the binocular camera is established, and the model is fitted by the training result of the neural network. The method requires no prior calibration of the vision system: using the particle-swarm-optimized neural network, the world-frame three-dimensional coordinates of a target point are obtained directly from the image-frame two-dimensional coordinates of the point in the left and right cameras of the underwater binocular vision system. The method can accurately locate underwater feature targets.

Description

An underwater binocular vision positioning method without camera calibration
Technical field
The present invention relates to the field of vision technology for underwater vehicles, and in particular to an underwater binocular vision positioning method without camera calibration, in which an underwater binocular vision system locates a feature target by vision measurement: with the left and right images of the binocular vision system as input, a neural network obtained by machine learning outputs the three-dimensional spatial coordinates of the target relative to the left camera.
Background art
For a long time, research has been devoted to acoustic positioning technology, which has achieved good results in medium- and long-range underwater target positioning. However, because the data update rate of acoustic positioning systems is low, their stability and precision in short-range measurement still need improvement. To meet the needs of underwater operations, short-range target positioning must be realized, and visual sensors are well suited to short-range, high-precision target positioning.
Traditional vision positioning mainly uses the principle of three-dimensional reconstruction to convert the two-dimensional coordinates of a target object in the image frame into three-dimensional coordinates in the world frame. It has several limitations and defects. First, the acquisition of "parallax" depends on the accuracy with which the camera model is established; in engineering practice, both the establishment of the camera model and the removal of camera distortion are determined by the result of camera calibration, but the calibration process is complicated and the calibration result carries a certain error. Second, existing neural-network-based vision positioning methods select the initial thresholds and initial weights of the network at random, which leads to slow training and slow convergence.
Summary of the invention
In view of the deficiencies of the prior art, the present invention proposes an underwater binocular vision positioning method without camera calibration. The method requires no prior calibration of the vision system: using a neural network optimized by a particle swarm algorithm, the world-frame three-dimensional coordinates of a target point are obtained directly from the image-frame two-dimensional coordinates of the point in the left and right cameras of the underwater binocular vision system. The method can accurately locate underwater feature targets.
The general principle is as follows:
An underwater binocular vision system captures the image-frame two-dimensional coordinates of a target point in the left and right cameras, which serve as the input; a three-dimensional positioning system measures the target point's world-frame three-dimensional coordinates relative to the camera, which serve as the desired output. A particle swarm algorithm optimizes the initial weights and thresholds of a BP neural network, the network is trained on multiple groups of data until the mean square error of the output converges, the vision measurement model of the binocular camera is established, and the model is fitted by the training result of the neural network.
The technical solution of the present invention is as follows:
An underwater binocular vision positioning method without camera calibration, characterized by comprising the following steps:
Step 1: establish an underwater binocular vision system in which the left and right cameras are placed in parallel and the resolution of each frame of the underwater cameras is higher than 2 megapixels. Use the system to photograph multiple groups of target points in space. For each group of target points, measure the world-frame three-dimensional coordinate P(X, Y, Z) of the target point with a three-dimensional positioning system, and from the images captured by the underwater binocular vision system obtain, by a corner detection algorithm, the two-dimensional coordinates (u_dl, v_dl), (u_dr, v_dr) of the target point in the left and right camera image frames.
Step 2: establish the neural network:
A three-layer BP neural network is established with N inputs, the i-th input being x_i, giving N input nodes. The hidden-layer output is

$$z_k = f_1\left(\sum_{i=1}^{N} v_{ki}\,x_i + b_k\right),\qquad k = 1,\dots,Q$$

where z_k is the output of the k-th hidden node, f_1(s) is the hidden-layer activation function, v_ki is the weight from the input layer to the hidden layer, x_i is the i-th input, b_k is the offset threshold, and Q is the number of hidden nodes.
The output-layer output is

$$y_j = f_2\left(\sum_{k=1}^{Q} w_{jk}\,z_k + b_j\right),\qquad j = 1,\dots,M$$

where y_j is the output of the j-th output node, f_2(s) is the output-layer activation function, w_jk is the weight between the hidden layer and the output layer, b_j is the offset threshold, and M is the number of output nodes.
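To make the mapping concrete, the following is a minimal illustrative sketch of this forward pass (not part of the original filing), assuming NumPy and a Sigmoid activation for both layers, as chosen in the embodiment below; the function and variable names are assumptions for illustration.

```python
import numpy as np

def sigmoid(s):
    """Sigmoid activation, used here for both f1 and f2."""
    return 1.0 / (1.0 + np.exp(-s))

def forward(x, V, b_hid, W, b_out):
    """One forward pass of the three-layer BP network.

    x:     (N,)   input vector (the four image-frame coordinates)
    V:     (Q, N) input-to-hidden weights v_ki
    b_hid: (Q,)   hidden offset thresholds b_k
    W:     (M, Q) hidden-to-output weights w_jk
    b_out: (M,)   output offset thresholds b_j
    """
    z = sigmoid(V @ x + b_hid)  # hidden-layer outputs z_k
    y = sigmoid(W @ z + b_out)  # output-layer outputs y_j
    return z, y
```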
Step 3: the initial weight and threshold value of BP neural network are chosen using particle swarm algorithm:
Step 3.1: initialize the particle swarm:
One group of target points is chosen at random; its world-frame three-dimensional coordinates and the two-dimensional coordinates of the group's target points in the left and right camera image frames form the training input. The connection weights v_ki, w_jk and offset thresholds b_k, b_j between the neural network nodes are encoded as a vector. Let n be the number of particles in the swarm and D the search-space dimension, where D is the total number of connection weights and offset thresholds. For the q-th particle, the position is X_q = [x_q1, x_q2, ..., x_qD]^T and the velocity is V_q = [v_q1, v_q2, ..., v_qD]^T, distributed in the interval [-V_max, V_max]; the individual-best (Pbest) vector is P_q = [p_q1, p_q2, ..., p_qD]^T, and the global-best (Gbest) vector is P_g = [p_g1, p_g2, ..., p_gD]^T.
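A sketch of the particle encoding described in step 3.1, assuming the weights and thresholds are flattened into one D-dimensional vector; the uniform initialization ranges and helper names are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def encode(V, b_hid, W, b_out):
    # Flatten all connection weights and offset thresholds into one particle vector.
    return np.concatenate([V.ravel(), b_hid, W.ravel(), b_out])

def decode(p, N, Q, M):
    # Recover V (QxN), b_hid (Q,), W (MxQ), b_out (M,) from a particle position.
    i = 0
    V = p[i:i + Q * N].reshape(Q, N); i += Q * N
    b_hid = p[i:i + Q];               i += Q
    W = p[i:i + M * Q].reshape(M, Q); i += M * Q
    b_out = p[i:i + M]
    return V, b_hid, W, b_out

def init_swarm(n, D, v_max):
    X = np.random.uniform(-1.0, 1.0, (n, D))        # positions X_q (assumed range)
    Vel = np.random.uniform(-v_max, v_max, (n, D))  # velocities in [-Vmax, Vmax]
    return X, Vel
```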
Step 3.2: compute the particle fitness:
For the q-th particle, the fitness F[q] is the mean square error between the particle's output and the desired output:

$$F[q] = \frac{1}{M}\sum_{j=1}^{M}\left(y_{qj} - t_{qj}\right)^2$$

where t_q is the world-frame three-dimensional coordinate of this group of target points measured by the three-dimensional positioning system, and y_q is the world-frame three-dimensional coordinate of this group computed by a forward pass of the neural network after the connection weights v_ki, w_jk and offset thresholds b_k, b_j have been taken from the position of the q-th particle.
Compare the fitness F[q] with the particle's individual best Pbest; if F[q] < Pbest, replace Pbest with F[q]. Compare the fitness F[q] with the global best Gbest; if F[q] < Gbest, replace Gbest with F[q].
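A sketch of the fitness evaluation and best-value bookkeeping of step 3.2, reusing the hypothetical decode and forward helpers from the sketches above; the per-coordinate averaging follows the mean-square-error formula given here.

```python
import numpy as np

def fitness(p, x, t, N, Q, M):
    """Mean square error between the network output and the measured coordinate t."""
    V, b_hid, W, b_out = decode(p, N, Q, M)
    _, y = forward(x, V, b_hid, W, b_out)
    return np.mean((y - t) ** 2)

def update_bests(F, X, pbest_val, pbest_pos, gbest_val, gbest_pos):
    # Replace Pbest / Gbest whenever a lower fitness is found.
    for q in range(len(F)):
        if F[q] < pbest_val[q]:
            pbest_val[q] = F[q]
            pbest_pos[q] = X[q].copy()
        if F[q] < gbest_val:
            gbest_val = F[q]
            gbest_pos = X[q].copy()
    return pbest_val, pbest_pos, gbest_val, gbest_pos
```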
Step 3.3: update the position and velocity of each particle:
For the q-th particle, in the (k+1)-th iteration the velocity and position are updated as

$$V_{qd}^{k+1} = \omega V_{qd}^{k} + c_1 r_1\left(P_{qd} - X_{qd}^{k}\right) + c_2 r_2\left(P_{gd} - X_{qd}^{k}\right)$$

$$X_{qd}^{k+1} = X_{qd}^{k} + V_{qd}^{k+1}$$

where ω is the inertia weight, d ∈ (1, D), q ∈ (1, n), V_qd is the particle velocity, X_qd is the particle position, c_1 and c_2 are learning factors, and r_1 and r_2 are random numbers in the interval [0, 1].
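The update formulas transcribe directly into code. A sketch assuming the swarm arrays from the initialization sketch above; the clamp of velocities back into [-Vmax, Vmax] is a common PSO convention and an assumption here, as are the sample values of ω, c_1, and c_2.

```python
import numpy as np

def pso_step(X, Vel, pbest_pos, gbest_pos, omega=0.7, c1=2.0, c2=2.0, v_max=1.0):
    n, D = X.shape
    r1 = np.random.rand(n, D)  # r1, r2 ~ U[0, 1], drawn per particle and dimension
    r2 = np.random.rand(n, D)
    Vel = (omega * Vel
           + c1 * r1 * (pbest_pos - X)
           + c2 * r2 * (gbest_pos - X))
    Vel = np.clip(Vel, -v_max, v_max)  # keep velocities inside [-Vmax, Vmax]
    X = X + Vel
    return X, Vel
```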
Step 3.4: repeat steps 3.2 and 3.3 until the mean square error is less than the set value or the maximum number of iterations is reached, yielding the particle position corresponding to this group of target points.
Step 3.5: repeat steps 3.1 to 3.4, training over multiple groups of target points; the mean of the resulting particle positions serves as the initial values of the BP neural network's connection weights v_ki, w_jk and offset thresholds b_k, b_j.
Step 4: train the neural network:
Step 4.1: determine the error:
Suppose there are p input samples, each an N-dimensional input, denoted χ_1, χ_2, ..., χ_h, ..., χ_p. The squared error of the h-th sample is

$$E_h = \frac{1}{2}\sum_{j=1}^{M}\left(t_j^{h} - y_j^{h}\right)^2$$

where t^h is the desired output and y^h is the corresponding actual output for the h-th sample. The global error over the p samples is

$$E = \sum_{h=1}^{p} E_h = \frac{1}{2}\sum_{h=1}^{p}\sum_{j=1}^{M}\left(t_j^{h} - y_j^{h}\right)^2$$
Step 4.2: adjust the output-layer weights w_jk by learning:
The output-layer weight adjustment is

$$\Delta w_{jk} = -\eta\,\frac{\partial E}{\partial w_{jk}}$$

where η is the learning rate and ∂E/∂w_jk is the partial derivative of the global error with respect to the output-layer weight. The (h+1)-th weight update is

$$w_{jk}^{h+1} = w_{jk}^{h} + \eta\left(t_j^{h} - y_j^{h}\right)f_2'(s_j)\,z_k$$

where f_2'(s_j) is the derivative of the output-layer activation function.
Step 4.3: adjust the hidden-layer weights:
The (h+1)-th hidden-layer weight update is

$$v_{ki}^{h+1} = v_{ki}^{h} + \eta\left[\sum_{j=1}^{M}\left(t_j^{h} - y_j^{h}\right)f_2'(s_j)\,w_{jk}\right]f_1'(s_k)\,x_i$$

where f_1'(s_k) is the derivative of the hidden-layer activation function.
Step 4.4: repeat steps 4.1 to 4.3 until training terminates, yielding the trained neural network.
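The updates of steps 4.2 and 4.3 amount to one gradient-descent step per sample. A minimal sketch continuing the earlier Sigmoid-network sketch, whose derivative is f'(s) = f(s)(1 - f(s)); η = 0.5 matches the embodiment below, and the helper names remain illustrative.

```python
def train_step(x, t, V, b_hid, W, b_out, eta=0.5):
    """One backpropagation update on a single sample (x, t)."""
    z, y = forward(x, V, b_hid, W, b_out)
    # Output layer: delta_j = (t_j - y_j) f2'(s_j); for Sigmoid, f2'(s_j) = y_j(1 - y_j)
    delta_out = (t - y) * y * (1.0 - y)
    # Hidden layer: delta_k = (sum_j delta_j w_jk) f1'(s_k); f1'(s_k) = z_k(1 - z_k)
    delta_hid = (W.T @ delta_out) * z * (1.0 - z)
    # Adjust weights and offset thresholds along the negative error gradient
    W += eta * np.outer(delta_out, z)
    b_out += eta * delta_out
    V += eta * np.outer(delta_hid, x)
    b_hid += eta * delta_hid
    return V, b_hid, W, b_out
```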
Step 5: obtain the two-dimensional coordinates of a point in the left and right camera image frames, feed them as input to the trained neural network, and obtain the world-frame three-dimensional coordinates of the point.
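Once trained, positioning reduces to a single forward pass. A usage sketch with illustrative pixel coordinates; the normalization of inputs into the Sigmoid range (and the corresponding de-normalization of outputs) is implied by the activation choice and is an assumption here.

```python
import numpy as np

# Image-frame coordinates of the same point in the left and right cameras (illustrative).
u_dl, v_dl = 412.3, 287.9
u_dr, v_dr = 377.8, 288.4

x = np.array([u_dl, v_dl, u_dr, v_dr]) / 1000.0  # assumed input scaling

_, y = forward(x, V, b_hid, W, b_out)  # trained parameters from the sketches above
X_w, Y_w, Z_w = y                      # world-frame coordinates (before de-normalization)
```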
Beneficial effects
The present invention requires no prior calibration of the binocular vision system: using the neural network optimized by the particle swarm algorithm, the world-frame three-dimensional coordinates of a target point relative to the left camera are obtained directly from the image-frame two-dimensional coordinates of the target point in the left and right cameras of the underwater binocular vision system. The method can accurately locate underwater targets.
Additional aspects and advantages of the invention will be set forth in part in the following description, and in part will become apparent from the description or be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1: schematic diagram of the Qualisys three-dimensional positioning system;
Fig. 2: schematic performance curve of the particle swarm algorithm;
Fig. 3: schematic performance curve of the neural network.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below. The embodiments are exemplary, intended to explain the invention, and should not be construed as limiting the invention.
The principle of the method in this example is: an underwater binocular vision system captures the image-frame two-dimensional coordinates of a target point in the left and right cameras as input; a three-dimensional positioning system provides the world-frame three-dimensional coordinates of the target point relative to the left camera as the desired output; a particle swarm algorithm optimizes the initial weights and thresholds of a BP neural network; and the network is trained on multiple groups of data until the mean square error of the output converges. The specific steps are as follows:
Step 1: acquire multiple groups of data:
Establish an underwater binocular vision system, characterized in that the left and right cameras are placed in parallel and the resolution of each frame of the underwater cameras is higher than 2 megapixels. For each group of target points, measure the world-frame three-dimensional coordinates of the target point relative to the left camera with the Qualisys three-dimensional motion-capture positioning system (as shown in Fig. 1) as the real world-frame three-dimensional coordinates. With OpenCV as the development environment, detect the two-dimensional coordinates of the target point in the left and right camera image frames using the SURF algorithm. In this way, the data of 1050 groups of target points are acquired, of which 1000 groups are used to train the neural network and 50 groups to verify the learning result.
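A sketch of the feature extraction with OpenCV's SURF, assuming the opencv-contrib build (SURF lives in the xfeatures2d module); the Hessian threshold, file names, and brute-force matching strategy are illustrative assumptions, not taken from the patent.

```python
import cv2

# SURF requires the contrib build: pip install opencv-contrib-python
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

img_l = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img_r = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

kp_l, des_l = surf.detectAndCompute(img_l, None)
kp_r, des_r = surf.detectAndCompute(img_r, None)

# Match the target point between the two views with a cross-checked brute-force matcher.
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(bf.match(des_l, des_r), key=lambda m: m.distance)

best = matches[0]
u_dl, v_dl = kp_l[best.queryIdx].pt  # coordinates in the left image frame
u_dr, v_dr = kp_r[best.trainIdx].pt  # coordinates in the right image frame
```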
Step 2: establish the neural network:
A BP neural network with 4 inputs is chosen, the inputs being the image-frame two-dimensional coordinates u_dl, v_dl, u_dr, v_dr; the i-th input is x_i, i.e., there are 4 input nodes. The network has 10 hidden nodes, the k-th being z_k, and 3 output nodes, the j-th output being y_j. The mapping process of the BP neural network is as follows:
Input-layer information → hidden-layer activation function → hidden-layer output:

$$z_k = f_1\left(\sum_{i=1}^{4} v_{ki}\,x_i + b_k\right),\qquad k = 1,\dots,10$$

where z_k is the k-th hidden node (k up to 10), f_1(s) is the hidden-layer activation function, chosen as the Sigmoid function, x_i is the i-th input, v_ki is the weight from the input layer to the hidden layer, and b_k is the offset threshold, selected by the particle swarm algorithm.
Hidden-layer output → output-layer activation function → output-layer output:

$$y_j = f_2\left(\sum_{k=1}^{10} w_{jk}\,z_k + b_j\right),\qquad j = 1,\dots,3$$

where y_j is the j-th output (m taken as 3), f_2(s) is the output-layer activation function, chosen as the Sigmoid function, w_jk is the weight between the hidden layer and the output layer, and b_j is the offset threshold, selected by the particle swarm algorithm.
The above neural network is used to fit the mapping from the image-frame two-dimensional coordinates of the target point in the left and right images to its world-frame three-dimensional coordinates, P(X, Y, Z) = T(u_dl, v_dl, u_dr, v_dr), where P(X, Y, Z) is the world-frame three-dimensional coordinate of the target point and (u_dl, v_dl), (u_dr, v_dr) are the two-dimensional coordinates of the original image points in the left and right cameras.
Step 3: choose the initial weights and thresholds using the particle swarm algorithm:
Step 3.1: initialize the particle swarm:
One group of target points is chosen at random; the real world-frame three-dimensional coordinates of the target points relative to the left camera and their two-dimensional coordinates in the left and right camera image frames form the training input. The connection weights v_ki, w_jk and offset thresholds b_k, b_j between neurons are encoded as a vector. The swarm size n is taken as 40, and the search-space dimension D is the total number of connection weights and offset thresholds: the network input is 4-dimensional, the output is 3-dimensional, and there are 10 hidden nodes, so D is 83; the maximum number of iterations is 500. The position of the q-th particle is X_q = [x_q1, x_q2, ..., x_qD]^T and its velocity is V_q = [v_q1, v_q2, ..., v_qD]^T, distributed in the interval [-V_max, V_max]; the individual-best (Pbest) vector is P_q = [p_q1, p_q2, ..., p_qD]^T, and the global-best (Gbest) vector is P_g = [p_g1, p_g2, ..., p_gD]^T.
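The dimension D = 83 can be checked by counting the parameters of the 4–10–3 network:

$$D = \underbrace{Q \cdot N}_{v_{ki}} + \underbrace{Q}_{b_k} + \underbrace{M \cdot Q}_{w_{jk}} + \underbrace{M}_{b_j} = 10 \cdot 4 + 10 + 3 \cdot 10 + 3 = 83$$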
Step 3.2: compute the particle fitness:
For the q-th particle, the fitness F[q] is the mean square error between the particle's output and the desired output:

$$F[q] = \frac{1}{3}\sum_{j=1}^{3}\left(y_{qj} - t_{qj}\right)^2$$

where t_q is the world-frame three-dimensional coordinate of this group of target points measured by the three-dimensional positioning system, and y_q is the world-frame three-dimensional coordinate of this group computed by a forward pass of the neural network after the connection weights v_ki, w_jk and offset thresholds b_k, b_j have been taken from the position of the q-th particle.
Then find the individual best and the global best: compare the fitness F[q] with the particle's individual best Pbest; if F[q] < Pbest, replace Pbest with F[q]. Compare the fitness F[q] with the global best Gbest; if F[q] < Gbest, replace Gbest with F[q].
Step 3.3: update the position and velocity of each particle:
For the q-th particle, in the (k+1)-th iteration the velocity and position are updated as

$$V_{qd}^{k+1} = \omega V_{qd}^{k} + c_1 r_1\left(P_{qd} - X_{qd}^{k}\right) + c_2 r_2\left(P_{gd} - X_{qd}^{k}\right)$$

$$X_{qd}^{k+1} = X_{qd}^{k} + V_{qd}^{k+1}$$

where ω is the inertia weight, d ∈ (1, D), q ∈ (1, n), V_qd is the particle velocity, X_qd is the particle position, c_1 and c_2 are learning factors, and r_1 and r_2 are random numbers in the interval [0, 1].
Step 3.4: repeat steps 3.2 and 3.3 until the termination condition is reached (mean square error below 0.002 or more than 500 iterations), yielding the particle position corresponding to this group of target points.
Step 3.5: repeat steps 3.1 to 3.4, training over ten groups of target points; the mean of the resulting particle positions serves as the initial values of the BP neural network's connection weights v_ki, w_jk and offset thresholds b_k, b_j.
Step 4: train the neural network:
Step 4.1: determine the error:
Suppose there are 1000 input samples, each a 4-dimensional input, the 1000 samples being denoted χ_1, χ_2, ..., χ_h, ..., χ_p. The squared error of the h-th sample is

$$E_h = \frac{1}{2}\sum_{j=1}^{3}\left(t_j^{h} - y_j^{h}\right)^2$$

where t^h is the desired output and y^h is the corresponding actual output for the h-th sample. The global error over the p = 1000 samples is

$$E = \sum_{h=1}^{p} E_h$$
Step 4.2: adjust the output-layer weights w_jk by learning:
The output-layer weight adjustment is

$$\Delta w_{jk} = -\eta\,\frac{\partial E}{\partial w_{jk}}$$

where η is the learning rate, chosen as 0.5, and ∂E/∂w_jk is the partial derivative of the global error with respect to the output-layer weight. The (h+1)-th weight update is

$$w_{jk}^{h+1} = w_{jk}^{h} + \eta\left(t_j^{h} - y_j^{h}\right)f_2'(s_j)\,z_k$$

where f_2'(s_j) is the derivative of the output-layer activation function.
Step 4.3: adjust the hidden-layer weights:
The (h+1)-th hidden-layer weight update is

$$v_{ki}^{h+1} = v_{ki}^{h} + \eta\left[\sum_{j=1}^{3}\left(t_j^{h} - y_j^{h}\right)f_2'(s_j)\,w_{jk}\right]f_1'(s_k)\,x_i$$

where f_1'(s_k) is the derivative of the hidden-layer activation function.
Step 4.4: repeat steps 4.1 to 4.3 until training terminates, yielding the trained neural network. The weight matrices and threshold matrices of the trained network are as follows:
Weight matrix V from the input layer to the hidden layer:
Offset threshold matrix B1 of the hidden-layer neurons:

B1 = [2.8356, 0.7696, -0.3883, -0.4583, -0.3455, 0.7653, -1.6648, -1.6775, 1.6563, -0.7056]^T

Weight matrix W from the hidden layer to the output layer:
Offset threshold matrix B2 of the output-layer neurons:

B2 = [-0.17786, 1.944591, 2.684266]
The performance curve of the neural network is shown in Fig. 3.
Step 5: test the neural network:
For a target point, the original image point coordinates (u_dl, v_dl), (u_dr, v_dr) serve as input, and the trained neural network outputs the world-frame three-dimensional coordinates of the target point. In this test, 50 groups of data were chosen to test the neural network; 20 of these groups are shown below:
Table 1: Comparison of the neural network's desired output and actual output
Experiments show that, with sufficient training samples, the method achieves high precision and good real-time performance.
Although embodiments of the present invention have been shown and described above, it should be understood that the embodiments are exemplary and should not be construed as limiting the invention; those skilled in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the invention without departing from its principle and purpose.

Claims (1)

1. An underwater binocular vision positioning method without camera calibration, characterized by comprising the following steps:
Step 1: establish an underwater binocular vision system in which the left and right cameras are placed in parallel and the resolution of each frame of the underwater cameras is higher than 2 megapixels; photograph multiple groups of target points in space with the underwater binocular vision system; for each group of target points, measure the world-frame three-dimensional coordinate P(X, Y, Z) of the target point with a three-dimensional positioning system, and from the images captured by the underwater binocular vision system obtain, by a corner detection algorithm, the two-dimensional coordinates (u_dl, v_dl), (u_dr, v_dr) of the target point in the left and right camera image frames;
Step 2: establish the neural network:
establish a three-layer BP neural network with N inputs, the i-th input being x_i, giving N input nodes; the hidden-layer output is

$$z_k = f_1\left(\sum_{i=1}^{N} v_{ki}\,x_i + b_k\right),\qquad k = 1,\dots,Q$$

where z_k is the output of the k-th hidden node, f_1(s) is the hidden-layer activation function, v_ki is the weight from the input layer to the hidden layer, x_i is the i-th input, b_k is the offset threshold, and Q is the number of hidden nodes;
the output-layer output is

$$y_j = f_2\left(\sum_{k=1}^{Q} w_{jk}\,z_k + b_j\right),\qquad j = 1,\dots,M$$

where y_j is the output of the j-th output node, f_2(s) is the output-layer activation function, w_jk is the weight between the hidden layer and the output layer, b_j is the offset threshold, and M is the number of output nodes;
Step 3: choose the initial weights and thresholds of the BP neural network using the particle swarm algorithm:
Step 3.1: initialize the particle swarm:
choose one group of target points at random; its world-frame three-dimensional coordinates and the two-dimensional coordinates of the group's target points in the left and right camera image frames form the training input; encode the connection weights v_ki, w_jk and offset thresholds b_k, b_j between the neural network nodes as a vector; let n be the number of particles in the swarm and D the search-space dimension, where D is the total number of connection weights and offset thresholds; for the q-th particle, the position is X_q = [x_q1, x_q2, ..., x_qD]^T and the velocity is V_q = [v_q1, v_q2, ..., v_qD]^T, distributed in the interval [-V_max, V_max]; the individual-best (Pbest) vector is P_q = [p_q1, p_q2, ..., p_qD]^T, and the global-best (Gbest) vector is P_g = [p_g1, p_g2, ..., p_gD]^T;
Step 3.2: compute the particle fitness:
for the q-th particle, the fitness F[q] is the mean square error between the particle's output and the desired output:

$$F[q] = \frac{1}{M}\sum_{j=1}^{M}\left(y_{qj} - t_{qj}\right)^2$$

where t_q is the world-frame three-dimensional coordinate of this group of target points measured by the three-dimensional positioning system, and y_q is the world-frame three-dimensional coordinate of this group computed by a forward pass of the neural network after the connection weights v_ki, w_jk and offset thresholds b_k, b_j have been taken from the position of the q-th particle;
compare the fitness F[q] with the particle's individual best Pbest; if F[q] < Pbest, replace Pbest with F[q]; compare the fitness F[q] with the global best Gbest; if F[q] < Gbest, replace Gbest with F[q];
Step 3.3: update the position and velocity of each particle:
for the q-th particle, in the (k+1)-th iteration the velocity and position are updated as

$$V_{qd}^{k+1} = \omega V_{qd}^{k} + c_1 r_1\left(P_{qd} - X_{qd}^{k}\right) + c_2 r_2\left(P_{gd} - X_{qd}^{k}\right)$$

$$X_{qd}^{k+1} = X_{qd}^{k} + V_{qd}^{k+1}$$

where ω is the inertia weight, d ∈ (1, D), q ∈ (1, n), V_qd is the particle velocity, X_qd is the particle position, c_1 and c_2 are learning factors, and r_1 and r_2 are random numbers in the interval [0, 1];
Step 3.4: repeat steps 3.2 and 3.3 until the mean square error is less than the set value or the maximum number of iterations is reached, yielding the particle position corresponding to this group of target points;
Step 3.5: repeat steps 3.1 to 3.4, training over multiple groups of target points; the mean of the resulting particle positions serves as the initial values of the BP neural network's connection weights v_ki, w_jk and offset thresholds b_k, b_j;
Step 4: train the neural network:
Step 4.1: determine the error:
suppose there are p input samples, each an N-dimensional input, denoted χ_1, χ_2, ..., χ_h, ..., χ_p; the squared error of the h-th sample is

$$E_h = \frac{1}{2}\sum_{j=1}^{M}\left(t_j^{h} - y_j^{h}\right)^2$$

where t^h is the desired output and y^h is the corresponding actual output for the h-th sample; the global error over the p samples is

$$E = \sum_{h=1}^{p} E_h$$
Step 4.2: adjust the output-layer weights w_jk by learning:
the output-layer weight adjustment is

$$\Delta w_{jk} = -\eta\,\frac{\partial E}{\partial w_{jk}}$$

where η is the learning rate and ∂E/∂w_jk is the partial derivative of the global error with respect to the output-layer weight; the (h+1)-th weight update is

$$w_{jk}^{h+1} = w_{jk}^{h} + \eta\left(t_j^{h} - y_j^{h}\right)f_2'(s_j)\,z_k$$

where f_2'(s_j) is the derivative of the output-layer activation function;
Step 4.3: adjust the hidden-layer weights:
the (h+1)-th hidden-layer weight update is

$$v_{ki}^{h+1} = v_{ki}^{h} + \eta\left[\sum_{j=1}^{M}\left(t_j^{h} - y_j^{h}\right)f_2'(s_j)\,w_{jk}\right]f_1'(s_k)\,x_i$$

where f_1'(s_k) is the derivative of the hidden-layer activation function;
Step 4.4: repeat steps 4.1 to 4.3 until training terminates, yielding the trained neural network;
Step 5: obtain the two-dimensional coordinates of a point in the left and right camera image frames, feed them as input to the trained neural network, and obtain the world-frame three-dimensional coordinates of the point.
CN201811174593.8A 2018-10-09 2018-10-09 An underwater binocular vision positioning method without camera calibration Pending CN109448061A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811174593.8A CN109448061A (en) 2018-10-09 2018-10-09 An underwater binocular vision positioning method without camera calibration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811174593.8A CN109448061A (en) 2018-10-09 2018-10-09 An underwater binocular vision positioning method without camera calibration

Publications (1)

Publication Number Publication Date
CN109448061A (en) 2019-03-08

Family

ID=65546305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811174593.8A Pending CN109448061A (en) An underwater binocular vision positioning method without camera calibration

Country Status (1)

Country Link
CN (1) CN109448061A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101907448A (en) * 2010-07-23 2010-12-08 华南理工大学 Depth measurement method based on binocular three-dimensional vision
CN104700385A (en) * 2013-12-06 2015-06-10 广西大学 Binocular vision positioning device based on FPGA
CN105138717A (en) * 2015-07-09 2015-12-09 上海电力学院 Transformer state evaluation method by optimizing neural network with dynamic mutation particle swarm
CN106503790A (en) * 2015-08-28 2017-03-15 余学飞 Pressure-wire temperature compensation based on a modified particle swarm optimization neural network
CN108564565A (en) * 2018-03-12 2018-09-21 华南理工大学 A deep-learning-based multi-target localization method for infrared images of power equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
杜艺: "基于改进神经网络的热轧厚度控制方法研究", 《中国优秀硕士学位论文全文数据库工程科技I辑》 *
王秋滢: "《船用调制型惯性导航及其组合导航技术》", 31 January 2017 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222606A (en) * 2019-05-24 2019-09-10 电子科技大学 Electronic system fault forecast method based on tree search extreme learning machine
CN110222606B (en) * 2019-05-24 2022-09-06 电子科技大学 Early failure prediction method of electronic system based on tree search extreme learning machine
CN110232372A (en) * 2019-06-26 2019-09-13 电子科技大学成都学院 Gait recognition method based on particle swarm optimization BP neural network
CN110232372B (en) * 2019-06-26 2021-09-24 电子科技大学成都学院 Gait recognition method based on particle swarm optimization BP neural network
CN110334701A (en) * 2019-07-11 2019-10-15 郑州轻工业学院 Data acquisition method based on deep learning and multi-view vision in a digital twin environment
CN110334701B (en) * 2019-07-11 2020-07-31 郑州轻工业学院 Data acquisition method based on deep learning and multi-vision in digital twin environment
CN110781746A (en) * 2019-09-23 2020-02-11 安徽农业大学 Wolfberry identification and positioning method
CN110595468A (en) * 2019-09-25 2019-12-20 中国地质科学院地球物理地球化学勘查研究所 Three-component induction coil attitude measurement system and method based on deep learning
CN110595468B (en) * 2019-09-25 2021-05-07 中国地质科学院地球物理地球化学勘查研究所 Three-component induction coil attitude measurement system and method based on deep learning
CN110706291A (en) * 2019-09-26 2020-01-17 哈尔滨工程大学 Visual measurement method suitable for three-dimensional trajectory of moving object in pool experiment
CN110702066A (en) * 2019-10-15 2020-01-17 哈尔滨工程大学 Underwater binocular camera vision positioning method
CN110702066B (en) * 2019-10-15 2022-03-18 哈尔滨工程大学 Underwater binocular camera vision positioning method
CN111429761A (en) * 2020-02-28 2020-07-17 中国人民解放军陆军军医大学第二附属医院 Artificial intelligent simulation teaching system and method for bone marrow cell morphology
CN111563878A (en) * 2020-03-27 2020-08-21 中国科学院西安光学精密机械研究所 Space target positioning method
CN111563878B (en) * 2020-03-27 2023-04-11 中国科学院西安光学精密机械研究所 Space target positioning method
CN112102414A (en) * 2020-08-27 2020-12-18 江苏师范大学 Binocular telecentric lens calibration method based on improved genetic algorithm and neural network
CN112070764A (en) * 2020-09-22 2020-12-11 南昌智能新能源汽车研究院 Binocular vision positioning system of teleoperation engineering robot
CN112700500A (en) * 2020-12-08 2021-04-23 中大检测(湖南)股份有限公司 Binocular camera calibration method and device and readable storage medium
CN113554700A (en) * 2021-07-26 2021-10-26 贵州电网有限责任公司 Invisible light aiming method
CN113554700B (en) * 2021-07-26 2022-10-25 贵州电网有限责任公司 Invisible light aiming method

Similar Documents

Publication Publication Date Title
CN109448061A (en) An underwater binocular vision positioning method without camera calibration
CN109345507B (en) Dam image crack detection method based on transfer learning
CN108416840A (en) A dense three-dimensional scene reconstruction method based on a monocular camera
CN110378844A (en) Blind image motion deblurring method based on a multi-scale cyclic generative adversarial network
CN110163974B (en) Single-image scene reconstruction method based on an undirected graph learning model
CN108510535A (en) A high-quality depth estimation method based on depth prediction and enhancement sub-networks
CN110135386B (en) Human action recognition method and system based on deep learning
CN106097322A (en) A vision system calibration method based on a neural network
CN108171249B (en) Local descriptor learning method based on RGBD data
CN108280814A (en) Light field image angular super-resolution reconstruction method based on perceptual loss
CN112766315B (en) Method and system for testing the robustness of artificial intelligence models
CN110634108A (en) Enhancement method for composite-degraded live webcast video based on an element-cycle-consistency adversarial network
CN110457515A (en) Three-dimensional model retrieval method based on a multi-view neural network with global feature capture and aggregation
CN115100574A (en) Action recognition method and system based on fused graph convolutional and Transformer networks
CN110222784A (en) Solar cell defect detection method fusing short-term and long-term depth features
CN109146937A (en) Dense matching method for power inspection images based on deep learning
CN108924148A (en) A collaborative compressed-sensing data recovery method for source signals
CN110517309A (en) A monocular depth information acquisition method based on convolutional neural networks
CN110490968A (en) Light field axial refocusing image super-resolution method based on a generative adversarial network
CN115484410B (en) Event camera video reconstruction method based on deep learning
CN108111860A (en) Video sequence lost-frame prediction and recovery method based on a deep residual network
WO2023284070A1 (en) Weakly paired image style transfer method based on a pose self-supervised generative adversarial network
CN107085835A (en) Color image filtering method based on quaternion weighted nuclear norm minimization
CN113793261A (en) Spectral reconstruction method based on a 3D attention mechanism full-channel fusion network
CN111882516B (en) Image quality evaluation method based on visual saliency and deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190308)