CN109472826A - Localization method and device based on binocular vision - Google Patents

Localization method and device based on binocular vision

Info

Publication number
CN109472826A
Authority
CN
China
Prior art keywords
view
visible light
saliency maps
component
left view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811259210.7A
Other languages
Chinese (zh)
Inventor
陈缨
彭倩
常政威
陈少卿
崔弘
彭倍
刘静
葛森
包杨川
何明
郑翔
杨枭
刘海龙
何玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of State Grid Sichuan Electric Power Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Sichuan Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Sichuan Electric Power Co Ltd
Priority to CN201811259210.7A
Publication of CN109472826A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows

Abstract

The invention discloses a localization method and device based on binocular vision, relating to the technical field of image processing. The method includes: acquiring a left view and a right view of a scene; performing saliency computation on the left view and the right view respectively to obtain a visible-light saliency map corresponding to the left view and a near-infrared saliency map corresponding to the right view; comparing the visible-light saliency map and the near-infrared saliency map against their respective saliency means, and determining candidate regions to be located in the left view and the right view according to the comparison results; matching the candidate region of the left view with the candidate region of the right view to determine the target region in the left view and the right view; and performing disparity computation on the target region to determine its position. The localization method and device based on binocular vision provided by the invention can accurately detect the position of a target, and matching analysis is performed only on a small portion of the left and right views, which effectively reduces computational complexity and yields high detection efficiency.

Description

Localization method and device based on binocular vision
Technical field
The present invention relates to the technical field of image processing, and more particularly to a localization method and device based on binocular vision.
Background art
At present, binocular stereo vision has become a research hotspot in computer vision. Modeled on human binocular parallax, it matches the same scene point in the two images of a binocular pair to establish the correspondence of the spatial scene across the images, and from that correspondence recovers the three-dimensional coordinates of the point.
However, existing methods struggle to achieve accurate localization, which leads to inaccurate measurement. For tasks such as reading instruments and detecting equipment temperature in a substation, how to accurately detect the position of a target region so that meter readings and equipment temperatures can be obtained is an urgent problem to be solved in the prior art.
Summary of the invention
To solve the above problems, the invention proposes a localization method based on binocular vision, comprising the following steps:
Step S1: acquiring a left view and a right view of a scene;
Step S2: performing saliency computation on the left view and the right view respectively to obtain a saliency image corresponding to the left view and a saliency image corresponding to the right view;
Step S3: comparing the saliency image corresponding to the left view and the saliency image corresponding to the right view against their respective saliency means, and determining candidate regions to be located in the left view and the right view according to the comparison results;
Step S4: matching the candidate region of the left view with the candidate region of the right view to determine the target region in the left view and the right view;
Step S5: performing disparity computation on the target region in the left view and the right view to determine the position of the target region.
Preferably, in the left view and the right view acquired in step S1, one view is a visible-light view and the other is a near-infrared view.
Preferably, step S2 specifically includes:
Step S2.1: smoothing the visible-light view by difference-of-Gaussians filtering; converting the smoothed visible-light view into a visible-light view based on the Lab color model; and computing the visible-light saliency map from the color means of the L, a and b components of the Lab-based visible-light view.
Step S2.2: dividing the near-infrared view into mutually disjoint segmentation blocks p of 7*7 pixels, and taking the pixel value of the central pixel of each block as the pixel value of that block; comparing the diversity of different blocks according to the distance between their pixel values and the distance between their pixel positions; computing the saliency of the current pixel by selecting the K pixels closest to it in the near-infrared view; and repeating until the near-infrared saliency map is obtained.
Preferably, the color means of the L component, the a component and the b component are obtained by the following formulas:
m_L = (1/(N·M)) Σᵢ Σⱼ Iv_L(i, j),  m_a = (1/(N·M)) Σᵢ Σⱼ Iv_a(i, j),  m_b = (1/(N·M)) Σᵢ Σⱼ Iv_b(i, j),  with i = 1…N and j = 1…M,
where m_L, m_a and m_b are the color means of the L, a and b components respectively, Iv_L, Iv_a and Iv_b are the L, a and b components of the visible-light view, N is the number of rows of the visible-light view, and M is the number of columns of the visible-light view;
The visible-light saliency map is obtained by the following formula:
Sv(i, j) = [Iv_L(i, j) − m_L]² + [Iv_a(i, j) − m_a]² + [Iv_b(i, j) − m_b]²
where Sv(i, j) is the visible-light saliency of a pixel, and the saliencies of all pixels constitute the visible-light saliency map.
Preferably, step S3 specifically includes:
Step S3.1: comparing the visible-light saliency map and the near-infrared saliency map against their respective saliency means;
Step S3.2: segmenting the visible-light saliency map and the near-infrared saliency map according to the comparison results to obtain the candidate regions in the left view and the right view.
Preferably, step S5 specifically includes:
Step S5.1: performing disparity computation on the target region selected in the left view and the right view to obtain the distance of the target region from the acquisition point;
Step S5.2: converting the visible-light view into a visible-light view based on the RGB color model;
Step S5.3: transforming the RGB-based visible-light view into the HSV space, extracting the V component, and matching the edge features of the V component to obtain the shape of the target region.
Further, the method also includes step S7: analyzing the distance and the shape of the target region to determine whether the width or the height of the target region is dominant.
The invention also provides a positioning device based on binocular vision, comprising:
an image acquisition device for acquiring a left view and a right view of a scene;
a saliency image acquisition module for performing saliency computation on the left view and the right view to obtain a saliency image corresponding to the left view and a saliency image corresponding to the right view;
a candidate-region determination module for comparing the saliency image corresponding to the left view and the saliency image corresponding to the right view against their respective saliency means, and determining candidate regions in the left view and the right view according to the comparison results;
a target-region determination module for matching the candidate region of the left view with the candidate region of the right view to determine the target region in the left view and the right view;
a target-region locating module for performing disparity computation on the target region in the left view and the right view to determine the position of the target region.
Preferably, one of the left view and the right view is a visible-light view and the other is a near-infrared view.
Preferably, the saliency image acquisition module smooths the visible-light view by difference-of-Gaussians filtering; converts the smoothed visible-light view into a visible-light view based on the Lab color model; computes the visible-light saliency map from the color means of the L, a and b components of the Lab-based visible-light view; divides the near-infrared view into mutually disjoint segmentation blocks p of 7*7 pixels, taking the pixel value of the central pixel of each block as the pixel value of that block; compares the diversity of different blocks according to the distance between their pixel values and the distance between their pixel positions; and computes the saliency of the current pixel by selecting the K pixels closest to it in the near-infrared view, repeating until the near-infrared saliency map is obtained.
The present invention has the following advantages and beneficial effects:
The device of the invention is simple in structure, versatile, and adapts well to the environment, because near-infrared imaging performs better than visible light in fog and low illumination. The steps of the above method are simple and easy to implement, and can effectively handle various obstacle-detection problems.
The localization method and device based on binocular vision provided by the invention can accurately detect the position of a target, and matching analysis is performed only on a small portion of the left and right views, which effectively reduces computational complexity and yields high detection efficiency.
Brief description of the drawings
The drawings described herein are provided for a further understanding of the embodiments of the present invention and constitute a part of the application; they do not limit the embodiments of the present invention. In the drawings:
Fig. 1 is a flowchart of the localization method based on binocular vision provided by an embodiment of the present invention;
Fig. 2 is a functional block diagram of the positioning device based on binocular vision provided by an embodiment of the present invention.
Description of reference numerals:
600 - positioning device based on binocular vision; 610 - image acquisition device; 620 - saliency image acquisition module; 630 - candidate-region determination module; 640 - target-region determination module; 650 - target-region locating module.
Specific embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiments and the drawings. The exemplary embodiments of the invention and their description are used only to explain the invention and are not intended to limit it.
The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
The terms "first", "second", "third" and the like are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
In addition, the terms "parallel", "perpendicular" and the like do not require the components to be absolutely parallel or perpendicular; they may be slightly inclined. For example, "parallel" only means that a direction is more nearly parallel than "perpendicular"; it does not mean that the structure must be perfectly parallel, and it may be slightly inclined.
In the description of the present invention, it should also be noted that, unless otherwise expressly specified and limited, the terms "arranged", "mounted", "connected" and "coupled" are to be understood broadly; for example, a connection may be fixed, detachable or integral; it may be direct, indirect through an intermediary, or an internal connection between two elements. For a person of ordinary skill in the art, the specific meaning of the above terms in the present invention can be understood according to the specific situation.
As shown in Fig. 1, which is a flowchart of the localization method based on binocular vision according to an embodiment of the present invention, the localization method based on binocular vision of the embodiment comprises the following steps:
Step S101: acquiring a left view and a right view of a scene. In this embodiment, the left view is a visible-light image and the right view is a near-infrared image. For example, the left and right views of the scene are acquired by a hybrid binocular acquisition system in which the left and right viewpoints use visible-light and near-infrared imaging devices respectively; in practice the two imaging devices can be arranged with parallel optical axes. Since a typical silicon CCD is sensitive to the near-infrared band, near-infrared acquisition can be realized by placing a filter that blocks visible light and passes near-infrared in front of an ordinary monochrome camera.
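For illustration only, a minimal Python/OpenCV sketch of such an acquisition step; the device indices and the conversion of the near-infrared stream to a single channel are assumptions, not details from the patent:

```python
import cv2

# Hypothetical device indices: 0 = visible-light camera (left view),
# 1 = near-infrared camera (right view, an ordinary monochrome sensor
# mounted behind a filter that blocks visible light and passes NIR).
left_cam = cv2.VideoCapture(0)
right_cam = cv2.VideoCapture(1)

ok_l, left_view = left_cam.read()      # visible-light left view (BGR)
ok_r, right_view = right_cam.read()    # near-infrared right view
if ok_r:
    # Keep the NIR view as a single channel for the patch-based saliency step.
    right_view = cv2.cvtColor(right_view, cv2.COLOR_BGR2GRAY)
```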
Step S102: performing saliency computation on the left view and the right view respectively to obtain a visible-light saliency map corresponding to the left view and a near-infrared saliency map corresponding to the right view.
In general, the visible-light saliency map can be computed because visible-light images carry rich color information: regions whose color stands out from the background in the current image can be estimated as an approximation of a human-eye saliency map, denoted Sv, and the salient regions of the visible-light image are then taken as candidate regions to be located, denoted Rv.
Specifically, the visible-light saliency map is computed as follows. The left view is first smoothed by difference-of-Gaussians filtering. The smoothed left view is then converted to the Lab color model, i.e. the image is transformed into the L*a*b color space, which better matches human visual perception. Finally, the visible-light saliency map is computed from the color means of the L, a and b components of the Lab-based left view; that is, in the L*a*b color space, the deviation of each component from its mean is taken as that component's color-saliency contribution, and the final color saliency map is computed from these contributions.
In this embodiment, the color means of the L component, the a component and the b component are obtained by the following formulas:
m_L = (1/(N·M)) Σᵢ Σⱼ Iv_L(i, j),  m_a = (1/(N·M)) Σᵢ Σⱼ Iv_a(i, j),  m_b = (1/(N·M)) Σᵢ Σⱼ Iv_b(i, j),  with i = 1…N and j = 1…M,
where m_L, m_a and m_b are the color means of the L, a and b components respectively, Iv_L, Iv_a and Iv_b are the L, a and b components of the visible-light image, N is the number of rows of the visible-light image, M is the number of columns of the visible-light image, and i and j are the row and column coordinates of a pixel.
The visible-light saliency map is obtained by the following formula:
Sv(i, j) = [Iv_L(i, j) − m_L]² + [Iv_a(i, j) − m_a]² + [Iv_b(i, j) − m_b]²
where Sv(i, j) is the visible-light saliency of a pixel, and the saliencies of all pixels constitute the visible-light saliency map.
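As a rough sketch only, the following Python/OpenCV code computes a visible-light saliency map of this kind (squared Lab distance to the per-channel means); the 5×5 blur kernel and the function name visible_light_saliency are illustrative assumptions rather than values taken from the patent:

```python
import cv2
import numpy as np

def visible_light_saliency(bgr_image):
    """Saliency of the visible-light view: squared Lab distance to the mean color."""
    # A small Gaussian blur, combined below with the comparison against the global
    # mean, acts as a difference-of-Gaussians band-pass: the blur removes
    # high-frequency noise, the mean subtraction removes the very-low-frequency part.
    smoothed = cv2.GaussianBlur(bgr_image, (5, 5), 0)

    # Convert to the Lab color space, which better matches human color perception.
    lab = cv2.cvtColor(smoothed, cv2.COLOR_BGR2LAB).astype(np.float64)

    # Per-channel color means m_L, m_a, m_b over the N x M image.
    m_L, m_a, m_b = (lab[..., k].mean() for k in range(3))

    # Sv(i, j) = [L(i, j) - m_L]^2 + [a(i, j) - m_a]^2 + [b(i, j) - m_b]^2
    return ((lab[..., 0] - m_L) ** 2 +
            (lab[..., 1] - m_a) ** 2 +
            (lab[..., 2] - m_b) ** 2)
```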
In another example of the invention, the near-infrared saliency map is computed by exploiting the good high-frequency detail of the near-infrared image: salient regions are detected from the similarity of each pixel to its surroundings, the resulting map is denoted Snir, and the important regions found in Snir are taken as candidate regions, denoted Rnir.
Specifically, the near-infrared saliency map is computed as follows:
The right view is divided into mutually disjoint segmentation blocks of 7*7 pixels, and the pixel value of the central pixel of each block is taken as the pixel value of that block, i.e. the 7*7 neighborhood centered on each pixel serves as the representative of that pixel. The diversity of different blocks is then compared according to the distance between their pixel values and the distance between their pixel positions: the distance between pixel values is the Euclidean distance dv(pi, pj) between the vectors obtained by stacking the columns of each block, and the distance between pixel positions is the Euclidean distance dp(pi, pj) between the coordinates of the two central pixels, where pi and pj are different blocks. In other words, the dissimilarity between two points is judged from the distance between their pixel values together with the distance between their positions. Finally, the saliency of the current pixel is computed from the K pixels closest to it in the right view, and this is repeated for every pixel until the near-infrared saliency map is obtained, where K = 30.
In the above embodiment, the diversity (degree of difference) is obtained by the following formula:
where d(pi, pj) is the diversity between segmentation blocks pi and pj; the larger d(pi, pj) is, the larger the distance between the two blocks, the larger their difference, and the smaller their similarity.
The saliency is computed by the following formula:
where Snir_i is the saliency of segmentation block pi.
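The diversity and saliency formulas themselves are not reproduced in this text. As a rough sketch only, the Python/NumPy code below assumes a Goferman-style combination — patch-value distance attenuated by positional distance, and saliency growing with the average dissimilarity to the K most similar patches — and, for brevity, evaluates saliency per non-overlapping 7×7 block rather than per pixel; these are assumptions consistent with the qualitative description above, not the patent's own formulas:

```python
import numpy as np

def near_infrared_saliency(nir_image, block=7, K=30, c=3.0):
    """Patch-based saliency for the near-infrared view (illustrative sketch)."""
    h, w = nir_image.shape
    rows, cols = h // block, w // block
    patches, centers = [], []
    for r in range(rows):
        for col in range(cols):
            p = nir_image[r * block:(r + 1) * block, col * block:(col + 1) * block]
            patches.append(p.astype(np.float64).ravel())          # stacked pixel values
            centers.append((r * block + block // 2, col * block + block // 2))
    patches = np.asarray(patches)
    centers = np.asarray(centers, dtype=np.float64)

    # Pairwise value distance d_v and positional distance d_p between all patches.
    d_v = np.linalg.norm(patches[:, None, :] - patches[None, :, :], axis=2)
    d_p = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)

    # Assumed diversity: value distance attenuated by positional distance, so that
    # nearby look-alike patches count more than distant ones.
    d = d_v / (1.0 + c * d_p)

    # A patch that differs even from its K most similar patches is salient.
    saliency = np.empty(len(patches))
    for i in range(len(patches)):
        nearest = np.sort(d[i])[1:K + 1]                           # skip self (distance 0)
        saliency[i] = 1.0 - np.exp(-nearest.mean())

    # Paint each block's saliency back onto a map covering the blocked area.
    out = np.zeros((rows * block, cols * block))
    for idx, (r, col) in enumerate(np.ndindex(rows, cols)):
        out[r * block:(r + 1) * block, col * block:(col + 1) * block] = saliency[idx]
    return out
```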
Step S103: comparing the visible-light saliency map and the near-infrared saliency map against their respective saliency means, and determining the candidate regions in the left view and the right view according to the comparison results. In other words, once the visible-light saliency map and the near-infrared saliency map have been obtained, each map is thresholded by its own mean saliency and segmented, which yields the candidate regions of the left and right views. In one example of the invention, to be safe, every possible region is first enclosed by the axis-aligned rectangle spanned by the maximum row and column extents of its pixel coordinates.
Specifically, the visible-light saliency map and the near-infrared saliency map are first compared against their saliency means, and then, according to the comparison results, the visible-light saliency map and the near-infrared saliency map are each segmented to obtain the candidate regions in the left view and the right view.
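A minimal sketch of this mean-threshold segmentation using OpenCV connected components; treating each connected component above the mean as a candidate and keeping its bounding rectangle (with an assumed minimum area) is an illustrative reading of the segmentation details:

```python
import cv2
import numpy as np

def candidate_regions(saliency_map, min_area=50):
    """Threshold a saliency map at its own mean and return candidate bounding rectangles."""
    mask = (saliency_map > saliency_map.mean()).astype(np.uint8)

    # Connected components above the mean; each one is a possible candidate region.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)

    boxes = []
    for i in range(1, num):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))           # axis-aligned bounding rectangle
    return boxes
```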
Step S104: matching the candidate region of the left view with the candidate region of the right view to determine the target region in the left view and the right view.
Specifically, using the candidate regions obtained in the above embodiment, each candidate region in one of the left and right views is taken as a reference in turn and a matching region is searched for in the other view. If the matching region found in the other view is itself also a candidate region, that region is taken as the final target region, as sketched below.
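A rough sketch of this cross-check, assuming rectified grayscale views and using normalized cross-correlation as a stand-in for whatever region-matching criterion the patent intends; the score threshold and overlap test are assumptions:

```python
import cv2

def cross_check_candidates(left_img, right_img, left_boxes, right_boxes, score_thr=0.6):
    """Keep a left candidate only if its best match in the right view is itself a candidate."""
    targets = []
    for (lx, ly, lw, lh) in left_boxes:
        template = left_img[ly:ly + lh, lx:lx + lw]
        # Search the right view for the left candidate (rectified pair assumed,
        # so the true match shares the same rows and differs only in column).
        res = cv2.matchTemplate(right_img, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (mx, my) = cv2.minMaxLoc(res)
        if score < score_thr:
            continue
        # Accept only if the matched window overlaps a right-view candidate region.
        for (rx, ry, rw, rh) in right_boxes:
            overlap_x = min(mx + lw, rx + rw) - max(mx, rx)
            overlap_y = min(my + lh, ry + rh) - max(my, ry)
            if overlap_x > 0 and overlap_y > 0:
                targets.append(((lx, ly, lw, lh), (mx, my, lw, lh)))
                break
    return targets
```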
Step S105: performing disparity computation on the target region to determine the distance of the target region from the acquisition point (i.e. from the binocular rig), and performing edge matching on the candidate region to determine the shape of the target region.
Specifically, to determine the shape of the target region, the left view is first converted into a left view based on the RGB color model; the RGB-based left view is then transformed into the HSV space, the V component is extracted, and the edge features of the V component are matched to obtain the shape of the target region.
More specifically, the final target region of the binocular image pair is selected and disparity computation is performed to obtain the current distance of the target region. The left view is first converted to an RGB color image and then transformed into the HSV space, and the V component is extracted as the image that relates the visible-light image to the infrared image. At this point the binocular pair can be understood as the V-component image of the candidate region, denoted Rv, and the near-infrared candidate region, denoted Rnir. Because the near-infrared image largely reflects the high-frequency information of the image, edge features are used for the matching, from which the shape of the target region is obtained.
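For the distance, the standard stereo relation Z = f·B/d (focal length f in pixels, baseline B, disparity d) applies; a minimal sketch, with the disparity taken as the column offset of the matched target regions in a rectified pair, and the V-channel edge extraction shown alongside. The function names and Canny thresholds are assumptions, and f and B are calibration values assumed to be known:

```python
import cv2

def target_distance(focal_px, baseline_m, left_box, right_box):
    """Distance of the target from the rig via Z = f * B / d (rectified pair assumed)."""
    d = abs(left_box[0] - right_box[0])          # disparity = column offset of the boxes
    if d == 0:
        raise ValueError("zero disparity: target at infinity or mismatched regions")
    return focal_px * baseline_m / d

def v_channel_edges(bgr_region):
    """Edge map of the V (value) channel, used to match the region against the NIR edges."""
    v = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)[..., 2]
    return cv2.Canny(v, 50, 150)
```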
In one embodiment of the invention, after the distance and the shape of the final target region are determined, they are further analyzed to determine whether the width or the height of the target region is dominant.
As shown in Fig. 2, another aspect of the present invention correspondingly proposes a positioning device based on binocular vision. Referring to Fig. 2, the positioning device 600 based on binocular vision of the embodiment of the present invention includes an image acquisition device 610, a saliency image acquisition module 620, a candidate-region determination module 630, a target-region determination module 640 and a target-region locating module 650.
The image acquisition device 610 is used to acquire a left view and a right view of a scene, wherein the left view is a visible-light image and the right view is a near-infrared image. The saliency image acquisition module 620 is used to perform saliency computation on the left view and the right view to obtain a visible-light saliency map corresponding to the left view and a near-infrared saliency map corresponding to the right view. The candidate-region determination module 630 is used to compare the visible-light saliency map and the near-infrared saliency map against their respective saliency means, and to determine the candidate regions in the left view and the right view according to the comparison results. The target-region determination module 640 is used to match the candidate region of the left view with the candidate region of the right view to determine the target region in the left view and the right view. The target-region locating module 650 is used to perform disparity computation on the target region to determine the distance of the target region from the acquisition point, and to perform edge matching on the candidate region to determine the shape of the target region.
In one embodiment of the invention, the saliency image acquisition module 620 performs saliency computation on the left view to obtain the visible-light saliency map as follows: the left view is smoothed by difference-of-Gaussians filtering; the smoothed left view is converted into a left view based on the Lab color model; and the visible-light saliency map is computed from the color means of the L, a and b components of the Lab-based left view.
In the above embodiment, the color means of the L component, the a component and the b component are obtained by the following formulas:
m_L = (1/(N·M)) Σᵢ Σⱼ Iv_L(i, j),  m_a = (1/(N·M)) Σᵢ Σⱼ Iv_a(i, j),  m_b = (1/(N·M)) Σᵢ Σⱼ Iv_b(i, j),  with i = 1…N and j = 1…M,
where m_L, m_a and m_b are the color means of the L, a and b components respectively, Iv_L, Iv_a and Iv_b are the L, a and b components of the visible-light image, N is the number of rows of the visible-light image, M is the number of columns of the visible-light image, and i and j are the row and column coordinates of a pixel.
The visible-light saliency map is obtained by the following formula:
Sv(i, j) = [Iv_L(i, j) − m_L]² + [Iv_a(i, j) − m_a]² + [Iv_b(i, j) − m_b]²
where Sv(i, j) is the visible-light saliency of a pixel, and the saliencies of all pixels constitute the visible-light saliency map.
In another embodiment of the invention, the saliency image acquisition module 620 performs saliency computation on the right view to obtain the near-infrared saliency map as follows: the right view is first divided into mutually disjoint segmentation blocks of 7*7 pixels, and the pixel value of the central pixel of each block is taken as the pixel value of that block; the diversity of different blocks is then compared according to the distance between their pixel values and the distance between their pixel positions, where the distance between pixel values is the Euclidean distance dv(pi, pj) between the vectors obtained by stacking the columns of each block, the distance between pixel positions is the Euclidean distance dp(pi, pj) between the coordinates of the two central pixels, and pi and pj are different blocks; finally, the saliency of the current pixel is computed from the K pixels closest to it in the right view, and this is repeated until the near-infrared saliency map is obtained, where K = 30.
In the above embodiment, the diversity (degree of difference) is obtained by the following formula:
where d(pi, pj) is the diversity between segmentation blocks pi and pj; the larger d(pi, pj) is, the larger the distance between the two blocks, the larger their difference, and the smaller their similarity.
The saliency is computed by the following formula:
where Snir_i is the saliency of segmentation block pi.
In one embodiment of the invention, the candidate-region determination module 630 is used to compare the visible-light saliency map and the near-infrared saliency map against their respective saliency means, and to segment the visible-light saliency map and the near-infrared saliency map according to the comparison results to obtain the candidate regions in the left view and the right view.
In a specific example of the invention, the target-region locating module 650 is used to convert the left view into a left view based on the RGB color model, to transform the RGB-based left view into the HSV space, to extract the V component, and to match the edge features of the V component to obtain the shape of the target region.
According to the embodiments of the present invention, the position of the target region can be detected accurately, and matching analysis is performed only on a small portion of the left and right views, which effectively reduces computational complexity and yields high detection efficiency.
Specifically, the above embodiments of the present invention have the following beneficial effects:
The device of the invention is simple in structure, versatile, and adapts well to the environment, because near-infrared imaging performs better than visible light in fog and low illumination. The steps of the above method are simple and easy to implement, and can effectively handle various obstacle-detection problems. In addition, the method performs matching analysis only on the salient regions of the image, which improves real-time processing capability; moreover, the method of the invention processes only visual images and does not depend on other knowledge, which improves its independence, generality and range of application.
The specific embodiments described above further describe in detail the objectives, technical solutions and beneficial effects of the present invention. It should be understood that the above is merely a specific embodiment of the present invention and is not intended to limit the scope of protection of the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A localization method based on binocular vision, characterized by comprising the following steps:
Step S1: acquiring a left view and a right view of a scene;
Step S2: performing saliency computation on the left view and the right view respectively to obtain a saliency image corresponding to the left view and a saliency image corresponding to the right view;
Step S3: comparing the saliency image corresponding to the left view and the saliency image corresponding to the right view against their respective saliency means, and determining candidate regions to be located in the left view and the right view according to the comparison results;
Step S4: matching the candidate region of the left view with the candidate region of the right view to determine the target region in the left view and the right view;
Step S5: performing disparity computation on the target region in the left view and the right view to determine the position of the target region.
2. The localization method based on binocular vision according to claim 1, characterized in that, in the left view and the right view acquired in step S1, one view is a visible-light view and the other is a near-infrared view.
3. The localization method based on binocular vision according to claim 2, characterized in that step S2 specifically comprises:
Step S2.1: smoothing the visible-light view by difference-of-Gaussians filtering; converting the smoothed visible-light view into a visible-light view based on the Lab color model; and computing the visible-light saliency map from the color means of the L, a and b components of the Lab-based visible-light view;
Step S2.2: dividing the near-infrared view into mutually disjoint segmentation blocks p of 7*7 pixels, and taking the pixel value of the central pixel of each block as the pixel value of that block; comparing the diversity of different blocks according to the distance between their pixel values and the distance between their pixel positions; computing the saliency of the current pixel by selecting the K pixels closest to it in the near-infrared view; and repeating until the near-infrared saliency map is obtained.
4. The localization method based on binocular vision according to claim 3, characterized in that the color means of the L component, the a component and the b component are obtained by the following formulas:
m_L = (1/(N·M)) Σᵢ Σⱼ Iv_L(i, j),  m_a = (1/(N·M)) Σᵢ Σⱼ Iv_a(i, j),  m_b = (1/(N·M)) Σᵢ Σⱼ Iv_b(i, j),  with i = 1…N and j = 1…M,
where m_L, m_a and m_b are the color means of the L, a and b components respectively, Iv_L, Iv_a and Iv_b are the L, a and b components of the visible-light view, N is the number of rows of the visible-light view, and M is the number of columns of the visible-light view;
the visible-light saliency map is obtained by the following formula:
Sv(i, j) = [Iv_L(i, j) − m_L]² + [Iv_a(i, j) − m_a]² + [Iv_b(i, j) − m_b]²
where Sv(i, j) is the visible-light saliency of a pixel, and the saliencies of all pixels constitute the visible-light saliency map.
5. The localization method based on binocular vision according to claim 3, characterized in that step S3 specifically comprises:
Step S3.1: comparing the visible-light saliency map and the near-infrared saliency map against their respective saliency means;
Step S3.2: segmenting the visible-light saliency map and the near-infrared saliency map according to the comparison results to obtain the candidate regions in the left view and the right view.
6. The localization method based on binocular vision according to claim 2, characterized in that step S5 specifically comprises:
Step S5.1: performing disparity computation on the target region selected in the left view and the right view to obtain the distance of the target region from the acquisition point;
Step S5.2: converting the visible-light view into a visible-light view based on the RGB color model;
Step S5.3: transforming the RGB-based visible-light view into the HSV space, extracting the V component, and matching the edge features of the V component to obtain the shape of the target region.
7. The localization method based on binocular vision according to claim 1, characterized in that the method further comprises step S7: analyzing the distance and the shape of the target region to determine whether the width or the height of the target region is dominant.
8. A positioning device based on binocular vision, characterized by comprising:
an image acquisition device for acquiring a left view and a right view of a scene;
a saliency image acquisition module for performing saliency computation on the left view and the right view to obtain a saliency image corresponding to the left view and a saliency image corresponding to the right view;
a candidate-region determination module for comparing the saliency image corresponding to the left view and the saliency image corresponding to the right view against their respective saliency means, and determining candidate regions in the left view and the right view according to the comparison results;
a target-region determination module for matching the candidate region of the left view with the candidate region of the right view to determine the target region in the left view and the right view;
a target-region locating module for performing disparity computation on the target region in the left view and the right view to determine the position of the target region.
9. The positioning device based on binocular vision according to claim 8, characterized in that one of the left view and the right view is a visible-light view and the other is a near-infrared view.
10. The positioning device based on binocular vision according to claim 9, characterized in that the saliency image acquisition module smooths the visible-light view by difference-of-Gaussians filtering; converts the smoothed visible-light view into a visible-light view based on the Lab color model; computes the visible-light saliency map from the color means of the L, a and b components of the Lab-based visible-light view; divides the near-infrared view into mutually disjoint segmentation blocks p of 7*7 pixels, taking the pixel value of the central pixel of each block as the pixel value of that block; compares the diversity of different blocks according to the distance between their pixel values and the distance between their pixel positions; and computes the saliency of the current pixel by selecting the K pixels closest to it in the near-infrared view, repeating until the near-infrared saliency map is obtained.
CN201811259210.7A 2018-10-26 2018-10-26 Localization method and device based on binocular vision Pending CN109472826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811259210.7A CN109472826A (en) 2018-10-26 2018-10-26 Localization method and device based on binocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811259210.7A CN109472826A (en) 2018-10-26 2018-10-26 Localization method and device based on binocular vision

Publications (1)

Publication Number Publication Date
CN109472826A true CN109472826A (en) 2019-03-15

Family

ID=65666063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811259210.7A Pending CN109472826A (en) 2018-10-26 2018-10-26 Localization method and device based on binocular vision

Country Status (1)

Country Link
CN (1) CN109472826A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102435174A (en) * 2011-11-01 2012-05-02 清华大学 Method and device for detecting barrier based on hybrid binocular vision
CN106780476A (en) * 2016-12-29 2017-05-31 杭州电子科技大学 A kind of stereo-picture conspicuousness detection method based on human-eye stereoscopic vision characteristic
CN108573221A (en) * 2018-03-28 2018-09-25 重庆邮电大学 A kind of robot target part conspicuousness detection method of view-based access control model

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583342A (en) * 2020-05-14 2020-08-25 中国科学院空天信息创新研究院 Target rapid positioning method and device based on binocular vision
CN111583342B (en) * 2020-05-14 2024-02-23 中国科学院空天信息创新研究院 Target rapid positioning method and device based on binocular vision
CN113834571A (en) * 2020-06-24 2021-12-24 杭州海康威视数字技术股份有限公司 Target temperature measurement method, device and temperature measurement system
WO2021259365A1 (en) * 2020-06-24 2021-12-30 杭州海康威视数字技术股份有限公司 Target temperature measurement method and apparatus, and temperature measurement system
CN112504472A (en) * 2020-11-26 2021-03-16 浙江大华技术股份有限公司 Thermal imager, thermal imaging method and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20190315)