CN107025665A - Image defogging method and device combining depth information, and electronic device - Google Patents

Image defogging method and device combining depth information, and electronic device

Info

Publication number
CN107025665A
CN107025665A CN201710138687.9A CN201710138687A
Authority
CN
China
Prior art keywords
image
scene
depth information
mist
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710138687.9A
Other languages
Chinese (zh)
Inventor
曾元清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710138687.9A priority Critical patent/CN107025665A/en
Publication of CN107025665A publication Critical patent/CN107025665A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

The invention discloses an image defogging method combining depth information, for processing scene data collected by an electronic device. The scene data includes a scene master image. The image defogging method includes: obtaining depth information of the scene according to the scene data; processing the depth information to obtain a fog and/or haze (fog/haze) concentration distribution map; and processing the scene master image and the fog/haze concentration distribution map to obtain a defogged image. The invention also discloses an image defogging device and an electronic device. Based on the depth information of the scene, the image defogging method, image defogging device, and electronic device combining depth information of embodiments of the present invention apply different degrees of defogging to scene regions at different depths, improving the defogging effect.

Description

Image defogging method and device combining depth information, and electronic device
Technical field
The present invention relates to image processing technology, and more particularly to an image defogging method and device combining depth information, and an electronic device.
Background art
Fog and/or haze (fog/haze) is often non-uniformly distributed, yet existing image defogging methods can only remove the influence of uniformly distributed fog/haze on image clarity; they cannot intelligently apply defogging according to the fog/haze level at different scene depths.
Summary of the invention
The present invention aims to solve at least one of the technical problems existing in the prior art. To this end, the present invention provides an image defogging method and device combining depth information, and an electronic device.
The image defogging method combining depth information of embodiments of the present invention is used to process scene data collected by an electronic device. The scene data includes a scene master image, and the image defogging method includes the following steps:
obtaining depth information of the scene according to the scene data;
processing the depth information to obtain a fog/haze concentration distribution map; and
processing the scene master image and the fog/haze concentration distribution map to obtain a defogged image.
In some embodiments, the scene data includes a depth image corresponding to the scene master image, and the step of obtaining the depth information of the scene according to the scene data includes the following sub-step:
processing the depth image to obtain the depth information of the scene.
In some embodiments, the scene data includes a scene sub-image corresponding to the scene master image, and the step of obtaining the depth information of the scene according to the scene data includes the following sub-step:
processing the scene master image and the scene sub-image to obtain the depth information of the scene.
In some embodiments, the step of processing the scene master image and the fog/haze concentration distribution map to obtain a defogged image includes the following sub-steps:
obtaining grayscale information of the scene master image and intensity information of the fog/haze concentration distribution map; and
processing the grayscale information and the intensity information to obtain the defogged image.
The image defogging device combining depth information of embodiments of the present invention is used to process scene data collected by an electronic device. The scene data includes a scene master image, and the image defogging device includes an acquisition module, a first processing module, and a second processing module. The acquisition module is used to obtain depth information of the scene according to the scene data; the first processing module is used to process the depth information to obtain a fog/haze concentration distribution map; and the second processing module is used to process the scene master image and the fog/haze concentration distribution map to obtain a defogged image.
In some embodiments, the scene data includes a depth image corresponding to the scene master image, and the acquisition module includes a first processing unit, which is used to process the depth image to obtain the depth information of the scene.
In some embodiments, the scene data includes a scene sub-image corresponding to the scene master image, and the acquisition module includes a second processing unit, which is used to process the scene master image and the scene sub-image to obtain the depth information of the scene.
In some embodiments, the second processing module includes an acquiring unit and a third processing unit. The acquiring unit is used to obtain grayscale information of the scene master image and intensity information of the fog/haze concentration distribution map; the third processing unit is used to process the grayscale information and the intensity information to obtain the defogged image.
The electronic device of embodiments of the present invention includes an imaging device and the above image defogging device, the image defogging device being electrically connected to the imaging device.
In some embodiments, the electronic device includes a mobile phone and/or a tablet computer.
In some embodiments, the imaging device includes a main camera and a secondary camera.
In some embodiments, the imaging device includes a depth camera.
Based on the depth information of the scene, the image defogging method, image defogging device, and electronic device combining depth information of embodiments of the present invention apply different degrees of defogging to scene regions at different depths, improving the defogging effect.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from the description, or will be learned by practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow diagram of the image defogging method combining depth information according to an embodiment of the present invention;
Fig. 2 is a functional block diagram of the electronic device according to an embodiment of the present invention;
Fig. 3 is a schematic view of the image defogging method combining depth information according to an embodiment of the present invention;
Fig. 4 is a schematic view of the image defogging method combining depth information according to some embodiments of the present invention;
Fig. 5 is a schematic view of the image defogging method combining depth information according to some embodiments of the present invention;
Fig. 6 is a flow diagram of the image defogging method combining depth information according to some embodiments of the present invention;
Fig. 7 is a functional block diagram of the electronic device according to some embodiments of the present invention;
Fig. 8 is a flow diagram of the image defogging method combining depth information according to some embodiments of the present invention;
Fig. 9 is a functional block diagram of the electronic device according to some embodiments of the present invention;
Fig. 10 is a flow diagram of the image defogging method combining depth information according to some embodiments of the present invention;
Fig. 11 is a functional block diagram of the electronic device according to some embodiments of the present invention; and
Fig. 12 is a schematic view of the image defogging method combining depth information according to some embodiments of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the accompanying drawings are exemplary and intended only to explain the present invention; they should not be construed as limiting the present invention.
Referring to Figs. 1 and 2, the image defogging method combining depth information of embodiments of the present invention is used to process scene data collected by an electronic device 100. The image defogging method includes the following steps:
S11: obtaining depth information of the scene according to the scene data;
S12: processing the depth information to obtain a fog/haze concentration distribution map; and
S13: processing the scene master image and the fog/haze concentration distribution map to obtain a defogged image.
The image defogging method of embodiments of the present invention may be implemented by the image defogging device 10 of embodiments of the present invention. The image defogging device 10 includes an acquisition module 11, a first processing module 12, and a second processing module 13. Step S11 may be implemented by the acquisition module 11, step S12 by the first processing module 12, and step S13 by the second processing module 13.
In other words, the acquisition module 11 is used to obtain the depth information of the scene according to the scene data; the first processing module 12 is used to process the depth information to obtain the fog/haze concentration distribution map; and the second processing module 13 is used to process the scene master image and the fog/haze concentration distribution map to obtain the defogged image.
The image defogging device 10 of embodiments of the present invention may be applied to the electronic device 100 of embodiments of the present invention. In other words, the electronic device 100 of embodiments of the present invention includes the image defogging device 10 of embodiments of the present invention. The electronic device 100 of embodiments of the present invention also includes an imaging device 20, the image defogging device 10 being electrically connected to the imaging device 20.
It can be understood that fog/haze is typically non-uniformly distributed, but existing image defogging methods generally apply global contrast stretching to the scene master image collected by the imaging device 20. Such methods can therefore only handle uniformly distributed fog/haze and cannot achieve a good defogging effect for non-uniformly distributed fog/haze. Since the concentration of fog/haze typically increases with depth, the image defogging method of embodiments of the present invention applies different degrees of defogging to scene regions at different depths based on the depth information of the scene, achieving a better defogging effect for non-uniformly distributed fog/haze and improving image quality.
Referring to Fig. 3, specifically, in the defogged image obtained by global contrast stretching, the shallower part of the scene master image, the flower bed, is clearer and the defogging effect is good; however, the deeper part, the buildings, has lower color contrast and clarity, and the defogging effect is poor. After the scene master image is defogged using the image defogging method of embodiments of the present invention, the contrast and clarity of the defogged image are higher, and the defogging effect is better.
Referring to Figs. 4 and 5, further, after the depth information of the scene is obtained, the fog/haze concentration distribution map can be calculated with the following formula: t(x) = e^(-βd(x)), where x is a pixel in the scene master image, t(x) is the atmospheric transmissivity, d(x) is the depth corresponding to pixel x, e is the base of the natural logarithm, and β is the atmospheric scattering coefficient, a constant. The larger the depth d(x), the smaller the atmospheric transmissivity t(x), and the larger the fog/haze concentration 1 − t(x).
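As a minimal illustrative sketch (not part of the patent text), the transmissivity and fog/haze concentration maps implied by this formula could be computed as follows; the depth units and the value of beta are assumptions:

```python
import numpy as np

def fog_concentration_map(depth, beta=0.05):
    """Compute atmospheric transmissivity t(x) = exp(-beta * d(x))
    and fog/haze concentration 1 - t(x) from a per-pixel depth map.

    depth : 2-D array of scene depth per pixel (e.g. in meters)
    beta  : atmospheric scattering coefficient (illustrative constant)
    """
    t = np.exp(-beta * depth.astype(np.float64))  # transmissivity, in (0, 1]
    concentration = 1.0 - t                       # denser fog at larger depth
    return t, concentration
```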
Referring to Fig. 6, in some embodiments, the scene data includes a depth image corresponding to the scene master image, and step S11 of obtaining the depth information of the scene according to the scene data includes the following sub-step:
S111: processing the depth image to obtain the depth information of the scene.
Referring to Fig. 7, in some embodiments, the acquisition module 11 includes a first processing unit 111, and step S111 may be implemented by the first processing unit 111.
In other words, the first processing unit 111 is used to process the depth image to obtain the depth information of the scene.
It can be understood that the scene data includes a depth image corresponding to the scene master image. The scene master image is an RGB color image, and the depth image contains the depth information of each person or object in the current scene. Since the color information of the scene master image corresponds one-to-one to the depth information of the depth image, the image depth information of the scene master image can be obtained directly from the depth image.
In some embodiments, the imaging device 20 includes a depth camera, which may be used to obtain the depth image. Depth cameras include depth cameras based on structured-light ranging and depth cameras based on TOF ranging.
Specifically, a depth camera based on structured-light ranging includes a camera and a projector. The projector projects a light structure of a certain pattern into the current scene to be captured, forming on the surface of each person or object in the scene a three-dimensional light-stripe image modulated by the people or objects in the scene; the camera then captures this light-stripe image to obtain a two-dimensional distorted light-stripe image. The degree of distortion of the stripes depends on the relative position between the projector and the camera and on the surface profile or height of each person or object in the current scene to be captured. Since the relative position between the camera and the projector in the depth camera is fixed, the coordinates of the two-dimensional distorted light-stripe image reproduce the three-dimensional surface profile of each person or object in the scene, from which the image depth information can be obtained. Structured-light ranging has high resolution and measurement accuracy and can improve the accuracy of the obtained image depth information.
A depth camera based on TOF (time of flight) ranging records, through a sensor, the phase change between the modulated infrared light emitted from a light-emitting unit toward the object and the light reflected back from the object; based on the speed of light, the depth of the whole scene can be obtained in real time within the range of one modulation wavelength. Since the people or objects in the current scene to be captured are at different depths, the round-trip times of the modulated infrared light differ, and the depth information of the scene can thus be obtained. A depth camera based on TOF ranging is not affected by the grayscale or texture of the object surface when calculating the image depth information, and it can calculate the depth information quickly, with very high real-time performance.
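As an illustrative aside rather than patent content, the depth reported by a continuous-wave TOF sensor is commonly derived from the measured phase shift as d = c·Δφ / (4π·f_mod); a minimal sketch under that assumption, with an arbitrary modulation frequency:

```python
import math

def tof_depth(phase_shift_rad, modulation_freq_hz=20e6, c=299_792_458.0):
    """Estimate distance from the phase shift of modulated infrared light.

    phase_shift_rad    : measured phase difference between emitted and received light
    modulation_freq_hz : modulation frequency of the light source (illustrative value)
    """
    # One full 2*pi phase wrap corresponds to half the modulation wavelength,
    # hence the factor of 4*pi in the denominator.
    return c * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)
```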
Referring to Fig. 8, in some embodiments, the scene data includes a scene sub-image corresponding to the scene master image, and step S11 of obtaining the depth information of the scene according to the scene data includes the following sub-step:
S112: processing the scene master image and the scene sub-image to obtain the depth information of the scene.
Referring to Fig. 9, in some embodiments, the acquisition module 11 includes a second processing unit 112, and step S112 may be implemented by the second processing unit 112.
In other words, the second processing unit 112 is used to process the scene master image and the scene sub-image to obtain the depth information of the scene.
It can be understood that the depth information can also be obtained by binocular stereo vision ranging, in which case the scene data includes a scene master image and a scene sub-image, both RGB color images. Binocular stereo vision ranging images the same scene from different positions with two cameras of identical specification to obtain a stereo image pair of the scene, matches the corresponding image points of the stereo pair by an algorithm to compute the disparity, and finally recovers the depth information by triangulation. In this way, the image depth information of the current scene can be obtained by matching the stereo pair formed by the scene master image and the scene sub-image, as sketched after the next paragraph.
In some embodiments, the imaging device 20 includes a main camera and a secondary camera.
It can be understood that when the depth information is obtained by binocular stereo vision ranging, two cameras are needed for imaging. The scene master image may be captured by the main camera, and the scene sub-image by the secondary camera, the main camera and the secondary camera having identical specifications. In this way, the image depth information of the current scene is obtained from the stereo image pair captured by the main camera and the secondary camera.
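Purely as an illustrative sketch (the patent does not prescribe a matching algorithm), disparity-based depth recovery from a rectified master/secondary image pair could look as follows using OpenCV's block matcher; focal_px and baseline_m stand for calibration parameters that are assumed here:

```python
import cv2
import numpy as np

def stereo_depth(master_gray, sub_gray, focal_px, baseline_m):
    """Estimate a depth map from a rectified stereo pair (master/secondary camera).

    master_gray, sub_gray : rectified 8-bit grayscale images of identical size
    focal_px              : focal length in pixels (from calibration)
    baseline_m            : distance between the two cameras in meters
    """
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(master_gray, sub_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # invalid / unmatched pixels
    depth = focal_px * baseline_m / disparity   # triangulation: Z = f * B / d
    return depth
```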
Referring to Fig. 10, in some embodiments, step S13 of processing the scene master image and the fog/haze concentration distribution map to obtain the defogged image includes the following sub-steps:
S131: obtaining the grayscale information of the scene master image and the intensity information of the fog/haze concentration distribution map; and
S132: processing the grayscale information and the intensity information to obtain the defogged image.
Referring to Fig. 11, in some embodiments, the second processing module 13 includes an acquiring unit 131 and a third processing unit 132. Step S131 may be implemented by the acquiring unit 131, and step S132 by the third processing unit 132.
In other words, the acquiring unit 131 is used to obtain the grayscale information of the scene master image and the intensity information of the fog/haze concentration distribution map; the third processing unit 132 is used to process the grayscale information and the intensity information to obtain the defogged image.
In this way, based on the grayscale information of the scene master image and the intensity information of the fog/haze concentration distribution map, scene regions at different depths receive different degrees of defogging, improving the defogging effect of the image.
Referring to Fig. 12, specifically, the formation model of a scene master image with fog/haze is: I(x) = J(x)t(x) + A[1 − t(x)], where I(x) is the scene master image, J(x) is the ideal defogged image, and A is the atmospheric light value over the whole scene, i.e. the intensity (gray value) of the brightest pixel in the scene master image. After the depth information of the scene is obtained, the atmospheric transmissivity t(x) can be obtained from t(x) = e^(−βd(x)), and 1 − t(x) is the intensity information of the fog/haze concentration distribution map. The atmospheric light value A corresponds to the pixel with the largest gray value in the scene master image; the gray value Y of each pixel can be computed from the R, G, B data of the scene master image as Y = 0.299R + 0.587G + 0.114B, and the maximum Y over all pixels is taken as the atmospheric light value A. Since I(x) is known, being collected directly by the imaging device 20, with I(x), the atmospheric light value A, and the atmospheric transmissivity t(x) known, J(x) can be solved for, yielding the final defogged image.
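A minimal end-to-end sketch of this recovery, assuming the scene master image is a float RGB array in [0, 1] and the depth map is aligned with it; the lower bound t_min for numerical stability and the value of beta are illustrative additions not stated in the patent text:

```python
import numpy as np

def defog(master_rgb, depth, beta=0.05, t_min=0.1):
    """Recover J(x) from I(x) = J(x) t(x) + A [1 - t(x)].

    master_rgb : H x W x 3 float array in [0, 1], the scene master image I(x)
    depth      : H x W float array, per-pixel scene depth d(x)
    """
    # Gray value of each pixel: Y = 0.299 R + 0.587 G + 0.114 B
    y = (0.299 * master_rgb[..., 0] +
         0.587 * master_rgb[..., 1] +
         0.114 * master_rgb[..., 2])
    A = master_rgb.reshape(-1, 3)[np.argmax(y)]   # atmospheric light: brightest pixel

    t = np.exp(-beta * depth)                     # transmissivity from depth
    t = np.clip(t, t_min, 1.0)[..., None]         # avoid division by ~0 (illustrative)

    J = (master_rgb - A) / t + A                  # invert the scattering model
    return np.clip(J, 0.0, 1.0)
```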
The electronic device 100 also includes a housing, a memory, a circuit board, and a power supply circuit. The circuit board is arranged inside the space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit is used to supply power to each circuit or component of the electronic device 100; the memory is used to store executable program code; and the image defogging device 10 reads the executable program code stored in the memory and runs the program corresponding to the executable program code, so as to implement the image defogging method of any of the above embodiments of the present invention.
In the description of this specification, reference to the terms "an embodiment", "some embodiments", "a schematic embodiment", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes alternative implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium, then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then storing it in a computer memory.
It should be understood that various parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented with any one of, or a combination of, the following techniques well known in the art: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art will understand that all or part of the steps carried out by the above embodiment methods may be completed by a program instructing relevant hardware, the program being stored in a computer-readable storage medium and including, when executed, one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist physically on its own, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the present invention.

Claims (12)

1. An image defogging method combining depth information, for processing scene data collected by an electronic device, characterized in that the scene data includes a scene master image, and the image defogging method includes the following steps:
obtaining depth information of the scene according to the scene data;
processing the depth information to obtain a fog/haze concentration distribution map; and
processing the scene master image and the fog/haze concentration distribution map to obtain a defogged image.
2. The image defogging method according to claim 1, characterized in that the scene data includes a depth image corresponding to the scene master image, and the step of obtaining the depth information of the scene according to the scene data includes the following sub-step:
processing the depth image to obtain the depth information of the scene.
3. The image defogging method according to claim 1, characterized in that the scene data includes a scene sub-image corresponding to the scene master image, and the step of obtaining the depth information of the scene according to the scene data includes the following sub-step:
processing the scene master image and the scene sub-image to obtain the depth information of the scene.
4. The image defogging method according to claim 1, characterized in that the step of processing the scene master image and the fog/haze concentration distribution map to obtain a defogged image includes the following sub-steps:
obtaining grayscale information of the scene master image and intensity information of the fog/haze concentration distribution map; and
processing the grayscale information and the intensity information to obtain the defogged image.
5. An image defogging device combining depth information, for processing scene data collected by an electronic device, characterized in that the scene data includes a scene master image, and the image defogging device includes:
an acquisition module, used to obtain depth information of the scene according to the scene data;
a first processing module, used to process the depth information to obtain a fog/haze concentration distribution map; and
a second processing module, used to process the scene master image and the fog/haze concentration distribution map to obtain a defogged image.
6. The image defogging device according to claim 5, characterized in that the scene data includes a depth image corresponding to the scene master image, and the acquisition module includes:
a first processing unit, used to process the depth image to obtain the depth information of the scene.
7. The image defogging device according to claim 5, characterized in that the scene data includes a scene sub-image corresponding to the scene master image, and the acquisition module includes:
a second processing unit, used to process the scene master image and the scene sub-image to obtain the depth information of the scene.
8. The image defogging device according to claim 5, characterized in that the second processing module includes:
an acquiring unit, used to obtain grayscale information of the scene master image and intensity information of the fog/haze concentration distribution map; and
a third processing unit, used to process the grayscale information and the intensity information to obtain the defogged image.
9. An electronic device, characterized in that the electronic device includes:
an imaging device; and
an image defogging device according to any one of claims 5 to 8, the image defogging device being electrically connected to the imaging device.
10. The electronic device according to claim 9, characterized in that the electronic device includes a mobile phone and/or a tablet computer.
11. The electronic device according to claim 9, characterized in that the imaging device includes a main camera and a secondary camera.
12. The electronic device according to claim 9, characterized in that the imaging device includes a depth camera.
CN201710138687.9A 2017-03-09 2017-03-09 Image defogging method and device combining depth information, and electronic device Pending CN107025665A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710138687.9A CN107025665A (en) 2017-03-09 2017-03-09 Image defogging method and device combining depth information, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710138687.9A CN107025665A (en) 2017-03-09 2017-03-09 Image defogging method and device combining depth information, and electronic device

Publications (1)

Publication Number Publication Date
CN107025665A true CN107025665A (en) 2017-08-08

Family

ID=59525293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710138687.9A Pending CN107025665A (en) 2017-03-09 2017-03-09 With reference to the image defogging method and device and electronic installation of depth information

Country Status (1)

Country Link
CN (1) CN107025665A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411774A (en) * 2011-08-08 2012-04-11 安防科技(中国)有限公司 Processing method, device and system based on single-image defogging
CN103188433A (en) * 2011-12-30 2013-07-03 株式会社日立制作所 Image demisting device and image demisting method
CN104408757A (en) * 2014-11-07 2015-03-11 吉林大学 Method and system for adding haze effect to driving scene video
CN205230349U (en) * 2015-12-24 2016-05-11 北京万集科技股份有限公司 Traffic speed of a motor vehicle detects and snapshot system based on TOF camera
CN106327439A (en) * 2016-08-16 2017-01-11 华侨大学 Rapid fog and haze image sharpening method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
张东香 et al.: "Research on an image clarity enhancement algorithm based on a binocular vision algorithm", Journal of Shandong Normal University (Natural Science Edition) *
王多超: "Research on image defogging algorithms and their applications", China Masters' Theses Full-text Database, Information Science and Technology *
范风兵 et al.: "A fast single-image defogging algorithm based on fog depth", Journal of Sichuan University of Science & Engineering (Natural Science Edition) *

Similar Documents

Publication Publication Date Title
CN106993112A (en) Background-blurring method and device and electronic installation based on the depth of field
CN106909911A (en) Image processing method, image processing apparatus and electronic installation
CN106991654A (en) Human body beautification method and apparatus and electronic installation based on depth
CN105279372B (en) A kind of method and apparatus of determining depth of building
CN103914802B (en) For the image selection using the depth information imported and the System and method for of masking
CN106851238A (en) Method for controlling white balance, white balance control device and electronic installation
CN107018323B (en) Control method, control device and electronic device
CN106991688A (en) Human body tracing method, human body tracking device and electronic installation
US20180262737A1 (en) Scan colorization with an uncalibrated camera
US10482347B2 (en) Inspection of the contoured surface of the undercarriage of a motor vehicle
CN111462128B (en) Pixel-level image segmentation system and method based on multi-mode spectrum image
CN107016348A (en) With reference to the method for detecting human face of depth information, detection means and electronic installation
CN105450933B (en) The restoring means of blurred picture in a kind of aero-optical effect
CN106991378A (en) Facial orientation detection method, detection means and electronic installation based on depth
CN110956642A (en) Multi-target tracking identification method, terminal and readable storage medium
CN106997457A (en) Human limbs recognition methods, human limbs identifying device and electronic installation
EP2600314A1 (en) Simulation of three-dimensional (3d) cameras
CN112070889A (en) Three-dimensional reconstruction method, device and system, electronic equipment and storage medium
WO2021181647A1 (en) Image processing device, image processing method, and computer-readable medium
CN106991376A (en) With reference to the side face verification method and device and electronic installation of depth information
CN107025636A (en) With reference to the image defogging method and device and electronic installation of depth information
CN106991379A (en) Human body skin recognition methods and device and electronic installation with reference to depth information
CN111866490A (en) Depth image imaging system and method
US20230245396A1 (en) System and method for three-dimensional scene reconstruction and understanding in extended reality (xr) applications
KR101707939B1 (en) Device and method for obtaining accurate 3d information using depth sensor and infrared shading cues

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170808

RJ01 Rejection of invention patent application after publication