CN105894466A - Image data processing method and apparatus and terminal device - Google Patents


Publication number
CN105894466A
CN105894466A (application CN201610188290.6A; granted publication CN105894466B)
Authority
CN
China
Prior art keywords
laser
atmospheric
image data
sub
fog dispersal
Prior art date
Legal status
Granted
Application number
CN201610188290.6A
Other languages
Chinese (zh)
Other versions
CN105894466B (en)
Inventor
李兵
王克强
谢志宇
孔志强
Current Assignee
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Mobile Communications Technology Co Ltd filed Critical Hisense Mobile Communications Technology Co Ltd
Priority to CN201610188290.6A priority Critical patent/CN105894466B/en
Publication of CN105894466A publication Critical patent/CN105894466A/en
Application granted granted Critical
Publication of CN105894466B publication Critical patent/CN105894466B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/73

Landscapes

  • Image Processing (AREA)

Abstract

An embodiment of the invention provides an image data processing method and apparatus and a terminal device. The terminal device comprises a laser sensor, a camera, and an image processing assembly, wherein the laser sensor is used for emitting and receiving laser light. The image processing assembly comprises a characteristic information obtaining module for obtaining characteristic information of the laser light emitted and received by the laser sensor, an atmospheric information detection module for calculating current atmospheric information according to the characteristic information, a shooting module for performing a shooting operation and obtaining original image data, and a dehazing module for dehazing the original image data according to the atmospheric information to obtain dehazed image data. The quality of dehazing, the sharpness of the image, and the color fidelity of the image are all improved.

Description

Image data processing method, apparatus, and terminal device
Technical field
The present invention relates to the technical field of terminal devices, and in particular to an image data processing method, an image data processing apparatus, and a terminal device.
Background art
With the rapid development of science and technology, terminal devices have become widely available, and people use them more and more in work, study, daily communication, and other aspects of life.
At present, terminal devices are commonly equipped with cameras for taking photos and videos.
In areas where the atmospheric composition is complex, such as in haze or sandstorm weather, the air contains excessive liquid and solid particles, and these substances affect the light propagating through them in various ways.
On the one hand, when light propagates among these substances it may be reflected, scattered, or diffracted, so that its direction changes and adjacent rays coming from an object mix together; when shooting under such conditions, the image of the object received by the camera is blurred.
On the other hand, the light reflected or emitted by an object is a mixture of light of different wavelengths, and different mixing ratios correspond to different colors.
When light propagates among these substances, light of different wavelengths is absorbed to different degrees, so the mixing ratio of the light received by the camera changes; that is, the character of the light changes and the colors of the image of the object received by the camera are distorted.
The effect is especially evident under haze now that high-resolution sensors, such as those of 16 megapixels, 21 megapixels, or more, are increasingly common.
Summary of the invention
In view of the above problems of image blurring and color distortion, embodiments of the present invention propose an image data processing method, a corresponding image data processing apparatus, and a terminal device.
To solve the above problems, an embodiment of the invention discloses a terminal device comprising a laser sensor, a camera, and an image processing assembly, wherein:
the laser sensor is configured to emit and receive laser light;
the image processing assembly comprises:
a characteristic information obtaining module, configured to obtain characteristic information of the laser light emitted and received by the laser sensor;
an atmospheric information detection module, configured to calculate current atmospheric information according to the characteristic information;
a shooting module, configured to perform a shooting operation and obtain original image data;
a dehazing module, configured to dehaze the original image data according to the atmospheric information and obtain dehazed image data.
Preferably, the laser sensor comprises a laser emitter, a beam splitter, a laser receiver, and a laser ring-down optical cavity, wherein:
the laser emitter is configured to emit laser light;
the beam splitter is configured to split the laser light into a first sub-laser beam and a second sub-laser beam; the first sub-laser beam exits into the air outside the terminal device, and the second sub-laser beam is guided into the laser ring-down optical cavity;
the laser receiver is configured to receive the first sub-laser beam;
the laser ring-down optical cavity communicates with the air outside the terminal device.
Preferably, the characteristic information of the first sub-laser beam comprises the time difference between emission and reception, and the characteristic information of the second sub-laser beam comprises the total laser energy, the laser sub-energy collected at the sidewall of the laser ring-down optical cavity during oscillation, and the number of oscillations;
the atmospheric information detection module comprises:
a focusing distance calculation submodule, configured to calculate the focusing distance according to the time difference;
an atmospheric scattering coefficient calculation submodule, configured to calculate, as the atmospheric scattering coefficient, the ratio between the laser sub-energy and a target parameter, the target parameter being the product of the number of oscillations and the total laser energy.
Preferably, the shooting module comprises:
a focusing submodule, configured to focus according to the focusing distance;
an original image data obtaining submodule, configured to perform a shooting operation when focusing succeeds and obtain original image data.
Preferably, the dehazing module comprises:
an atmospheric transmittance calculation submodule, configured to calculate the atmospheric transmittance using the atmospheric scattering coefficient and the focusing distance;
a model inversion submodule, configured to invert the original image data according to an atmospheric transmission model and obtain dehazed image data from which the influence of the atmospheric transmittance has been removed.
Preferably, the dehazing module further comprises:
a coefficient judgment submodule, configured to judge whether the atmospheric transmittance is below a preset coefficient threshold and, if so, to invoke the model inversion submodule.
Preferably, the model inversion submodule comprises:
an atmospheric light intensity detection unit, configured to calculate the atmospheric light intensity from the original image data;
a data input unit, configured to input the pixels of the original image data, the atmospheric transmittance, and the atmospheric light intensity into a preset atmospheric transmission model;
a pixel inversion unit, configured to invert the atmospheric transmission model and obtain the pixels of the dehazed image data.
Preferably, the image processing assembly further comprises:
an optimization module, configured to perform parameter calibration and/or format conversion on the dehazed image data to obtain target image data.
An embodiment of the invention also discloses a method for processing image data on a terminal device, comprising:
obtaining characteristic information of the laser light emitted and received by a laser sensor;
calculating current atmospheric information according to the characteristic information;
performing a shooting operation to obtain original image data;
dehazing the original image data according to the atmospheric information to obtain dehazed image data.
Preferably, the laser sensor has a beam splitter and a laser ring-down optical cavity, the laser ring-down optical cavity communicates with the air outside the terminal device, and the laser light is split by the beam splitter into a first sub-laser beam and a second sub-laser beam;
wherein the first sub-laser beam exits into the air outside the terminal device and the second sub-laser beam is guided into the laser ring-down optical cavity;
the characteristic information of the first sub-laser beam comprises the time difference between emission and reception; the characteristic information of the second sub-laser beam comprises the total laser energy, the laser sub-energy collected at the sidewall of the laser ring-down optical cavity during oscillation, and the number of oscillations;
the step of calculating current atmospheric information according to the characteristic information comprises:
calculating the focusing distance according to the time difference;
calculating, as the atmospheric scattering coefficient, the ratio between the laser sub-energy and a target parameter, the target parameter being the product of the number of oscillations and the total laser energy.
Preferably, the step of performing a shooting operation to obtain original image data comprises:
focusing according to the focusing distance;
when focusing succeeds, performing the shooting operation to obtain original image data.
Preferably, the step of dehazing the original image data according to the atmospheric information to obtain dehazed image data comprises:
calculating the atmospheric transmittance using the atmospheric scattering coefficient and the focusing distance;
inverting the original image data according to an atmospheric transmission model to obtain dehazed image data from which the influence of the atmospheric transmittance has been removed.
Preferably, the step of dehazing the original image data according to the atmospheric scattering coefficient and the focusing distance to obtain dehazed image data further comprises:
judging whether the atmospheric transmittance is below a preset coefficient threshold;
if so, performing the step of inverting the original image data according to the atmospheric transmission model to obtain dehazed image data from which the influence of the atmospheric transmittance has been removed.
Preferably, the step of inverting the original image data according to the atmospheric transmission model to obtain dehazed image data from which the influence of the atmospheric transmittance has been removed comprises:
calculating the atmospheric light intensity from the original image data;
inputting the pixels of the original image data, the atmospheric transmittance, and the atmospheric light intensity into a preset atmospheric transmission model;
inverting the atmospheric transmission model to obtain the pixels of the dehazed image data.
Preferably, the method further comprises:
performing parameter calibration and/or format conversion on the dehazed image data to obtain target image data.
An embodiment of the invention also discloses an image data processing apparatus, comprising:
a characteristic information obtaining module, configured to obtain characteristic information of the laser light emitted and received by the terminal device;
an atmospheric information detection module, configured to calculate current atmospheric information according to the characteristic information;
a shooting module, configured to perform a shooting operation and obtain original image data;
a dehazing module, configured to dehaze the original image data according to the atmospheric information and obtain dehazed image data.
Preferably, the laser sensor has a beam splitter and a laser ring-down optical cavity, the laser ring-down optical cavity communicates with the air outside the terminal device, and the laser light is split by the beam splitter into a first sub-laser beam and a second sub-laser beam;
wherein the first sub-laser beam exits into the air outside the terminal device and the second sub-laser beam is guided into the laser ring-down optical cavity;
the characteristic information of the first sub-laser beam comprises the time difference between emission and reception; the characteristic information of the second sub-laser beam comprises the total laser energy, the laser sub-energy collected at the sidewall of the laser ring-down optical cavity during oscillation, and the number of oscillations;
the atmospheric information detection module comprises:
a focusing distance calculation submodule, configured to calculate the focusing distance according to the time difference;
an atmospheric scattering coefficient calculation submodule, configured to calculate, as the atmospheric scattering coefficient, the ratio between the laser sub-energy and a target parameter, the target parameter being the product of the number of oscillations and the total laser energy.
Preferably, the shooting module comprises:
a focusing submodule, configured to focus according to the focusing distance;
an original image data obtaining submodule, configured to perform a shooting operation when focusing succeeds and obtain original image data.
Preferably, the dehazing module comprises:
an atmospheric transmittance calculation submodule, configured to calculate the atmospheric transmittance using the atmospheric scattering coefficient and the focusing distance;
a model inversion submodule, configured to invert the original image data according to an atmospheric transmission model and obtain dehazed image data from which the influence of the atmospheric transmittance has been removed.
Preferably, the dehazing module further comprises:
a coefficient judgment submodule, configured to judge whether the atmospheric transmittance is below a preset coefficient threshold and, if so, to invoke the model inversion submodule.
Preferably, the model inversion submodule comprises:
an atmospheric light intensity detection unit, configured to calculate the atmospheric light intensity from the original image data;
a data input unit, configured to input the pixels of the original image data, the atmospheric transmittance, and the atmospheric light intensity into a preset atmospheric transmission model;
a pixel inversion unit, configured to invert the atmospheric transmission model and obtain the pixels of the dehazed image data.
Preferably, the apparatus further comprises:
an optimization module, configured to perform parameter calibration and/or format conversion on the dehazed image data to obtain target image data.
Embodiments of the present invention have the following advantages:
During the shooting phase, atmospheric information such as the atmospheric scattering coefficient is detected by laser in real time, and the shot original image data is dehazed accordingly. Since the state of the air may change from region to region and from time to time, measuring the atmospheric information in real time adapts to the current state of the air: a photo shot under a given atmospheric condition is dehazed based on atmospheric information that matches that condition. This improves the quality of the dehazing; it effectively eliminates the mixing of adjacent rays caused by the haze-induced changes in the direction of light, thereby improving image sharpness, and it effectively eliminates the change in the mixing ratio of light caused by the wavelength-dependent absorption introduced by haze, thereby improving the color fidelity of the image.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of an embodiment of an image data processing method of the present invention;
Fig. 2 is a schematic structural diagram of a laser sensor according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of an embodiment of an image data processing apparatus of the present invention;
Fig. 4 is a structural block diagram of an embodiment of a terminal device of the present invention.
Detailed description of the invention
To make the above objects, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Referring to Fig. 1, a flow chart of the steps of an embodiment of an image data processing method of the present invention is shown; the method may specifically comprise the following steps:
Step 101: obtain characteristic information of the laser light emitted and received by a laser sensor.
It should be noted that embodiments of the present invention can be applied to a terminal device capable of camera operations such as photographing and video recording, for example a mobile device such as a mobile phone, a tablet computer, or a smart watch, or a video monitor such as a dashboard camera or a site monitor (e.g. at a doorway or an automatic teller machine); embodiments of the present invention are not limited in this respect.
The terminal device is equipped with devices such as a camera and a laser sensor; for example, if the camera is a rear camera (i.e. mounted on the back of the terminal device), the laser sensor may be arranged beside the rear camera.
As shown in Fig. 2, the laser sensor 200 comprises a laser emitter 201, a beam splitter 202, a laser receiver 203, and a laser ring-down optical cavity 204, wherein:
the laser emitter 201 is configured to emit laser light 205, for example low-power light of infrared wavelength;
the beam splitter 202 is located at the outlet of the laser emitter 201 and splits the laser light 205 into a first sub-laser beam 2051 and a second sub-laser beam 2052;
the first sub-laser beam 2051 exits into the air outside the terminal device and is reflected by the shooting subject 206 (i.e. the object being photographed or recorded, such as a person or scenery);
the second sub-laser beam 2052 is deflected by an angle of 90° and guided into the laser ring-down optical cavity 204, where it travels back and forth in the direction of the dotted arrows;
the laser receiver 203 receives the first sub-laser beam 2051 reflected by the shooting subject 206;
the laser ring-down optical cavity 204 has one or more communication holes, such as holes 2041 and 2042, connecting it with the air outside the terminal device; outside air enters and leaves the cavity in the directions of the arrows at holes 2041 and 2042, maintaining a through flow so that the atmospheric composition inside the cavity stays consistent with the outside, and the detected atmospheric information therefore also stays consistent with the outside.
In addition, the operating system of the terminal device may be Android, iOS, Windows Phone, Windows, etc., on which a camera application can generally run to drive devices such as the camera and the laser sensor for photographing and video recording.
In an embodiment of the present invention, a user can issue a shooting instruction by tapping the photo control or video control in the interface of the camera application, or by pressing a physical button used for photographing. The terminal device then starts the shooting flow, drives the laser emitter to emit laser light, receives the laser signal, and extracts its characteristic information, i.e. the data characterizing the transmission features of the laser light during emission and reception.
Step 102: calculate current atmospheric information according to the characteristic information.
In practice, air comprises a mixture of gases together with some solid and liquid particles suspended in it. The gas components are mainly nitrogen (N2), oxygen (O2), argon (Ar), carbon dioxide (CO2), water vapor (H2O), ozone (O3), hydrogen (H2), helium (He), etc.; the suspended particles include smoke and dust (e.g. from sandstorms, incompletely burned vehicle exhaust, and smoke produced by combustion reactions), water droplets, ice crystals, pollen, etc.
In atmospheric science, air containing suspended solid or liquid particles is generally called an atmospheric aerosol.
In embodiments of the present invention, for ease of description, the liquid or solid particles suspended in the air are collectively referred to as haze.
These haze particles are mostly distributed within a certain range and affect the propagation of light in complicated ways, so that image data shot in a hazy environment is blurred and its colors are distorted.
Therefore, in the shooting phase, embodiments of the present invention can detect atmospheric information, i.e. data characterizing the atmospheric condition; this atmospheric information can be detected based on the characteristics of the laser signal and generally includes the following two parameters:
1. Focusing distance
In embodiments of the present invention, the focusing distance between the terminal device and the shooting subject can be computed from the laser signal by pulse ranging, interferometric ranging, triangulation, or similar methods.
Taking pulse ranging as an example, the characteristic information of the first sub-laser beam includes the time difference between its emission, its reflection by the shooting subject, and its reception; the focusing distance between the terminal device and the shooting subject can then be calculated as:
L = C·t/2;
where L is the focusing distance, C is the speed of light, and t is the time difference between laser emission and reception.
Of course, to improve the accuracy of the focusing distance, the time difference between laser emission and reception can be measured several times, and the focusing distance between the terminal device and the shooting subject computed from the mean value.
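As a minimal sketch (not part of the patent), the pulse-ranging calculation above, including the averaging of several time-of-flight measurements, might look as follows in Python; the function name and sample values are illustrative assumptions.

```python
# Pulse ranging: L = C * t / 2, averaged over several time-of-flight samples.

C = 299_792_458.0  # speed of light, m/s

def focusing_distance(round_trip_times_s):
    """Average several round-trip times (in seconds) and convert to distance.

    Dividing by 2 accounts for the laser travelling to the shooting
    subject and back.
    """
    t = sum(round_trip_times_s) / len(round_trip_times_s)
    return C * t / 2.0

# three samples around 20 ns -> subject roughly 3 m away
samples = [19.9e-9, 20.0e-9, 20.1e-9]
distance_m = focusing_distance(samples)
```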
2. Atmospheric scattering coefficient
The atmospheric scattering coefficient describes how strongly the various particles in the air scatter radiant flux.
Generally speaking, the scattering coefficient is inversely proportional to the fourth power of the incident wavelength; when the particle size is comparable to, or much larger than, the incident wavelength, the scattering coefficient depends only weakly on the wavelength.
In one example of an embodiment of the present invention, the characteristic information of the second sub-laser beam includes the total laser energy, the laser sub-energy collected at the sidewall of the laser ring-down optical cavity during oscillation, and the number of oscillations; the ratio between the laser sub-energy and a target parameter can then be calculated as the atmospheric scattering coefficient,
where the target parameter is the product of the number of oscillations and the total laser energy.
The sidewall of the laser ring-down cavity carries a sensor that collects and accumulates the laser sub-energy scattered onto the sidewall during each oscillation.
In a specific implementation, the atmospheric scattering coefficient can be calculated by the following formulas:
P_all = (2πR·L/A)·P_s
T_num = t·c/L
β = P_all/(T_num·P)
where P_all is the accumulated laser sub-energy collected by the sidewall of the ring-down cavity within the time t;
T_num is the number of oscillations of the second sub-laser beam inside the laser ring-down optical cavity;
β is the fraction of energy scattered by the second sub-laser beam in a single pass, i.e. the atmospheric scattering coefficient;
further, P is the total laser energy with which the second sub-laser beam enters the laser ring-down optical cavity, L is the (internal) length of the cavity, R is the (internal) radius of the cavity cross-section, t is the oscillation time after the second sub-laser beam enters the cavity, A is the area of the sensor on the cavity sidewall, c is the speed of light, π is the circular constant, and P_s is the energy detected by the sensor.
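Under the stated assumptions (a cylindrical ring-down cavity with a sidewall sensor of area A), the three formulas above can be sketched in Python as follows; the function and parameter names are illustrative, not from the patent.

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def scattering_coefficient(P_s, P, t, L, R, A):
    """Atmospheric scattering coefficient from ring-down cavity readings.

    P_s : energy detected by the sidewall sensor within time t (J)
    P   : total energy of the second sub-laser entering the cavity (J)
    t   : oscillation time inside the cavity (s)
    L   : internal length of the cavity (m)
    R   : internal radius of the cavity cross-section (m)
    A   : area of the sidewall sensor (m^2)
    """
    P_all = (2.0 * math.pi * R * L / A) * P_s  # scale sensor reading to the whole sidewall
    T_num = t * C_LIGHT / L                    # number of one-way passes in time t
    return P_all / (T_num * P)                 # fraction of energy scattered per pass
```

For plausible values (a centimeter-scale cavity and a microsecond ring-down time), the result is a small positive number, consistent with β being a single-pass energy fraction.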
Step 103: perform a shooting operation to obtain original image data.
In practical applications, the camera can be called to shoot: the optical image of the shooting subject (scene) generated by the camera lens is projected onto the surface of the image sensor, converted into an electrical signal, and, after A/D (analog-to-digital) conversion, becomes a digital image signal.
In one embodiment of the present invention, step 103 may comprise the following sub-steps:
Sub-step S11: focus according to the focusing distance;
Sub-step S12: when focusing succeeds, perform the shooting operation to obtain original image data.
In a specific implementation, a camera generally comprises a lens, a holder, an infrared cut-off filter (IR filter), an image sensor, a motor, a circuit board, and other components.
Although the structure of the lens differs between camera models, most lenses can be regarded as a convex lens: rays arriving off the axis of the convex lens are mostly refracted after passing through it and intersect at a point called the focus. The plane formed by the points of sharp imaging is generally called the focal plane, and the distance between the focal plane and the camera is the subject distance, i.e. the focusing distance between the terminal device (lens) and the shooting subject.
Subjects located on the focal plane are captured sharply by the camera, while the farther a subject is in front of or behind the focal plane, the more blurred its image.
For subjects at different distances from the lens, sharp imaging at a fixed position behind the lens requires focusing; intuitively, once the lens has been brought into focus, the subject can usually be captured sharply.
In a terminal device such as a mobile phone, the photosensitive element usually cannot be adjusted, so focusing can be realized by adjusting the position of the lens with a motor.
Taking a voice coil motor (VCM) as an example, it mainly consists of a coil, a magnet group, and spring plates. The lens module (including the lens, the outer filter, and other parts) is locked in the coil, and the coil is fixed in the magnet group by two spring plates, one above and one below. When the coil is energized it produces a magnetic field; this field interacts with the magnet group, the coil moves upward, and the lens module locked in the coil moves with it. When the power is cut, the coil returns under the elastic force of the spring plates. Focusing is thus realized by changing the position of the lens.
Step 104: dehaze the original image data according to the atmospheric information to obtain dehazed image data.
In embodiments of the present invention, the original image data can be dehazed in real time according to the state of the air, i.e. the influence of haze is removed from the original image data; the original image data after dehazing is image data free of haze, which may be called dehazed image data.
In one embodiment of the present invention, step 104 may comprise the following sub-steps:
Sub-step S21: calculate the atmospheric transmittance using the atmospheric scattering coefficient and the focusing distance.
The atmospheric transmittance defines the transfer function of the air medium: it describes the fraction of the light reflected by an object that survives scattering by airborne particles and reaches the terminal device or the eye, indicating how much of the light energy emitted by the object reaches the terminal device or the eye after atmospheric attenuation; it is a value greater than 0 and less than 1.
In a specific implementation, the atmospheric transmittance can be calculated by the following formula:
T = e^(−d·β)
where d is the focusing distance and β is the atmospheric scattering coefficient.
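As a one-line sketch of the formula above (the values used are illustrative):

```python
import math

def atmospheric_transmittance(d, beta):
    """T = e^(-d*beta): d is the focusing distance (m), beta the scattering coefficient (1/m)."""
    return math.exp(-d * beta)

# with no attenuation (beta = 0) all light arrives; the transmittance
# falls toward 0 as the distance or the haze density grows
t_clear = atmospheric_transmittance(10.0, 0.0)   # exactly 1.0
t_hazy = atmospheric_transmittance(100.0, 0.05)  # well below 1
```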
Sub-step S22: invert the original image data according to the atmospheric transmission model to obtain dehazed image data from which the influence of the atmospheric transmittance has been removed.
The atmospheric transmission model describes the optical principle by which the terminal device captures image data, or the eye observes an object, when particles are present in the air.
Based on the atmospheric transmission model, an image degradation model of the haze can be built; this model describes mathematically how haze acts on an originally haze-free image to produce a hazy image. Taking the original image data that has not been dehazed (i.e. the hazy image data) as the known quantity and substituting it into the degradation model, the best estimate of the dehazed image data can be solved for and taken as the dehazed image data.
It should be noted that, before performing the inversion, in order to reduce system resource consumption and speed up shooting, it can be judged whether the atmospheric transmittance is below a preset coefficient threshold; if so, the atmospheric condition is poor and sub-step S22 is performed; otherwise, the atmospheric condition is good and dehazing may be skipped.
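The patent does not write the atmospheric transmission model out explicitly. A common degradation model consistent with the description is the standard haze imaging equation I = J·T + A·(1 − T), where I is the observed value, J the haze-free value, A the atmospheric light intensity, and T the transmittance; inverting it gives J = (I − A)/T + A. The sketch below applies this inversion per channel value together with the threshold check described above; the threshold value and the lower clamp on T are illustrative assumptions.

```python
COEFF_THRESHOLD = 0.9  # preset coefficient threshold (assumed value)
T_MIN = 0.1            # lower clamp on T to avoid amplifying noise (assumed)

def invert_pixel(i, a, t):
    """Invert I = J*T + A*(1 - T) for one channel value i in [0, 255]."""
    t = max(t, T_MIN)
    j = (i - a) / t + a
    return min(max(j, 0.0), 255.0)

def dehaze(pixels, a, t):
    """Dehaze a flat sequence of channel values, but only when the
    transmittance is below the preset threshold (poor atmospheric
    condition); otherwise return the data unchanged."""
    if t >= COEFF_THRESHOLD:
        return list(pixels)
    return [invert_pixel(i, a, t) for i in pixels]
```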
In one example of the embodiment of the present invention, sub-step S22 may further include the following sub-steps:

Sub-step S221: calculate the atmospheric light intensity from the raw image data;

The atmospheric light intensity is an estimate of the fog at the point where the fog in the raw image data is densest; this value bounds the range of fog density in the raw image data. The densest fog point is in most cases located on or near the horizon of the raw image data, or in the sky at a small elevation angle.

In a specific implementation, each pixel of the raw image data consists of the three color channels R, G and B (red, green, blue), and x and y are the horizontal and vertical coordinates within the raw image data.
The dark-channel value of each pixel of the raw image data is calculated by the following formula:

J_dark(x, y) = min_{c ∈ {r, g, b}} ( min_{(x1, y1) ∈ Ω(x, y)} J_c(x1, y1) )

where J_c(x1, y1) is the value of color channel c at pixel (x1, y1) of the raw image data, and Ω(x, y) is a local region around pixel (x, y) of the raw image data.
This local region may be user-defined, for example a rectangle of 15*15 pixels, or it may be computed by a formula, e.g. with width Image_width/ratio and height Image_height/ratio, where Image_width and Image_height are the image width and image height of the raw image data and ratio is a constant.

Thus the dark-channel value J_dark(x, y) of a pixel (x, y) is the minimum over all color channels of all pixels in the local region around (x, y).

After the dark-channel values of all pixels of the raw image data have been calculated by the above formula, they together constitute a monochrome image, referred to as the dark-channel image data.

All pixels of the dark-channel image data are sorted in descending order of brightness and the top N% (N is a positive number, e.g. 0.1) are selected; then, among the pixels of the raw image data corresponding to the selected dark-channel pixels, the brightness value of the brightest pixel is taken as the atmospheric light intensity.
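The dark-channel and atmospheric-light estimation described above can be sketched with NumPy; the patch size default and the top-0.1% fraction are the example values from the text, and "brightness" is approximated by the channel mean, which the text does not define:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: for every pixel, the minimum value over the three
    color channels within a patch x patch local region (edge-padded)."""
    h, w, _ = img.shape
    mins = img.min(axis=2)                  # per-pixel min over R, G, B
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    dark = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    return dark

def atmospheric_light(img, dark, fraction=0.001):
    """Take the brightest `fraction` of dark-channel pixels and return
    the largest brightness among the corresponding raw pixels; the
    channel mean is used as 'brightness' here (an assumption)."""
    n = max(1, int(dark.size * fraction))
    idx = np.argsort(dark.ravel())[-n:]     # top-N% dark-channel pixels
    bright = img.reshape(-1, 3)[idx].mean(axis=1)
    return float(bright.max())
```

The double loop is written for clarity; a production implementation would use a sliding-window minimum filter instead.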
Sub-step S222: input the pixels of the raw image data, the atmospheric transmittance and the atmospheric light intensity into a preset atmospheric transmission model;

Sub-step S223: invert the atmospheric transmission model to obtain the pixels of the defogged image data.

In this example, the atmospheric transmission model can be expressed by the following formula:

J = I·t + A(1 - t)

where J is the raw image data, I is the defogged image data, t is the atmospheric transmittance, and A is the atmospheric light intensity.

This atmospheric transmission model comprises two parts: the first part, I·t, is the object's reflected light remaining after scattering by the particles suspended in the air; the second part, A(1 - t), is the ambient atmospheric light produced by the scattering of light by those suspended particles.

Combined with the definition t = e^(-dβ) of the atmospheric transmittance, it can be seen that the intensity I·t of the object's reflected light entering the terminal device decays with the focal distance d between the object and the terminal device: the farther the object, the weaker its reflected light and the larger the atmospheric-light term, which is why the scene appears white at infinity.
Therefore, it can carry out inverting by equation below:
I = J - A ( 1 - t ) t
In actual applications, after inverting, the picture format of fog dispersal view data and raw image data Picture format keeps consistent.
If J is Bayer picture element matrix, then after inverting, I is also Bayer picture element matrix, if J is Rgb pixel matrix, then, after inverting, I is also rgb pixel matrix.
As a example by Bayer picture element matrix, (x is y) that in Bayer picture element matrix is positioned at (x, y) position to J Putting the pixel at place, the pixel being finally inversed by by equation below is the pixel of co-located Bayer matrix I(x,y)。
I ( x , y ) = J ( x , y ) - A ( 1 - t ) t
Wherein, 1≤x≤width, 0≤y≤height, width, height are the figure of raw image data Image width degree, length.
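The per-pixel inversion above is a single elementwise expression; a minimal sketch follows, where clamping t away from zero is a practical guard added here, not something the patent specifies:

```python
import numpy as np

def invert_transmission(J, A, t, t_min=1e-3):
    """Invert J = I*t + A*(1 - t) for I, elementwise, so a Bayer or
    RGB pixel matrix keeps its layout after inversion.  The t_min
    clamp guards against division by a near-zero transmittance."""
    t = max(t, t_min)
    return (J - A * (1.0 - t)) / t
```

A quick round-trip check: fogging a known image with J = I·t + A(1 - t) and inverting recovers the original values.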
In one embodiment of the present invention, the method may further include the following step:

Step 105: perform parameter calibration and/or format conversion on the defogged image data to obtain target image data.

Raw image data is generally obtained from the photosensitive device of the camera and is usually a Bayer image, i.e. a RAW image; an image in this format generally cannot be displayed or saved directly. Therefore, the embodiment of the present invention may further perform parameter calibration and format conversion.
In a specific implementation, the parameter calibration may include at least one of the following:

dead-pixel detection, noise reduction, lens correction, gamma correction, white-balance correction, and color correction;

the format conversion may include at least one of the following:

converting Bayer image data to RGB image data, converting RGB image data to YUV image data, and compressing YUV image data into a JPEG image.
Of course, the above parameter calibrations and format conversions are merely examples; when implementing the embodiment of the present invention, other parameter calibrations and format conversions may be set according to the actual situation, and the embodiment of the present invention places no restriction on this. In addition, besides the above, those skilled in the art may also adopt other parameter calibrations and format conversions according to actual needs, and the embodiment of the present invention places no restriction on this either.
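For instance, the RGB-to-YUV step can be sketched with the standard BT.601 full-range matrix; the patent does not specify which YUV variant the device uses, so this choice is an assumption:

```python
import numpy as np

# BT.601 full-range RGB -> YUV matrix (an assumed variant; the text
# does not pin down the exact conversion used by the device).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.169, -0.331,  0.500],
                    [ 0.500, -0.419, -0.081]])

def rgb_to_yuv(rgb):
    """Convert an (..., 3) RGB array in [0, 1] to YUV, centering the
    chroma channels U and V on 0.5."""
    yuv = np.asarray(rgb, dtype=float) @ RGB2YUV.T
    yuv[..., 1:] += 0.5
    return yuv
```

With this matrix, white maps to (Y, U, V) ≈ (1.0, 0.5, 0.5) and black to (0.0, 0.5, 0.5), i.e. neutral chroma for gray pixels.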
In the photographing phase, the embodiment of the present invention detects atmospheric information such as the atmospheric scattering coefficient in real time by laser, and defogs the captured raw image data accordingly. Since the state of the atmosphere may change across regions and over time, measuring the atmospheric information in real time adapts to the current atmospheric conditions: the photo taken under those conditions is defogged based on atmospheric information that matches them. This improves the quality of the defogging, effectively eliminates the light changes introduced by fog that cause adjacent rays to mix, and thereby improves image sharpness; at the same time, it effectively eliminates the wavelength-dependent absorption of light introduced by fog, which changes the mixing proportions of light to different degrees, and thereby improves the color fidelity of the image.

It should be noted that the method embodiments are, for brevity, all expressed as a series of action combinations, but those skilled in the art should understand that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to Fig. 3, a structural block diagram of an embodiment of an image data processing apparatus of the present invention is shown, which may specifically include the following modules:

a characteristic information acquisition module 301, configured to acquire characteristic information of the laser emitted from and received by a laser sensor;

an atmospheric information detection module 302, configured to calculate current atmospheric information according to the characteristic information;

a shooting module 303, configured to perform a shooting operation on a photographed object to obtain raw image data;

a defogging processing module 304, configured to defog the raw image data according to the atmospheric information to obtain defogged image data.
In one embodiment of the present invention, the laser sensor has a beam splitter and a laser ring-down optical cavity; the laser ring-down optical cavity communicates with the air outside the terminal device, and the laser is split by the beam splitter into a first sub-laser and a second sub-laser;

wherein the first sub-laser is emitted into the air outside the terminal device, and the second sub-laser is guided into the laser ring-down optical cavity;

the characteristic information of the first sub-laser includes the time difference between emission and reception; the characteristic information of the second sub-laser includes the total laser energy, the sub-laser energy collected at the side wall of the ring-down optical cavity during oscillation, and the number of oscillations;

The atmospheric information detection module 302 may include the following sub-modules:

a focal distance calculation sub-module, configured to calculate the focal distance according to the time difference;

an atmospheric scattering coefficient calculation sub-module, configured to calculate the ratio between the sub-laser energy and a target parameter as the atmospheric scattering coefficient, the target parameter being the product of the number of oscillations and the total laser energy.
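The two sub-module computations can be sketched as follows. The speed of light and the round-trip factor of 1/2 in the distance formula are assumptions, since the text only says the focal distance is computed from the time difference:

```python
SPEED_OF_LIGHT = 3.0e8  # m/s

def focal_distance(time_diff_s):
    """Time-of-flight distance: the first sub-laser travels to the
    object and back, so d = c * dt / 2 (round trip assumed)."""
    return SPEED_OF_LIGHT * time_diff_s / 2.0

def scattering_coefficient(side_wall_energy, total_energy, oscillations):
    """beta = E_side / (n * E_total): the ratio between the sub-laser
    energy collected at the ring-down cavity side wall and the target
    parameter (oscillation count times total laser energy)."""
    return side_wall_energy / (oscillations * total_energy)
```

For example, a 1 microsecond round trip corresponds under these assumptions to a focal distance of 150 m.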
In one embodiment of the present invention, the shooting module 303 may include the following sub-modules:

a focusing sub-module, configured to focus according to the focal distance;

a raw image data acquisition sub-module, configured to, when focusing succeeds, perform the shooting operation to obtain the raw image data.
In one embodiment of the present invention, the defogging processing module 304 may include the following sub-modules:

an atmospheric transmittance calculation sub-module, configured to calculate the atmospheric transmittance using the atmospheric scattering coefficient and the focal distance;

a model inversion sub-module, configured to invert the atmospheric transmission model on the raw image data to obtain defogged image data from which the effect of the atmospheric transmittance has been removed.

In one embodiment of the present invention, the defogging processing module 304 may further include the following sub-module:

a coefficient judgment sub-module, configured to judge whether the atmospheric transmittance is less than a preset coefficient threshold, and if so, to invoke the model inversion sub-module.
In one example of the embodiment of the present invention, the model inversion sub-module may include the following units:

an atmospheric light intensity detection unit, configured to calculate the atmospheric light intensity from the raw image data;

a data input unit, configured to input the pixels of the raw image data, the atmospheric transmittance and the atmospheric light intensity into the preset atmospheric transmission model;

a pixel inversion unit, configured to invert the atmospheric transmission model to obtain the pixels of the defogged image data.

In one embodiment of the present invention, the apparatus may further include the following module:

an optimization processing module, configured to perform parameter calibration and/or format conversion on the defogged image data to obtain target image data.
Referring to Fig. 4, a structural block diagram of an embodiment of a terminal device of the present invention is shown. The terminal device may include a laser sensor 410, a camera 420 and an image processing assembly 430; wherein

the laser sensor 410 is configured to emit and receive laser;

the image processing assembly 430 may include the following modules:

a characteristic information acquisition module 431, configured to acquire characteristic information of the laser emitted from and received by the laser sensor 410;

an atmospheric information detection module 432, configured to calculate current atmospheric information according to the characteristic information;

a shooting module 433, configured to perform a shooting operation to obtain raw image data;

a defogging processing module 434, configured to defog the raw image data according to the atmospheric information to obtain defogged image data.
In one embodiment of the present invention, the laser sensor includes a laser emitter, a beam splitter, a laser receiver and a laser ring-down optical cavity; wherein

the laser emitter is configured to emit laser;

the beam splitter is configured to split the laser into a first sub-laser and a second sub-laser, the first sub-laser being emitted into the air outside the terminal device and the second sub-laser being guided into the laser ring-down optical cavity;

the laser receiver is configured to receive the first sub-laser;

the laser ring-down optical cavity communicates with the air outside the terminal device.
In one embodiment of the present invention, the characteristic information of the first sub-laser includes the time difference between emission and reception, and the characteristic information of the second sub-laser includes the total laser energy, the sub-laser energy collected at the side wall of the ring-down optical cavity during oscillation, and the number of oscillations;

the atmospheric information detection module 432 may include the following sub-modules:

a focal distance calculation sub-module, configured to calculate the focal distance according to the time difference;

an atmospheric scattering coefficient calculation sub-module, configured to calculate the ratio between the sub-laser energy and a target parameter as the atmospheric scattering coefficient, the target parameter being the product of the number of oscillations and the total laser energy.
In one embodiment of the present invention, the shooting module 433 may include the following sub-modules:

a focusing sub-module, configured to focus according to the focal distance;

a raw image data acquisition sub-module, configured to, when focusing succeeds, perform the shooting operation to obtain the raw image data.

In one embodiment of the present invention, the defogging processing module 434 may include the following sub-modules:

an atmospheric transmittance calculation sub-module, configured to calculate the atmospheric transmittance using the atmospheric scattering coefficient and the focal distance;

a model inversion sub-module, configured to invert the atmospheric transmission model on the raw image data to obtain defogged image data from which the effect of the atmospheric transmittance has been removed.

In one embodiment of the present invention, the defogging processing module 434 may further include the following sub-module:

a coefficient judgment sub-module, configured to judge whether the atmospheric transmittance is less than a preset coefficient threshold, and if so, to invoke the model inversion sub-module.

In one example of the embodiment of the present invention, the model inversion sub-module may include the following units:

an atmospheric light intensity detection unit, configured to calculate the atmospheric light intensity from the raw image data;

a data input unit, configured to input the pixels of the raw image data, the atmospheric transmittance and the atmospheric light intensity into the preset atmospheric transmission model;

a pixel inversion unit, configured to invert the atmospheric transmission model to obtain the pixels of the defogged image data.
In one embodiment of the present invention, the image processing assembly 430 may further include the following module:

an optimization processing module, configured to perform parameter calibration and/or format conversion on the defogged image data to obtain target image data.

As for the apparatus and terminal device embodiments, since they are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.

The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the identical or similar parts the embodiments may refer to one another.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk memory, CD-ROM and optical memory) containing computer-usable program code.

The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor produce means for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing terminal device to work in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, such that a series of operational steps are performed on the computer or other programmable terminal device to produce a computer-implemented process, so that the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make other changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present invention.

Finally, it should also be noted that in this document relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or terminal device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article or terminal device that includes the element.

The image data processing method, image data processing apparatus and terminal device provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (12)

1. A terminal device, characterized in that the terminal device includes a laser sensor, a camera and an image processing assembly; wherein

the laser sensor is configured to emit and receive laser;

the image processing assembly includes:

a characteristic information acquisition module, configured to acquire characteristic information of the laser emitted from and received by the laser sensor;

an atmospheric information detection module, configured to calculate current atmospheric information according to the characteristic information;

a shooting module, configured to perform a shooting operation to obtain raw image data; and

a defogging processing module, configured to defog the raw image data according to the atmospheric information to obtain defogged image data.
2. The terminal device according to claim 1, characterized in that the laser sensor includes a laser emitter, a beam splitter, a laser receiver and a laser ring-down optical cavity; wherein

the laser emitter is configured to emit laser;

the beam splitter is configured to split the laser into a first sub-laser and a second sub-laser, the first sub-laser being emitted into the air outside the terminal device and the second sub-laser being guided into the laser ring-down optical cavity;

the laser receiver is configured to receive the first sub-laser;

the laser ring-down optical cavity communicates with the air outside the terminal device.
3. A method for processing image data of a terminal device, characterized by including:

acquiring characteristic information of the laser emitted from and received by a laser sensor;

calculating current atmospheric information according to the characteristic information;

performing a shooting operation to obtain raw image data; and

defogging the raw image data according to the atmospheric information to obtain defogged image data.
4. The method according to claim 3, characterized in that the laser sensor has a beam splitter and a laser ring-down optical cavity, the laser ring-down optical cavity communicating with the air outside the terminal device and the laser being split by the beam splitter into a first sub-laser and a second sub-laser;

wherein the first sub-laser is emitted into the air outside the terminal device and the second sub-laser is guided into the laser ring-down optical cavity;

the characteristic information of the first sub-laser includes the time difference between emission and reception; the characteristic information of the second sub-laser includes the total laser energy, the sub-laser energy collected at the side wall of the ring-down optical cavity during oscillation, and the number of oscillations;

the step of calculating current atmospheric information according to the characteristic information includes:

calculating a focal distance according to the time difference; and

calculating the ratio between the sub-laser energy and a target parameter as an atmospheric scattering coefficient, the target parameter being the product of the number of oscillations and the total laser energy.
5. The method according to claim 4, characterized in that the step of performing a shooting operation to obtain raw image data includes:

focusing according to the focal distance; and

when focusing succeeds, performing the shooting operation to obtain the raw image data.

6. The method according to claim 4, characterized in that the step of defogging the raw image data according to the atmospheric information to obtain defogged image data includes:

calculating an atmospheric transmittance using the atmospheric scattering coefficient and the focal distance; and

inverting an atmospheric transmission model on the raw image data to obtain defogged image data from which the effect of the atmospheric transmittance has been removed.
7. The method according to claim 6, characterized in that the step of defogging the raw image data according to the atmospheric scattering coefficient and the focal distance to obtain defogged image data further includes:

judging whether the atmospheric transmittance is less than a preset coefficient threshold; and

if so, performing the step of inverting the atmospheric transmission model on the raw image data to obtain defogged image data from which the effect of the atmospheric transmittance has been removed.

8. The method according to claim 6, characterized in that the step of inverting the atmospheric transmission model on the raw image data to obtain defogged image data from which the effect of the atmospheric transmittance has been removed includes:

calculating an atmospheric light intensity from the raw image data;

inputting the pixels of the raw image data, the atmospheric transmittance and the atmospheric light intensity into a preset atmospheric transmission model; and

inverting the atmospheric transmission model to obtain the pixels of the defogged image data.
9. The method according to any one of claims 3 to 8, characterized by further including:

performing parameter calibration and/or format conversion on the defogged image data to obtain target image data.

10. An apparatus for processing image data, characterized by including:

a characteristic information acquisition module, configured to acquire characteristic information of the laser emitted from and received by a laser sensor;

an atmospheric information detection module, configured to calculate current atmospheric information according to the characteristic information;

a shooting module, configured to perform a shooting operation to obtain raw image data; and

a defogging processing module, configured to defog the raw image data according to the atmospheric information to obtain defogged image data.
11. The apparatus according to claim 10, characterized in that the laser sensor has a beam splitter and a laser ring-down optical cavity, the laser ring-down optical cavity communicating with the air outside the terminal device and the laser being split by the beam splitter into a first sub-laser and a second sub-laser;

wherein the first sub-laser is emitted into the air outside the terminal device and the second sub-laser is guided into the laser ring-down optical cavity;

the characteristic information of the first sub-laser includes the time difference between emission and reception; the characteristic information of the second sub-laser includes the total laser energy, the sub-laser energy collected at the side wall of the ring-down optical cavity during oscillation, and the number of oscillations;

the atmospheric information detection module includes:

a focal distance calculation sub-module, configured to calculate a focal distance according to the time difference; and

an atmospheric scattering coefficient calculation sub-module, configured to calculate the ratio between the sub-laser energy and a target parameter as an atmospheric scattering coefficient, the target parameter being the product of the number of oscillations and the total laser energy.
12. The apparatus according to claim 11, characterized in that the defogging processing module includes:

an atmospheric transmittance calculation sub-module, configured to calculate an atmospheric transmittance using the atmospheric scattering coefficient and the focal distance; and

a model inversion sub-module, configured to invert the atmospheric transmission model on the raw image data to obtain defogged image data from which the effect of the atmospheric transmittance has been removed.
CN201610188290.6A 2016-03-29 2016-03-29 A kind of processing method of image data, device and terminal device Active CN105894466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610188290.6A CN105894466B (en) 2016-03-29 2016-03-29 A kind of processing method of image data, device and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610188290.6A CN105894466B (en) 2016-03-29 2016-03-29 A kind of processing method of image data, device and terminal device

Publications (2)

Publication Number Publication Date
CN105894466A true CN105894466A (en) 2016-08-24
CN105894466B CN105894466B (en) 2019-01-11

Family

ID=57014634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610188290.6A Active CN105894466B (en) 2016-03-29 2016-03-29 A kind of processing method of image data, device and terminal device

Country Status (1)

Country Link
CN (1) CN105894466B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107277369A (en) * 2017-07-27 2017-10-20 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and computer equipment
CN110052095A (en) * 2019-04-19 2019-07-26 济南大学 A kind of intelligent integrated control system of bag filter
CN112672006A (en) * 2020-11-17 2021-04-16 钟能俊 Automatic deicing system of camera


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436643A (en) * 2011-11-02 2012-05-02 浙江大学 Image defogging method facing to atmospheric scattering proximity effect
CN102930514A (en) * 2012-09-27 2013-02-13 西安电子科技大学 Rapid image defogging method based on atmospheric physical scattering model
CN103440629A (en) * 2013-08-29 2013-12-11 浙江理工大学 Digital image processing method of video extensometer with automatic tracking laser marker
CN103712914A (en) * 2013-12-25 2014-04-09 广州禾信分析仪器有限公司 Laser cavity ring-down spectrometer for simultaneous detection of aerosol extinction and scattering coefficients
CN104252698A (en) * 2014-06-25 2014-12-31 西南科技大学 Semi-inverse method-based rapid single image dehazing algorithm
CN104809707A (en) * 2015-04-28 2015-07-29 西南科技大学 Method for estimating visibility of single fog-degraded image
CN104794697A (en) * 2015-05-05 2015-07-22 哈尔滨工程大学 Dark channel prior based image defogging method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
E. ULLAH et al.: "Single image haze removal using improved dark channel prior", Modelling, Identification & Control *
WU DI, ZHU QINGSONG: "Recent progress in image dehazing", Acta Automatica Sinica *
YANG YAN et al.: "Adaptive single-image defogging algorithm based on guided filtering", Computer Engineering *
XU LIHONG et al.: "Dark channel prior defogging algorithm combined with image segmentation", Optoelectronic Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107277369A (en) * 2017-07-27 2017-10-20 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and computer equipment
CN107277369B (en) * 2017-07-27 2019-08-16 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and computer equipment
CN110052095A (en) * 2019-04-19 2019-07-26 济南大学 A kind of intelligent integrated control system of bag filter
CN110052095B (en) * 2019-04-19 2021-10-22 济南大学 Intelligent integrated control system of bag-type dust collector
CN112672006A (en) * 2020-11-17 2021-04-16 钟能俊 Automatic deicing system of camera

Also Published As

Publication number Publication date
CN105894466B (en) 2019-01-11

Similar Documents

Publication Publication Date Title
EP3552386B1 (en) Systems and methods for video replaying
CN102014251B (en) Image processing apparatus and image processing method
EP3134850B1 (en) Method for controlling a camera based on processing an image captured by other camera
CN109167924B (en) Video imaging method, system, device and storage medium based on hybrid camera
CN104833424B (en) For the system and method for the polarization for measuring the light in image
US9438806B2 (en) Photographing apparatus and photographing method for displaying combined avatar and map information related to a subject
CN108370438A (en) The depth camera component of range gating
CN1701595B (en) Image pickup processing method and image pickup apparatus
CN106918331A (en) Camera model, measurement subsystem and measuring system
CN111246106B (en) Image processing method, electronic device, and computer-readable storage medium
CN105809647A (en) Automatic defogging photographing method, device and equipment
CN109348088A (en) Image denoising method, device, electronic equipment and computer readable storage medium
CN105894466A (en) Image data processing method and apparatus and terminal device
CN108351199A (en) Information processing unit, information processing method and program
CN105190229A (en) Three-dimensional shape measurement device, three-dimensional shape measurement method, and three-dimensional shape measurement program
KR102337209B1 (en) Method for notifying environmental context information, electronic apparatus and storage medium
CN103152522A (en) Method and terminal for adjusting shooting position
CN105892638A (en) Virtual reality interaction method, device and system
CN107277299A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN108986062A (en) Image processing method and device, electronic device, storage medium and computer equipment
CN111866388A (en) Multiple exposure shooting method, equipment and computer readable storage medium
US20190222777A1 (en) Near-infrared video compositing
US20200358955A1 (en) Image processing apparatus, image processing method, and recording medium
CN112785678B (en) Sunlight analysis method and system based on three-dimensional simulation
CN107454319A (en) Image processing method, device, mobile terminal and computer-readable recording medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant